
Indian Institute of Technology Guwahati


2nd Edition April ‘11 - March ’12

See inside: Dimension, PHP Security







Message from the Head of Department


I am pleased to learn about the publication of the second edition of Anantha, the magazine of MATRIX. I am hopeful that the publication of this edition will play an important role in the dissemination of information on various relevant academic activities within and outside the department. I invite the faculty, students and alumni to send their views, opinions and write-ups of academic nature for the purpose of publication in the magazine. I hope their contributions would be of great help to the readers. I congratulate the editorial team for their great effort towards bringing out the second edition, with the expectation that this magazine will maintain a high standard of professionalism commensurate with the academic goals of the department.

Dr. Rajen Kumar Sinha
Head of Department
Department of Mathematics


About Us - MATRIX

MATRIX is the student association of the Department of Mathematics. Since its inception in 2008, MATRIX has actively conducted various activities for the intellectual development and technical nourishment of its members. MATRIX has organised various lectures and workshops in fields like Object Oriented Programming, Web Designing, Stock Index Calculation etc. MATRIX plays a key role in fostering a friendly environment for both student-faculty and student-student relationships to grow. It undertakes social initiatives like organising the department picnic, freshers and faculty-student interaction sessions. MATRIX recognises the importance of a healthy discussion in the development of students and therefore has its own discussion forum where faculty and students can share knowledge and also discuss important issues related to current happenings around the globe.


MATRIX aims at making a mark for the Department of Mathematics, IIT Guwahati in the industry. Dimension, a technical confluence, was started last year with the vision to provide a platform for students to interact with the mathematical fraternity around the globe. This year, with the successful conduct of the second edition of Dimension, we are definitely a step closer to that goal. This year Dimension hosted 5 online events, ranging from Cryptography to Hacking and from Math Modelling to Trading and Arbitrage. The 2 days of Dimension witnessed an eminent Lecture Series with speakers from financial institutions like HDFC Life and ICICI Securities and from technical and research institutions like TCS Innovation Labs. MATRIX strives to promote student interaction between academia and industry. It also tries to link with other mathematics and finance organizations in various parts of the world.

Message from the President

Here at MATRIX, we continue our onward march towards professional and intellectual development, paving the road to the future. The keywords remain the same: opportunity, challenge, innovation, support and development, but the efforts have increased manifold. The highly qualified and creative professors of our department, with their Midas touch, produce winners. We are committed to providing the students with ample opportunities to explore different ideas and fields of Mathematics and Computing. MATRIX values the fact that college is the most important phase of a person's life, where he/she learns to work in a team, take responsibility, make critical decisions and, most importantly, manage time. A major step towards achieving our objectives was Dimension, a technical confluence. Dimension consisted of 5 online events and a lecture series which covered the industrial applications of Mathematical Finance, Computer Science and social entrepreneurship.

The past year has been an eventful one. From tough courses and labs to fun-filled parties, we have seen it all. Among other things like the department picnic, we conducted Dimension, which was a great success. The highlight of the year was of course the placement session, the details of which are mentioned in this magazine. Though not very old, we have been able to establish ourselves as one of the best, and the credit goes to the undying spirit of the students. I would like to mention the tireless work of the previous and present executive councils of MATRIX for bringing it where it is today. Last but not the least, I would like to thank our faculty advisor Dr. N Selvaraju and the chairman Dr. R. K. Sinha for their support and guidance, which have been essential in the growth of MATRIX through the years.

Paranjay Srivastava
President, MATRIX

Message from the Editor

I take immense pleasure in unveiling before you the second edition of ANANTHA, the annual magazine of the Department of Mathematics. This issue of ANANTHA brings to you some exceptional articles. The cover article of the magazine is a research article on multifractal temporally weighted detrended fluctuation analysis of Arctic sea ice data, which has been an issue of great interest recently. It also includes interesting articles about the history of infinity and its significance in mathematics, the appearance of Fibonacci numbers, the Golden Ratio and Phi in nature, the Two Envelopes Paradox and the Prisoner's Dilemma. Continuing the trend of presenting a technological extravaganza, the

magazine includes articles on the PageRank Algorithm, PHP security and Chomp, a strategic two-player game. The aim of this magazine is to present some interesting and enlightening facts and discoveries from the fields of Mathematics, Computer Science and Finance, and surely this issue of ANANTHA is a step forward in this direction. I am extremely grateful to the HoD, Dr. R. K. Sinha, and the Faculty Advisor, Dr. N Selvaraju, who have always encouraged us by providing their invaluable suggestions. I am elated to acknowledge the invaluable contributions of all the members of the Publication Committee in putting together this magazine in its present form. I would also like to express my profound sense of gratitude to the President of MATRIX, Paranjay Srivastava, for being constantly supportive throughout the session and for his efforts in helping with this magazine. In closing, I think we have made something you can cherish and respect, so dive into the world of ANANTHA.

Infinity is just the beginning...

Shagun Rawat
Publication Secretary, MATRIX




A Walk With the History of Infinity

Before starting with infinity, let me give a faulty proof of a statement, which may highlight why one should look at infinity in a different way. Statement: There exists an infinite decreasing sequence of natural numbers. Proof: Assume for the sake of contradiction that the longest decreasing sequence of natural numbers is finite. Let S = {a1, a2, ..., an} be such a sequence. Then choose some a0 larger than max ai, and note that S' = {a0, a1, a2, ..., an} forms a decreasing sequence of length n+1. This contradicts the maximality of S, and hence completes the proof. Clearly the statement is wrong. But what is wrong in the above proof? The error lies in the first line: one is not allowed to assume that a longest decreasing sequence exists. The Well-Ordering Principle only guarantees that every nonempty set of naturals has a least element; it does not guarantee a longest decreasing sequence. In fact, in this case a longest decreasing sequence does not exist. Moral of the story: be careful when dealing with infinity! The history of infinity starts with a misconception of Zeno of Elea (c. 490-425 BC). He stated that there is actually no motion, because to get anywhere you first have to get halfway,


and before that you have to get a quarter of the way, etc., i.e. travel through an infinite number of steps. From this Zeno concluded that motion is impossible. Nowadays, a satisfactory answer to Zeno's paradox can be established through the conception of infinitesimal processes, or limits, which was not available in Zeno's time. Galileo in 1638 noticed a paradox about infinity. He observed that the natural numbers N are as numerous as the set of their perfect squares S = {1, 4, 9, ...} by a one-to-one correspondence, which produces two contradictory statements: 1) While some natural numbers are perfect squares, some are clearly not. Hence the set N must be more numerous than the set S, or |N| > |S|. 2) Since for every perfect square there is exactly one natural that is its square root, and for every natural there is exactly one perfect square, it follows that S and N are equinumerous, or |N| = |S|. In the language of cardinality (the number of elements in a set) there can be two propositions: 1) that all infinite sets are equal in cardinality, and

2) that if one set can be obtained by deleting members of another, then they have unequal cardinalities. The latter verdict can be paraphrased thus: some infinite sets have a larger cardinality than other infinite sets, or not all infinite sets are equal in cardinality. Therefore, these two verdicts of intuition directly contradict one another and cannot both be true, as we have seen from the last example. Galileo's paradox is paradoxical only in the weak sense: it violates our intuitions. It is not a contradiction; we simply expect infinite sets to behave as finite sets do. The later innovations, due to Georg Cantor (who worked on this during 1870-95), are set theory itself, the theory of infinite sets, and the modern concept of infinite cardinality. The key to this solution is simply to define equal cardinality through one-to-one correspondence, and then to show that these sets can be put into one-to-one correspondence with one another. Similarly, we can prove that some infinite sets have a larger cardinality than others by showing that they cannot be put into one-to-one correspondence. Cantor's theory faced tremendous opposition in the late 19th century, from mathematicians as well as from philosophers and theologians. Not only was it denied and disbelieved; it was hated. In particular, Kronecker (one of Cantor's teachers) opposed Cantor's ideas and blocked his career. For the sake of later discussion, let us say that a set which can be put into one-to-one correspondence with at least one of its proper subsets is self-nesting. Charles Peirce in 1885, and Richard Dedekind in 1888, proposed to define infinity through self-nesting. According to this proposal, we don't know that infinite sets are self-nesting because of some proof; we know it because infinite sets are defined as those which are self-nesting. It's not hard to prove that all infinite sets are, in fact, self-nesting. And we already knew that only infinite sets are self-nesting, or that no finite sets have this property. We might cautiously generalize that all infinite sets have the same cardinality. But Cantor found an elegant proof that the power set of any set, finite or infinite, possesses a greater cardinality than the original set; this important result is simply called Cantor's Theorem. Much of the time we have difficulty visualizing infinity. Descartes asks us to imagine, that is, to visualize, a 1,000-sided regular polygon. Can you do it? Try it right now. Chances are you are either visualizing something like a 20- or 30-sided polygon and pretending it has 1,000 sides, or you are visualizing a circle and pretending the sides are too small to see with your mind's eye. We know exactly what such a polygon is; we can even compute its interior angles and, for a given edge, its area and perimeter. But we cannot visualize it. One reason I like Descartes' example is that it is finite. Philosophers who think the infinite utterly beyond human understanding often fail to notice that their arguments, once made specific, apply to very large finite magnitudes as well. We cannot visualize infinitely many cherries in a tree, but neither can we visualize a billion. Does that disqualify us from using billions intelligibly and accurately? Around 1900, the German mathematician David Hilbert


proposed a hypothetical scenario that keenly illustrates Cantor's counter-intuitive results on infinite sets, known as the paradox of the Grand Hotel (or Hotel Infinity): it is possible to accommodate countably infinitely many new guests in a hotel with infinitely many rooms, all of which are already occupied by a guest. Another paradox, discovered by Bertrand Russell in 1901 and known as Russell's paradox, showed that the naïve set theory created by Georg Cantor leads to a contradiction. The non-constructive nature of formal proofs relying on the Axiom of Choice leads to the Banach-Tarski paradox, which shows that it is possible to decompose a 3-dimensional unit ball into a finite number of disjoint pieces and then reassemble the pieces to form two unit balls. Cantor's basic approach in developing his theory of sets was to systematically relate the cardinality of various infinite number sets (in particular, the sets of integer, rational and real numbers) to that of the naturals. He started with the convention that the cardinality of the naturals equals ℵ0, the first transfinite cardinal, and then showed the hierarchy of cardinal numbers: {1, 2, 3, …; ℵ0, ℵ1, ℵ2, …}.

This led to an extended transfinite arithmetic and also to the first conjecture of the continuum hypothesis. Continuum Hypothesis (CH): there does not exist a set S satisfying |N| < |S| < |R|. In other words, the continuum has greater cardinality than the naturals; but is |R| the next cardinal after |N|? Cantor stumbled on this question but failed to resolve it. It was finally answered many years later, in a way that Cantor never would have imagined:

It is neither provable nor disprovable, which a mathematician could never have expected at first. Kurt Gödel (1940) showed that adding CH to the usual axioms of set theory does not produce a contradiction. Paul Cohen (1963) showed that adding not-CH to the usual axioms of set theory does not produce a contradiction. In other words, CH is independent of the Zermelo-Fraenkel axiomatic set theory with the Axiom of Choice (ZFC).


Eventually Cantor's ideas won out and became part of mainstream mathematics. David Hilbert, the greatest mathematician of the early 20th century, said in 1926 that "No one can expel us from the paradise Cantor has created." Here are a few quotes on INFINITY.

Alcohol gives you infinite patience for stupidity. ---Sammy Davis, Jr.

A point is not part of a line. The smallest natural point is larger than all mathematical points, and this is proved because the natural point has continuity, and anything that is continuous is infinitely divisible; but the mathematical point is indivisible because it has no size. ---Leonardo da Vinci

A stupid man's report of what a clever man says can never be accurate, because he unconsciously translates what he hears into something that he can understand. ---Bertrand Russell

Pratibhamoy Das
Research Scholar


This article covers some basic concepts related to PHP security which should be kept in mind while writing PHP code.

Error reporting: PHP's error reporting features can help PHP developers identify and locate mistakes. But the detailed information that PHP provides can also be displayed to a malicious attacker, which is not desirable, as the attacker can gather a lot of confidential information from it. So, try to define your own function (as below) to handle errors. This function (my_error_handler) does not echo the PHP errors but just logs them to the file mentioned, so that only the PHP developers can see the errors and nobody else.


PHP Security

<?php
set_error_handler('my_error_handler');

function my_error_handler($number, $string, $file, $line, $context)
{
    $error  = "=========\nPHP ERROR\n=========\n";
    $error .= "Number: [$number]\n";
    $error .= "String: [$string]\n";
    $error .= "File: [$file]\n";
    $error .= "Line: [$line]\n";
    $error .= "Context:\n" . print_r($context, TRUE) . "\n\n";
    error_log($error, 3, '/path/to/error_log');
}
?>

Escaping output to the user: Say you have taken some data from the user and saved it in your database. The data can be anything; for example, you are saving user comments in the database. While showing the data again to the user, make a habit of escaping the data using the PHP function htmlentities(), so that the special characters entered by the user and saved in the database are not interpreted by the browser.

<?php
$data = "<p>hi</p>";
$data = htmlentities($data, ENT_QUOTES, 'UTF-8');
echo "{$data}";
// Now the output will be: {<p>hi</p>}

// If we hadn't used the htmlentities() function
// then the output would have been: {hi}
?>

The best way to use htmlentities() is to specify its two optional arguments: the quote style (second argument) and the character set (third argument). The quote style should always be ENT_QUOTES in order for the escaping to be most exhaustive, and the character set should match the character set indicated in the Content-Type header that your application includes in each response.

Escaping data to be entered into the database: For MySQL users, the best escaping function is mysql_real_escape_string(). If there is no native escaping function for your database, addslashes() can be used as a last resort.

<?php
$username = mysql_real_escape_string($username);
$sql = "SELECT * FROM profile WHERE username = '{$username}'";
$result = mysql_query($sql);
?>

If we don't use the mysql_real_escape_string() function and the variable username contains quotes, then the query would give an error and give the attacker an opportunity to gather information about the database. So, basically, this method provides security against SQL injection.



Security from session hijacking: The most common session attack is session hijacking. This refers to any method that an attacker can use to access another user's session. The first step for any attacker is to obtain a valid session identifier, and therefore the secrecy of the session identifier is paramount. For defense against session hijacking the following technique can be used:

<?php
session_start();

if (isset($_SESSION['HTTP_USER_AGENT'])) {
    if ($_SESSION['HTTP_USER_AGENT'] != md5($_SERVER['HTTP_USER_AGENT'])) {
        /* Prompt for password. */
        exit;
    }
} else {
    $_SESSION['HTTP_USER_AGENT'] = md5($_SERVER['HTTP_USER_AGENT']);
}
?>

Piyush Bengani
B.Tech. 1st year

Internship Experience

An Experience that Remains

It was in March 2011 when I got the offer for the Amazon internship. With my thoughts juggling between the lure of a foreign internship like MITACS and the prospect of associating with Amazon, and having already had a taste of a research internship after my sophomore year, I chose the latter.

The beginning: I worked as an intern in the Amazon Development Center at Hyderabad for about two and a half months. The internship started with preliminaries like an introduction to the various cultures and work ethics of the company, and the gifting of cool company t-shirts and tumblers!! I was then introduced to my team named “Customer Returns” which dealt with refunding and exchanging customer orders. After a week or so of induction sessions on the various building tools, frameworks etc. that are used at Amazon, my mentor introduced me to the project I was to work on.

The project: My team sent various kinds of emails to the customers at various stages of the return processing. The email processing was being handled by various daemons running separately, and the email templates that were being populated by these daemons were not customizable without deployment. The project was to integrate the various daemons into a single emailing service and move the templates to the Amazon Cloud, which would make them easily customizable according to the various marketplaces that Amazon currently serves. After some grueling research into what exactly the current daemons did, I realized that these were in every sense true to their name “daemons”!! My team gave me full liberty in choosing any angle to approach my project from. I talked to various other teams, especially the Amazon Cloud team, to decide on the exact service to store the email templates in from the many cloud storage services that were available.




By the end of the internship, I was able to successfully implement the new service for receiving notification about the kind of email to be sent, processing the necessary information, fetching the appropriate template from the cloud, populating the required data and finally sending the email.

Experience: The experience of the internship, if one was to describe it in a single word, was “fulfilling”. Before going for the internship, the biggest conundrum I was facing was the decision to opt for a job, an MBA or an MS. I believe that one's 3rd year internship should be chosen according to what one plans to do after graduation. Being slightly more inclined towards a corporate future, interacting with various people from eclectic backgrounds at Amazon gave me a clearer picture. Where I was iffy about my after-graduation plans before the internship, I was definitely certain by the end of it. Companies like Amazon offer you a very open culture where anyone can talk to anyone. I met people who had joined there immediately after graduation, or after an MS or PhD, and some who had many, many years of experience. Frequent team lunches and dinners, and activities like foosball etc., provide another way to refresh and interact with people from different backgrounds in a whole new way. I got a whole new insight into how one should decide about one's career and various other important aspects of life. The independence I got as an intern for my project was a novel experience in itself. It really taught me to trust my instinct,




to weigh all the various pros and cons of the abundant technology and methods that are available to complete one single task, to come up with techniques to solve any problem at hand, to convince others why what you think is right and, most importantly, to process and absorb what everyone in the team says and find the optimum solution considering all possibilities and to everyone's satisfaction!!

Hyderabad: After a demanding 6th semester, Hyderabad came as a refreshing and rejuvenating respite. It's a city which


caters to umpteen cultures, and wonderfully so. The people, the culture, everything fills you with vigor and energy. I discovered the wonderful city with friends and my team. My team, most of them Hyderabadis, proved to be wonderful guides to the old sites like Char Minar and Golconda Fort. The history of the city is breathtakingly beautiful and engrossing. Necklace Road, Paradise ki Biryani, Karachi Bakery's biscuits, Birla Mandir, Ramoji Film City etc. are some of the few out of the myriad wonderful things of Hyderabad one should not miss. A risky decision against a lucrative foreign internship. An enriching experience. And a summer totally worth it!

Nikita Garg
B.Tech. 4th year


Trends and Multi-Fractality in Arctic Sea Ice

The modulation of the atmosphere/ocean heat flux, considered a bellwether of climate change, can be attributed to the Earth's polar oceans, since their surfaces are covered by a thin (several meters) mosaic of high-albedo sea ice floes. As a consequence, sea ice is considered to be a more sensitive component of the cryosphere to perturbations and feedbacks, particularly the ice-albedo feedback which has driven large-scale climate events over Earth history, than the massive meteoric ice sheets that are several kilometers thick. The retreat of the Arctic sea ice coverage during recent decades (Figs. 1 and 2) has captured substantial interest. The fundamental question concerning the nature of the decay in ice coverage is whether it is a trend associated with greenhouse forcing, or a fluctuation in a quantitative record that is short (~ 30 years) relative to the dynamics of the cryosphere on climatic epochs (> 10^6 yrs.).

Figure 1. The Sea Ice Concentration as on September 15, 2001 and on September 15, 2007. The Sea Ice Extent was observed to be at its minimum during September 2007. This image is retrieved from The Cryosphere Today, UIUC.

That a sufficiently large increase in greenhouse gas concentration will drive decay in the ice cover is indicated by both past climate data and basic physical arguments.


However, Tietsche et al., in the article “Recovery mechanisms of Arctic summer sea ice” (2011), numerically prescribed ice-free summer states at various times during the projection of 21st century climates and found that ice extent typically recovered within several years. Whilst such rapid response times can be captured within the framework of relatively simple theory, both internal and external forcings and their intrinsic time scales manifest themselves in large-scale observations of the geophysical state of the system. Because we cannot a priori exclude the observed decline in the ice cover as being an intrinsic decadal oscillation or a non-stationary influence in the climate system, we used the finest temporal resolution in the observed record to examine the action of multiple scales. The fingerprints of the noisy dynamics


of the system on time scales longer than the seasonal record may reside in that record itself. Most observational studies of the satellite records of ice coverage extrapolate the annual or monthly means in time. Whilst the observed declines over this troika of decades, particularly the last decade, are striking, our goal here is to begin a systematic effort to distinguish between long-term correlations and trends in this finite record. In so doing we examine whether there exists a multiplicity of persistent scales in the data that can provide a basis for examining cause and effect in the geophysical-scale observables of the system.

Figure 2. Equivalent Ice Extent (EIE) during the satellite era (blue), shown relative to the mean (red) with the seasonal cycle removed. EIE, which differs from traditional ice extent or ice area, is defined as the total surface area, including land, north of the zonal–mean ice edge latitude, and thus is proportional to the sine of the ice edge latitude. EIE was defined by Eisenman in an article “Geographic muting of changes in the Arctic sea ice cover” (2010), to deal with the geometric muting of ice area associated with the seasonal bias of the influence of the Arctic basin land mass boundaries.

The basic approach of relevance is the multi-fractal generalization of Detrended Fluctuation Analysis (DFA) aptly called Multi-fractal Detrended Fluctuation Analysis (MF-DFA). In the last decade this



approach has been developed in many directions, from studying extreme events with nonlinear long-term memory, to examining the influence of additive noise on long-term correlations. We used a new extension of this methodology called Multifractal Temporally Weighted Detrended Fluctuation Analysis (MF-TWDFA), developed by Zhou and Leung, which exploits the intuition that in any time series points closer in time are more likely to be related than distant points, and can provide a rather clearer signature of long time scales in the fluctuation function and its moments. This is expressed by the application of weighted moving windows, in which points nearer each other are weighted more than those farther away, to determine the function used to fit the time series profile (Figs. 3 and 4), the running sum of the raw data. This offers several advantages over the commonly used MF-DFA. Firstly, in MF-DFA the profile of the time series is fit using discontinuous polynomials, which can introduce errors in the determination of crossover times for new scalings, with particular relevance at long time scales. Secondly, for time series of length N, MF-DFA is typically informative only up to N/4 whereas MF-TWDFA can be carried out to N/2. Finally, the generalized fluctuation functions Fq(s) for all moments q as a function of time scale s are substantially smoother for all s, and this is markedly so for large values. This facilitates clear extraction of crossover times from one scaling to another.

Figure 3. The profile for the albedo after removing the seasonal cycle from the original time series.

Figure 4. The profile for the EIE after removing the seasonal cycle from the original time series.
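The machinery being generalized here can be sketched in a few lines. The following is ordinary (monofractal, first-order) DFA, written in Python purely for illustration; the white-noise test series, the list of scales and the simple linear detrending are all assumptions of this demonstration, not the study's actual pipeline. It builds the profile as the running sum of the mean-removed data and fits discontinuous per-window polynomials, which is exactly the step MF-TWDFA replaces with weighted moving-window fits.

```python
import math
import random

def linear_fit_residual(seg):
    """Least-squares linear fit to one window; return the mean squared residual."""
    n = len(seg)
    mx = (n - 1) / 2
    my = sum(seg) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(seg))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in enumerate(seg)) / n

def dfa(series, scales):
    """Ordinary DFA: build the profile (running sum of the mean-removed data),
    detrend it window by window with discontinuous linear fits, and return the
    r.m.s. fluctuation F(s) for each window size s."""
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for v in series:
        total += v - mean
        profile.append(total)
    F = []
    for s in scales:
        residuals = [linear_fit_residual(profile[i * s:(i + 1) * s])
                     for i in range(len(profile) // s)]
        F.append(math.sqrt(sum(residuals) / len(residuals)))
    return F

# White noise as a test series: the Hurst exponent, i.e. the slope of
# log F(s) against log s, should come out near 0.5.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
scales = [16, 32, 64, 128, 256]
F = dfa(noise, scales)

log_s = [math.log(s) for s in scales]
log_F = [math.log(f) for f in F]
ms, mf = sum(log_s) / len(log_s), sum(log_F) / len(log_F)
H = (sum((a - ms) * (b - mf) for a, b in zip(log_s, log_F))
     / sum((a - ms) ** 2 for a in log_s))
```

MF-TWDFA would replace the per-window fit with a fit over a moving window whose points are weighted by their distance in time, and would raise the squared residuals to a moment q/2 before averaging, yielding the generalized fluctuation functions Fq(s) discussed above.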

We used MF-TWDFA to examine the multi-scale structure of two satellite-based geophysical data sets for Arctic sea ice: the Equivalent Ice Extent (EIE)

and albedo retrievals from the Advanced Very High Resolution Radiometer (AVHRR) Polar Pathfinder (APP) archive. The EIE data derives from retrievals of satellite passive microwave radiances over the Arctic, converted to daily sea ice concentration using the NASA Team Sea Ice Algorithm. The mean EIE seasonal

cycle from 1978-present is shown in Fig. 6. Daily satellite retrievals of the directional hemispheric apparent albedo are determined from the APP archive. The apparent albedo is what would be measured by upward and downward looking radiometers and thus varies with the state of the atmosphere and the solar zenith angle.

Figure 5. The albedo histograms shown for days in mid-March (blue) and mid-September (red). If there is ice in a pixel for the 23-year duration of the data set then we compute the albedo for that pixel and average over all pixels that have met this criterion.

Figure 6. The mean seasonal cycle of the Equivalent Ice Extent (EIE) during the satellite era (Fig. 2).


We examined the long-term correlations and multifractal properties of daily satellite retrievals of Arctic sea ice albedo and ice areal extent, for periods of ~ 23 years and 32 years respectively, with and without the seasonal cycle removed. The generalized Hurst exponents and multiple crossover timescales were found to range from the synoptic or weather time scale to decadal, with several in between. Such multiple time scales were exhibited in both data sets, and hence the approach provides a framework to examine ice dynamical and thermodynamical responses to climate forcing that goes beyond treatments which assume a process involving a single autocorrelation decay, such as a first-order autoregressive process. Indeed, the method shows that single-decay autocorrelations cannot be meaningfully fitted to these geophysical observations. Our most important finding is that the strength of the seasonal cycle is such that it dominates the power spectrum and “masks” long-term correlations on time scales beyond seasonal.

When the seasonal cycle is removed from the original record, the EIE data exhibits a white noise behavior from the seasonal to the bi-seasonal time scales, but the clear fingerprints of the short (weather) and long (~ 7 and 9 year) time scales remain, demonstrating a reentrant long-term persistence. Therefore, it is not possible to distinguish whether a given ice area minimum (maximum) will be followed by a minimum (maximum) that is larger or smaller. This means that while it is tempting to use an anomalous excursion associated with a low ice year to predict the following year's minimum, or that of two years hence, the data do not justify such a prediction. Finally, other methods find solely a rapid de-correlation, whereas we find multi-year and decadal transitions as well as the origin of the dominance of the seasonal cycle in long-term persistence. Hence, we believe that combining such multifractal studies of model output and other observations will substantially improve the acuity with which one can disentangle the strength of the seasonal cycle in this highly forced system from the longer-term trends.

This work has been contributed by Sahil Agarwal (B.Tech. 4th year), Woosok Moon (Yale University) and Professor John Wettlaufer (Yale University)




Pagerank Algorithm

Pagerank is an algorithm developed by Sergey Brin and Larry Page (hence the name Pagerank) to assign importance, or rank, to pages on the World Wide Web. The algorithm is one of the key foundations of the Google search engine, but it is not the only one: it is just one of many indicators Google uses to rank pages and present them when searched for. The algorithm may in general be applied to any set of entities with cross references between them; it is a mathematical algorithm based on graph theory. A higher rank indicates that more value is associated with a page, and that page will be given priority (in terms of position in the search results) over pages with lower ranks. Pagerank works on the assumption that a person surfing the net clicks on links randomly, without reading or thinking about what each link has to offer, and on this basis it calculates the probability that such a random surfer will land on a given page. The basic idea behind the whole algorithm is as follows. All the pages in the World Wide Web are taken as nodes of a directed graph and each incoming hyperlink is taken as an


incoming edge. The number of incoming edges relates to the importance of the page as perceived by other pages: if a page is referred to by many other pages, that page is important and must be given a higher priority.

Now let's see how it works with an example. Suppose for simplicity that there are only 5 pages, named A, B, C, D and E. First, each page is given an equal pagerank, 1/5. Then for any page (say A), the pages which have a link to it are considered (say C and D), and the new pagerank of A is calculated as PR(A) = PR(C) + PR(D)

Here PR(i) refers to the Pagerank of page i. This is the most basic form. Next, taking some other factors into consideration, this basic form is improved. Suppose that page C has a total of 3 out-links, of which 1 points to page A (repeated links are ignored), and page D has a total of two out-links, one of which points to A. Then, according to the random surfer model, the probability of landing on page A is actually one third from page C and one half from page D, so



we redefine our formula as PR(A) = PR(C)/3 + PR(D)/2

So, generalising the model to any number of pages, the recursive definition of pagerank is given as PR(i) = ∑_j PR(j)/O(j)

Where O(j) represents the total number of out-links from page j. Now, to make the model more realistic, the following fact is considered: the probability of a surfer clicking on links decreases with the number of clicks. This is a necessity, for otherwise the surfer would never settle on a particular page, which is not at all realistic. So a damping factor d is introduced, defined as the chance that a person on a particular page will actually click on one of the links on the page. The total pagerank score calculated by the above formula is therefore multiplied by the factor d. Also a

term (1-d)/N (N being the total number of pages) is added. This addition is made more for mathematical reasons than intuitive ones: the sum of the pageranks of all the pages must be 1. Hence the new improved formula PR(i) = (1-d)/N + d(∑_j PR(j)/O(j))

Various studies have been conducted to determine the value of d empirically, and it is generally taken to be around 0.85, as suggested in the original paper by Sergey Brin and Larry Page. Now comes the computation part. As is clear from the formula, it is recursive in nature, and with each iteration the pageranks keep changing. So where do we stop? The answer depends on how precise we want the pageranks to be. Depending on the needs, a tolerance ε is decided upon; after each iteration, the difference between the final and the penultimate pageranks is calculated, and when it falls below ε, the iterations are stopped.
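The whole iteration can be sketched directly in code. This is a minimal illustration of the damped formula on a made-up 5-page web (the pages and links below are invented for the example, not real data):

```python
# Iterative Pagerank: PR(i) = (1-d)/N + d * sum_j PR(j)/O(j),
# iterated until no rank changes by more than the tolerance eps.
def pagerank(links, d=0.85, eps=1e-8):
    """links[p] = list of pages that p links out to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # start with equal ranks 1/N
    while True:
        new = {}
        for p in pages:
            # Sum contributions from every page j that links to p.
            incoming = sum(pr[j] / len(links[j])
                           for j in pages if p in links[j])
            new[p] = (1 - d) / n + d * incoming
        # Stop once the largest change falls below eps.
        if max(abs(new[p] - pr[p]) for p in pages) < eps:
            return new
        pr = new

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"],
       "D": ["A", "C"], "E": ["A", "D"]}
ranks = pagerank(web)
```

Note that E, which no page links to, ends up with the minimum possible rank (1-d)/N, while heavily linked pages rise to the top.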

Rajat Kateja
B.Tech. 2nd year


Chomp Even the simplest of games can pose tough mathematical challenges. One such example is the game of Chomp. Chomp starts with a rectangular array of counters arranged neatly in rows and columns. A move consists of selecting any counter, then removing that counter along with all the counters above and to the right of it. In effect, the player takes a rectangular or square "bite" out of the array, just as if the array were a rectangular, segmented chocolate bar. Two players take turns removing counters. The loser is the one forced to take the last "poisoned" counter in the lower left corner. The chocolate-bar formulation of Chomp is due to David Gale, of the University of California, Berkeley. As it turns out, the first player can win from any rectangular position bigger than 1 x 1. To see this, assume that the second player has a winning strategy against any initial move of the first player. Now suppose the first player opens by chomping just the top right-hand counter. Then there is a reply which is


the first move of a winning strategy for the second player. If that is the case, then the first player could have opened with that very move and been guaranteed a win. Therefore, the second player could not have a winning strategy. Winning strategies are known for a few cases. The first scenario is when the array is a square. Here the first player can win by selecting the counter that is diagonally up and to the right of the poisoned counter. This would leave only the last row and column with the poisoned piece at the vertex. From that point on, the first player simply takes from one line whatever his or her opponent takes from the other line. Eventually, the second player must take the poisoned piece. The second scenario is when the array is two columns or two rows wide. Here the first player can always win by taking the counter at the top right so that one column or row is one counter longer than the other. From then on, the first player always plays so as to restore this situation.

Chomp belongs to a particular family of two-player combinatorial games (games in which nothing is hidden from the players and no chance is involved), which are described as poset games. A poset, or partially ordered set, is a set of elements in which some elements are smaller than other elements but not every pair of elements can necessarily be compared. Chomp can be seen as a

game played on a partially ordered set P with smallest element 0. A move consists of picking an element x of P and removing x and all larger elements from P. Whoever picks 0 loses. While it is very easy to prove mathematically that a winning strategy for the first player exists, actually finding that strategy remains a mystery that nobody has yet been able to solve.
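For small boards, the strategy-stealing argument can be verified by brute force. The sketch below (a minimal memoized game-tree search, not from the article) represents a position as a non-increasing tuple of row lengths and confirms Gale's result on boards up to 5 x 5:

```python
# Brute-force Chomp solver on small boards.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(rows):
    """rows: non-increasing tuple of row lengths, rows[0] = bottom row.
    Returns True if the player to move has a winning strategy."""
    if rows == (1,):
        return False              # only the poisoned counter left: must take it
    for i in range(len(rows)):
        for j in range(rows[i]):
            if i == 0 and j == 0:
                continue          # taking the poisoned counter loses immediately
            # Chomp at (i, j): every row at height >= i is cut back to length j.
            new = tuple(min(rows[k], j) if k >= i else rows[k]
                        for k in range(len(rows)))
            new = tuple(r for r in new if r > 0)
            if not first_player_wins(new):
                return True       # this move leaves the opponent a losing position
    return False

# Gale's theorem: the first player wins on any board bigger than 1 x 1.
assert all(first_player_wins((m,) * n)
           for m in range(1, 6) for n in range(1, 6) if (m, n) != (1, 1))
```

The search confirms that a winning first move exists, but it finds it only by exhaustion; for large boards no general formula for that move is known, exactly as the article says.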

Tanvi Rai
B.Tech. 4th year

Two envelope paradox I have two envelopes, and inside each I have put some money. In fact, one envelope contains twice as much money as the other. I'll let you select one envelope, which you can have after the game is over. But as soon as you select one, I offer you the option to switch envelopes. Should you switch?

* Get explanation in the appendix


You reason as follows: My envelope has Rs.x, and the other envelope contains either Rs.x/2 or Rs.2x, each with probability 1/2. Thus the expected value of the other envelope is (1/2)(Rs.x/2) + (1/2)(Rs.2x) = Rs.1.25x. This is greater than the Rs.x in my current envelope. Therefore I should switch envelopes... But if you do switch, a similar argument would instruct you to switch back... and therefore keep switching! What's going on here? Is there a flaw in the reasoning?

The Prisoner's Dilemma Can cooperation evolve in a society of egoists? This is an intriguing question, as we know that nature prefers individuals with selfish motives. With a limited supply of resources available to a population, competition amongst organisms increases. Since it is the selfish behavior of an organism that ensures its sustenance, there seems to be no reason for cooperation to occur. Yet we know of instances where cooperation does take place – symbiosis, for example; people cooperate with each other. Mutual cooperation, which benefits the cooperators and the lack of which harms them, easily persists. But there are types of cooperation in which both parties do well by cooperating, yet either would do better by failing to cooperate. In such a case it is difficult to find group cooperation, for the cooperating organisms are in a worse position than their counterparts, leading to a dying out of cooperative tendencies in a population. The Prisoner's Dilemma is an elegant embodiment of such a case. In the Prisoner's Dilemma, two individuals can either cooperate or defect. The selfish choice of defection yields a higher payoff than cooperation, but if both defect, both do worse than if both had cooperated. Game theory tells us that each player regards defecting as the better option and assumes that the other player will come to the same conclusion. Thus, rational players of the Prisoner's Dilemma will always defect. However, when this game is


played repeatedly (the Iterated Prisoner's Dilemma, IPD), cooperation becomes possible among rational players. The best-known strategy for this game is 'Tit for Tat': cooperate in the first round and then, in each subsequent round, make the move made by the opponent in the previous round. This strategy cannot be exploited more than once, yet the player tends to cooperate a lot, generating many reward payoffs. Consider the strategies for playing the Iterated Prisoner's Dilemma that are deterministic and use the outcomes of the previous three moves to make a choice in the current move. Since there are 4 possible outcomes for each move, there are 4x4x4 = 64 different histories of the three previous moves. Therefore, to determine its choice of cooperation or defection, a strategy need only specify what to do in each of the 64 situations that could arise. To get the strategy started, three hypothetical moves preceding the start of the game are needed, which require six more genes, making a total of 70. This string specifies what the individual would do in every possible circumstance and therefore completely defines a particular strategy.
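Tit for Tat's behavior is easy to simulate. The sketch below plays a 10-round IPD with the standard textbook payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 for a lone defector and a lone cooperator); these particular numbers are the conventional choice, not specified in the article.

```python
# A minimal iterated Prisoner's Dilemma with the usual textbook payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play `rounds` rounds and return (score_a, score_b)."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(ha, hb)
        b = strategy_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b
```

Two Tit-for-Tat players cooperate throughout and each collect the reward payoff every round; against Always Defect, Tit for Tat is exploited exactly once and then defects for the rest of the game.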

The Prisoner's Dilemma

There are precisely 2^70 ≈ 10^21 strategies. An exhaustive search for good strategies in such a big collection is impossible: if a computer had examined these strategies at the rate of 100 per second since the beginning of the universe, only a tiny fraction would have been checked by now. To find effective strategies in such a huge set, a very powerful technique is needed. The "genetic algorithm" given by J. Holland is such a technique. These algorithms are inspired by biological genetics and were adapted by Holland into a general problem-solving technique. The simulation process works in five stages:
1. An initial population is chosen.
2. Each individual is run in the current environment to determine its effectiveness.
3. The relatively successful candidates are selected to have more offspring.
4. The successful individuals are then randomly paired off to produce two offspring per mating. The strategy of an offspring is determined from the strategies of the two parents. This is done by


using two genetic operators: crossover and mutation.
a. Crossover is a way of constructing the chromosomes of the two offspring from the parent chromosomes. It selects one or more places to break the parent chromosomes, so as to construct offspring each of which has some genetic material from both parents.
b. Mutation occurs in the offspring by randomly changing a very small proportion of the chromosome.
5. This gives a new population, which is again checked for its effectiveness. The new population will show behavior more like that of the successful individuals of the previous generation than the unsuccessful ones.

"Computer programs that 'evolve' in ways that resemble natural selection can solve complex problems even their creators do not fully understand." – John Holland
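The five stages can be sketched in code. Running the full 70-gene IPD environment is expensive, so this minimal sketch (an illustrative assumption, not the article's actual setup) evolves 70-bit chromosomes against a toy fitness function, simply the number of 1-genes:

```python
import random

random.seed(0)
GENES, POP, GENERATIONS, MUTATION_RATE = 70, 20, 60, 0.01

def fitness(chrom):
    # Stand-in for "run the strategy in the current environment":
    # here, simply count the 1-genes (the classic OneMax toy problem).
    return sum(chrom)

def crossover(p1, p2):
    # Single-point crossover: each child gets material from both parents.
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom):
    # Flip each gene with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in chrom]

# Stage 1: an initial random population.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Stages 2-3: score everyone; the better half become parents.
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]
    # Stage 4: random pairing, two offspring per mating, crossover + mutation.
    children = []
    while len(children) < POP // 2:
        a, b = random.sample(parents, 2)
        c1, c2 = crossover(a, b)
        children += [mutate(c1), mutate(c2)]
    # Stage 5: the new population replaces the old one.
    pop = parents + children[: POP // 2]

best = max(fitness(c) for c in pop)
```

After a few dozen generations the best chromosome is far fitter than anything in the random initial population, which is the whole point: good solutions are found while examining only a vanishing fraction of the 2^70 possibilities.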

Salwa Ali Khan
M.Sc. 2nd year

Campus Placement Result


2007 – 2011 Batch

2008 – 2012 Batch



Phi - 1.618..... Everyone, meet PHI, pronounced "fee". Not to be confused with PI. As we mathematicians like to say: PHI is one H of a lot cooler than PI! The number PHI was derived from the Fibonacci sequence—a progression famous not only because the sum of adjacent terms equals the next term, but because the quotients of adjacent terms possess the astonishing property of approaching the number 1.618—PHI! Despite PHI's seemingly mystical mathematical origins, the truly mind-boggling aspect of PHI is its role as a fundamental building block in nature. Plants, animals, and even human beings all possess dimensional properties that adhere with eerie exactitude to the ratio of PHI to 1. If we study the relationship between females and males in a honeybee community, we find that the female bees always outnumber the male bees. If you divide the number of female bees by the number of male bees in any beehive in the world, you always get the same number: PHI. Sunflower seeds grow in opposing spirals. Can you guess the ratio of each rotation's diameter to the next? PHI. Spiralled pinecone petals, leaf arrangement


Examples of the Golden Ratio phi in nature (row-wise): 1) Sea shell 2) Cactus 3) Cyclone in ocean 4) Plant 5) Sunflower 6) Galaxy

on plant stalks, insect segmentation—all display astonishing obedience to the Divine Proportion. The human body is literally made of building blocks whose proportional ratios always equal PHI. PHI's ubiquity in nature clearly exceeds coincidence, and so the ancients assumed the number PHI must have been preordained by the Creator of the universe. Early scientists heralded one-point-six-one-eight as the Divine Proportion. All of you, try it. Measure the distance from the tip of your head to the floor. Then divide that by the distance from your belly button to the floor. Guess what number you get. PHI! Want another example? Measure the distance from your shoulder to your fingertips, and then divide it by the distance from your elbow to your fingertips. PHI again. Another? Hip to floor divided by knee to floor. PHI again. Finger joints. Toes. Spinal divisions. PHI. PHI. PHI. My friends, each of you is a walking tribute to the Divine Proportion.
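The convergence claim at the start of the article is easy to check numerically. Here is a short sketch computing ratios of adjacent Fibonacci terms:

```python
# Ratios of adjacent Fibonacci terms converge to PHI = (1 + sqrt(5)) / 2.
from math import sqrt

PHI = (1 + sqrt(5)) / 2            # 1.6180339887...

def fib_ratios(n):
    """Return the first n ratios F(k+1)/F(k) of the Fibonacci sequence."""
    a, b, ratios = 1, 1, []
    for _ in range(n):
        a, b = b, a + b            # advance the sequence
        ratios.append(b / a)
    return ratios

ratios = fib_ratios(20)            # 2.0, 1.5, 1.666..., closing in on 1.618...
```

The ratios oscillate above and below PHI, and after twenty terms they agree with it to better than six decimal places.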

Shubham Luhadia
B.Tech. 1st year

DIMENSION 2012 A technical confluence presented by MATRIX in association with the IIT Guwahati Chapter of SIAM 3-4 March 2012

Lecturer 1 Dr. Rajan M A Research Scientist, TCS Innovation Labs, Bangalore Lecture: MATHEMAGIC FOR PHYSICAL WORLD PROBLEMS- ENGINEER’S PARADISE

Dr. Rajan M A is a prominent research scientist at TCS Innovation Labs and also works as visiting academic faculty at SJC Institute of Technology, Chikkaballapur and University VCE, Bengaluru. He has around 10 years of experience in IT and space sciences R&D services. He has also worked as a Development Manager at TCS and a Senior Scientist Engineer at ISRO Satellite Centre. His lecture on 'Mathemagic for Physical World Problems – Engineer's Paradise' was highly informative. He discussed the research of ancient Indian mathematicians and the use of innovative techniques to solve real-life problems, like using graph theory in wireless ad hoc networks, and hashing and the Chinese remainder theorem in large-scale data handling, robust data storage and cryptographic schemes.

Lecturer 2 Mr. Sudhir Kumar Jha Zonal Head, Active Trader Service of East and Central India, ICICI Securities Lecture: HOW STOCK MARKETS WORK AND MUTUAL FUNDS

Mr. Sudhir Kumar Jha is an eminent personality in the field of investment banking and financial services, with an MBA from The Institute of Chartered Financial Analysts of India. He was Senior Vice-President at IndiaBulls Securities Ltd. Presently he is the Zonal Head, Active Trader Service of East and Central India at ICICI Securities Ltd., based in Kolkata. In his lecture on 'How Stock Markets Work and Mutual Funds' he discussed the dynamics and technicalities of stock markets and investing strategies. He started by giving a basic overview of stocks and options and also discussed the technicalities of IPOs and book building.



Lecturer 3 Mr. Pradip Bhattacharyya Regional Training Manager, HDFC Life, NE Region Lecture: THE CONCEPT AND BENEFITS OF LIFE INSURANCE

Mr. Pradip Bhattacharyya, with over 18 years of experience in the field of investment banking, is the Regional Training Manager, HDFC Life for the North East Region. Mr. Bhattacharyya has 11 years of experience in Sales and 7 years in Training. His lecture on 'The Concept of Life Insurance and the Benefits of Savings through Life Insurance' gave an insight into the functioning of life insurance and raised awareness about the technicalities pertaining to insurance, so that insurance is not considered just a 'money back policy'.



DIMENSION 2012 Online Events What to buy? What to sell? When to trade to hedge risk? Or just speculate! Want to try something more? DIMENSION 2012 presented the second edition of its very own online virtual stock market, ETF 2.0. With all the features of a real-time stock market and some serious competition from fellow traders, ETF 2.0 was the right place to test one's trading instincts! The event was a huge success, just like ETF, with 8,897 hits over a stretch of 4 days.

"Know Hacking! But No Hacking!" DIMENSION presented "The Hacker's World", where participants could battle it out on their computers and prove themselves the next tech genius. The event tested PHP and Linux knowledge and drew 368 hits.

This cryptographic event involved using deciphering skills to break the 3-tier security barrier set up by the security agencies to prevent burglary. From the simplest of ciphers to the trickiest of encryption algorithms, it needed them all. The event was highly appreciated, with a total of 1,123 hits over a stretch of 4 days.


ETF 2.0

Digital Wall


Arbitrage opportunities appear here and there, all the time, in the financial markets. Pure, 100% risk-free stock market arbitrage opportunities are hard to spot; however, it is interesting to understand how the markets offer these opportunities when there's no risk at all! This Arbitrage event tested the skills needed to spot arbitrage opportunities and make money...without risk!! This innovative event had a total of 126 hits.

The Mathematical Modeling event involved using mathematical modeling skills to solve the water crisis of Dime Valley. The local population recently discovered a water source at the hill top that promises to resolve the water scarcity problem in the valley. The job was to devise an optimal water pipeline network that provides water to each and every village in the valley. The challenge was to find the shortest and cheapest network. This strategic event received 72 hits and proved the toughest of all.
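Finding the cheapest network that connects every village to the source is essentially the minimum spanning tree problem. Below is a sketch using Kruskal's algorithm; the village names and pipe costs are made up for illustration, not taken from the actual event.

```python
# Kruskal's minimum spanning tree: sort pipes by cost, add each pipe
# unless it would create a cycle (checked with union-find).
def kruskal(nodes, edges):
    """edges: list of (cost, u, v). Returns (total cost, chosen edges)."""
    parent = {n: n for n in nodes}

    def find(x):                      # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, tree = 0, []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this pipe creates no cycle
            parent[ru] = rv
            total += cost
            tree.append((u, v))
    return total, tree

villages = ["source", "A", "B", "C", "D"]
pipes = [(4, "source", "A"), (8, "source", "B"), (2, "A", "B"),
         (6, "A", "C"), (3, "B", "C"), (9, "B", "D"), (5, "C", "D")]
cost, network = kruskal(villages, pipes)
```

For n villages the tree always uses exactly n - 1 pipes, and the greedy choice is provably optimal, which is what makes this a natural model for the event's pipeline puzzle.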

Invent Money


Over the stretch of 3 days, www.thedimension.org recorded more than 90,000 hits and was a huge success! Over the year it received more than 5 lakh hits!



Explanation of Two envelope paradox The expected value calculation is flawed because the conditioning on the relative value Rs.x is incorrect. You need to have some idea of what the prior distribution of money in the envelope is before you can do the calculation. For instance, if you knew that the two amounts were Rs.5 and Rs.10, then if you took the Rs.5 envelope (i.e., if x=5), there is NO chance that the amount in the other envelope is Rs.x/2; it must be Rs.2x=Rs.10. Similarly, if you took the Rs.10 envelope, the other envelope must be Rs.5. So conditioning on whether you took the Rs.5 or Rs.10 envelope, the expected value of the other envelope is actually (1/2)(10) + (1/2)(5) = 7.5. However, if you observe what's in your envelope, then you can condition on what you see; the expected value of the other envelope is Rs.10 if you see Rs.5 in yours, or vice versa. If you see Rs.5 you should switch, and if you see Rs.10 you should not. So there is no paradox in this case.

However, surprisingly, there are some prior probability distributions of money in the envelopes for which it always makes sense to switch (whether or not you look at what's inside your envelope)! For instance, suppose that the amounts of money in the two envelopes are (Rs.2^k, Rs.2^(k+1)) with probability (2/3)^k/3, for each integer k >= 0. It is a fun exercise to check that no matter what you have in your envelope, the other envelope has a higher expected value, and you should switch! How to resolve this paradox is a perplexing philosophical question. (Some of you may object that the prior distribution has infinite mean, but this does not fully resolve the paradox, since in theory, if such a distribution exists, one would still have to wrestle with the paradox of continually switching envelopes!) The study of mathematical models for decision-making is called game theory, and probability theory helps us understand expected values.
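The "fun exercise" can be checked directly. The sketch below computes, for this prior, the conditional expectation of the other envelope given that yours holds Rs.2^k; it always exceeds Rs.2^k (in fact it equals 1.1 x 2^k for every k >= 1):

```python
# With P(pair = (2^k, 2^(k+1))) = (2/3)^k / 3, switching is always better.
def expected_other(k):
    """E[other envelope | your envelope holds 2**k]."""
    if k == 0:
        return 2.0                       # you surely hold the smaller amount
    # Your 2**k came from pair (2**(k-1), 2**k) or pair (2**k, 2**(k+1)).
    w_low = (2 / 3) ** (k - 1) / 3 / 2   # prior * P(you drew the larger envelope)
    w_high = (2 / 3) ** k / 3 / 2        # prior * P(you drew the smaller envelope)
    p_low = w_low / (w_low + w_high)     # posterior that yours is the larger one
    return p_low * 2 ** (k - 1) + (1 - p_low) * 2 ** (k + 1)
```

For k >= 1 the posterior weights work out to 3/5 and 2/5, giving (3/5)(x/2) + (2/5)(2x) = 1.1x, so the naive always-switch conclusion really does hold under this prior; the catch is that the prior has infinite mean.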

Pavan Kumar
B.Tech. 1st year


Try this -> Start from the capital 'M', taking n=1. -> Take i=1 if you want to read it as Mathematics, or take i=2 if you want to read it as Matrix. P.S.: initially you may find it strange, but later you will like it.

[Flowchart puzzle graphic — the branching conditions on i and n spell out the word, but the layout cannot be reproduced in text]

designed by: Himanshu Bansal


The department magazine of Department of Mathematics, IIT Guwahati.

