
In this issue...

Features
11  A mathematical view of voting systems
    Alexander Bolton explains why elections are never fair
18  Menace
    Oliver Child tells us how to teach matchboxes to play noughts and crosses
26  Fractional calculus
    Rafael Prieto Curiel takes us through the calculus of witchcraft and wizardry
32  You can count on Dirichlet
    James Cann counts the factors of very big numbers
53  Analogue computing
    Bernd Ulmann has fun with differential equations

4   In conversation with Ian Stewart
    We chat to the professor about Manifold, tigers and everything in between...


Regulars
3   Page 3 model
10  What's hot and what's not
17  Horoscope
24  Dear Dirichlet
30  Chalkdust comic
    by Tom Hockenhull
43  Roots: Fibonacci's legacy
    with Emma Bell
44  How to make... a hexaflexagon
46  Prize crossnumber
50  Biography: Martin Gardner
50  On the cover...
60  Top ten...


spring 2016


Editorial Director
Rafael Prieto Curiel

The team
Rob Becke, Samuel Brown, Dougal Davis, Alex Doak, Eleanor Doman, Sebastiano Ferraris, Aryan Ghobadi, Nikoleta Kalaydzhieva, Rudolf Kohulák, Anna Lambert, Trupti Patel, Kin an, Huda Ramli, Matthew Scroggs, Pietro Servini, Ehsan Shahriari, Adam Townsend, Matthew Wright

Cartoonist
Tom Hockenhull

Cover photo adapted from Spherical Dendrite by Mark J Stock

@chalkdustmag  chalkdustmag

Chalkdust has made it to its first birthday, and we now have an extra year of knowledge and interesting anecdotes to share. One of the most exciting things that has happened since our last edition is the expansion of our team. Friends and colleagues have joined Chalkdust, and with their energy and enthusiasm we grow stronger as we aim to ensure that this project gets passed down to the next generation.

A particular highlight this issue has been the opportunity to meet Professor Ian Stewart, who has inspired us through his dedicated work as a maths populariser. In his interview, you can read about the legacy that he is creating as he shares his belief that maths is fun, useful and at times provocative.

In these pages, you will find the results of many hours of endeavour and heated debate: What is your favourite shape? What is the best calculator? Does the horoscope take maths puns too far? You will find interesting articles, such as how matchboxes can beat you at noughts and crosses, or the mathematical reasons why voting systems are never perfect. And you will find our regular entries, including our crossnumber competition with its prize of £100 worth of maths goodies.

It has been a delight to receive articles and contributions from our growing readership. If you enjoy these as much as we do, we publish a weekly blog post and a monthly newsletter, so you don't have to wait six months until we publish our next issue. You can subscribe to these on our website and also keep in touch with us through social media.

Rafael Prieto Curiel
Editorial Director

Acknowledgements

We would like to thank our sponsors for their support, without which we would not be able to keep this project free for our readers. Special thanks should go to Prof. Robb McDonald and Dr Luciano Rila and all the staff at the UCL Department of Mathematics, as well as to all of this issue's authors and contributors. We would also like to thank Dr Hannah Fry and Prof. David Colquhoun, amongst many others, who help us promote our project through their social media, and the Institute of Mathematics and its Applications, the London Mathematical Society and the Turing Gateway to Mathematics for supporting this project.

ISSN 2059-3805 (Print). ISSN 2059-3813 (Online). Published by Chalkdust Magazine, UCL, Gower Street, London WC1E 6BT, UK. © Copyright for articles is retained by the original authors, and all other content is copyright Chalkdust Magazine 2016. All rights reserved. If you wish to reproduce any content, please contact us at Chalkdust Magazine, UCL, Gower Street, London WC1E 6BT or email


Have you ever reached the start of a traffic jam fearing the worst—road works, an accident, a fallen tree—to later discover no clear reason for the delay? Then you fell victim to one of the strangest traffic phenomena: the phantom traffic jam. This ghostly foe may seem supernatural, but can actually be predicted through the theory of shockwaves.

We can model traffic by treating it as a compressible fluid, the equations for which are above. In this case, x is the distance along a road (of infinite length), ρ(x) is the density of cars at any given point and u(x) is their speed. A car braking a little too hard somewhere will result in a small decrease from the steady state solution, in which all cars are travelling at the same speed ū and the traffic density is constant. The car behind then has to brake slightly, as does the car behind that, and the one behind that, and so on, resulting in a soliton-like wave, aptly named a jamiton, travelling backwards along the road at a speed of around 12 mph (as calculated by traffic monitoring agencies). Spooky, huh?
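The equations referred to appear in the page artwork and did not survive extraction. As a hedged reconstruction (our assumption; the printed notation may differ), the standard "compressible fluid" description of traffic is the continuity equation with a density-dependent speed:

```latex
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} = 0,
\qquad u = u(\rho),
```

so that small disturbances to the uniform state (density constant, all cars at speed $\bar{u}$) propagate as waves along the road.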




In conversation with...

Ian Stewart Trupti Patel and Matthew Wright


In an airy office in the Mathematics Institute of the University of Warwick we find Ian Stewart, the prominent maths professor, Fellow of the Royal Society and one of the UK's most prolific popularisers of mathematics. He has published over 80 books, between 1991 and 2001 took over Martin Gardner's original Mathematical Games column for the magazine Scientific American, and in 1995 won the Michael Faraday Prize for excellence in communicating science to UK audiences. He greets us with a kind smile and a warm handshake, and leads us to his desk.

He begins by telling us how he first became involved in writing about maths for a general audience during his undergraduate days at Cambridge, when he edited Eureka, the Archimedean Society's magazine. He moved on to the University of Warwick to begin his PhD and, on graduating, was awarded a full-time position there; at the time the university's mathematics department had only recently been founded by Christopher Zeeman. During his early years at Warwick, Stewart began producing a magazine called Manifold, which "although not strictly speaking a mathematical magazine, mostly concentrated on topics that had some kind of mathematical connection". Over the course of its 12-year lifetime, more than 20 issues were printed and sent to university libraries and mathematics departments around the world, with 600 subscribers at the magazine's peak. Stewart acknowledges that its success was partly due to the open-mindedness and support provided by the department's senior researchers. It acted as a good advertisement for the newly founded department, managing to attract good researchers to the annual maths symposium held at the university and convincing mathematicians that Warwick was a "friendly place with amusing people".



Covers of Manifold magazine, which ran between 1968 and 1980. The popular game Mornington Crescent from I’m Sorry I Haven’t a Clue was based on the game Finchley Central, which first appeared in Manifold.

Stewart found his home in the Mathematics Institute at Warwick, and barring a few sabbaticals has always been there, gradually becoming one of Britain's best-loved popular maths authors. When asked about his favourite self-authored book, Stewart pauses to think for a moment, raising his head slightly. But not for very long: he leans in and proclaims "17 Equations!". With a big smile on his face and not much prompting he tells us that the idea for 17 Equations that Changed the World came about during a book fair in Frankfurt, when a Dutch publishing house asked Stewart's English publisher if he would be interested in the project. Stewart accepted, and originally planned a book of about 30 equations, but then narrowed this down to 20 to make the book more manageable. He finally settled on 17 as he believed that this was a much more interesting number! By focusing on the historical impact and stories behind the equations, Stewart created a fascinating book that was accessible to everybody.

In the chapter on relativity, Stewart was proud of debunking one prevalent myth: that the equation E = mc² was directly responsible for the development of the atomic bomb. In fact this is not the case at all, as nuclear explosions only use a small percentage of the materials' mass energy, and it was already known experimentally that nuclear reactions could release a lot of energy. But the myth prevailed, as this was one of the ways the American government managed to convince the public that the atomic bomb might possibly work.

Another of his favourites is Why Beauty is Truth: a History of Symmetry, which looks at the historical development of group theory explained through the concept of symmetry. The book starts with the Babylonians, where the calculations of ancient scribes reveal the earliest known solutions to quadratic equations.
It moves on to the Renaissance period and its attempts to solve quartic equations, follows this up with Galois' work on quintics (anyone who has studied Galois theory will no doubt be aware of Stewart's excellent textbook on the subject), before ending with modern developments in group theory. The book was very well received and was shortlisted for the Royal Society Winton Prize for Science Books.

Turning to his competitors in the popular maths market, Stewart likes Marcus du Sautoy's Music of the Primes, Douglas Hofstadter's seminal Gödel, Escher, Bach and Zeno's Paradox by Joseph Mazur. He's also a fan of Euclid in the Rainforest, again by Joseph Mazur, where each chapter is presented as a personal story taking its readers to the heart of mathematics: logic and proof.



From le to right: Ian Stewart, Terry Pratche and Jack Cohen, the authors of The Science of Discworld. Cohen and Stewart are being made honorary wizards at Unseen University during the award of an honorary degree to Terry Pratche by the University of Warwick.

In the late nineties he began collaborating on the science of Discworld with Terry Pratchett, the much-loved, best-selling fantasy author, and Jack Cohen. As Stewart begins describing his experiences of collaborating with Pratchett and Cohen, he sits up in his chair and his speech quickens. Pratchett's Discworld books are set in a fictional world consisting of a flat disc balanced on the back of four elephants, who in turn are perched on the back of a giant turtle. The Science of Discworld books aimed to explain interesting scientific ideas through the use of fiction, although Pratchett had to be convinced that the books would work, as he did not believe that the scientific backdrop to a fictional, magical world could be written about. In the end, the story was adapted to be about the wizards of Unseen University's accidental creation of Roundworld, a place where magic doesn't exist. Between the third and final books Pratchett was diagnosed with Alzheimer's disease, but he still managed to write new stories until he sadly passed away last year.

In 1995, Stewart received the Michael Faraday Prize for popularising science, and he claims this put him in the running to present the Royal Institution Christmas Lectures, which he did in 1997 with a series entitled The Magical Maze. He gave five lectures, the final of which was on symmetry, focusing especially on patterns in the animal kingdom. The lecture began with William Blake's poem The Tyger, which he eagerly recited to us, ending with the line "Dare frame thy fearful symmetry?". To help him explain the mathematics of pattern formation in animals he decided to bring a live tiger onto the stage. Finding a tiger to borrow was harder than first anticipated, and the producers almost had to settle on a lion cub, since cubs have rudimentary stripes on parts of their body. Eventually, however, a living tigress, Nikka, was found. She was six months old, and two burly men had to hold onto her with chains, with the front two rows of chairs used as a barrier. A live tiger is certainly one way to keep an audience attentive during a maths lecture!

The tiger's distinctive pattern can be understood using group theory.

Given his vast experience, we asked Stewart to share his thoughts about how best to popularise maths to the masses. He believes that there are two messages that need to be conveyed to the general public. The first and most important is that maths is not stuck in the Dark Ages: there is still lots of new research taking place. The second is to then explain to the public what this new stuff actually is, and applications of mathematics are not necessarily the best way to get people interested. Esoteric subjects such as 17-dimensional manifolds, the Big Bang, quantum theory, catastrophe theory, fractals, the Riemann hypothesis and Fermat's Last Theorem are all examples of complicated abstract mathematics that have captured the public's attention without needing to be reduced to everyday applications.

On the other hand, there are areas of maths that he tries to avoid popularising. Despite this, he was asked to write a book about the famous Millennium Prize Problems, a collection of seven unsolved maths conjectures, with a correct solution to one of them worth $1m. Only one, the Poincaré conjecture, has been solved so far (although the winner, Grigori Perelman, did not claim his prize). One of the problems is the Hodge conjecture, a major unsolved problem in the field of algebraic topology, which asks "which cohomology classes in H^{k,k}(X) come from complex subvarieties Z?".
This is a very technical piece of pure mathematics that is hard enough to explain to your average maths professor, let alone the general public! He believes that some areas of maths are easier to popularise than others, and different techniques can be used to capture different types of audience. History interests a certain class of people, and telling stories about the mathematicians involved often works: who did what, and what sort of person were they (think of the recent films about Alan Turing and Stephen Hawking). And, of course, another good way to interest people is by using links to physics, biology, economics and financial markets (although in the latter case Stewart points out that the global financial crisis shows that these links are not always good advertisements for mathematics).

Perhaps unusually for a modern-day mathematician, Stewart's own mathematical research has been broad, crossing the infamous pure/applied dividing line. He started out his career as a pure mathematician, working in abstract algebra and group theory. However, he later began to work increasingly in applied maths, often in the field of dynamical systems, and produced influential work on animal locomotion and pattern formation. Does he think working in many different areas of maths is something more mathematicians should do? "Well, there are some deep thinkers who stay in one area for seven years, such as Andrew Wiles when he was proving Fermat's Last Theorem, and they think about close and related mathematical ideas to keep it alive. Whereas some people, such as Terence Tao, constantly do fifty pieces of work simultaneously in what appear to be totally unrelated areas of mathematics and so often put them together." He thinks it is good for mathematicians to move out of their comfort zones, as in doing so you often find spin-offs back into your own area, and it helps you avoid getting stuck in a rut. Although he concedes that for some people this doesn't work: "you may end up getting distracted by several things but end up doing nothing!".

What does Stewart think of the relationship between research and popularisation? He believes mathematical research does help with popularising science and maths, and gives authority to his writing. Most people who write about pop science usually work in the field and have developed a skill of being able to write in a way others would understand. But there are exceptions, and there are some excellent journalists who spend a lot of time talking to scientists and trying to understand what is going on; Stewart believes both approaches work. The relationship between popularisation and research can work both ways, however; in fact, he has often found that popularising something has helped his research. "Writing for a different audience makes you rethink everything: often you find that as you try to explain things to an audience who do not understand things perfectly well, you realise you don't understand it as well as you thought. And so you get feedback in both directions." Popularisers also get exposed to different areas of maths that can inspire new research ideas. It was having to review a book on robot locomotion for New Scientist, for example, that got Stewart thinking about how animals walked, spawning a whole new area of mathematical inquiry in which he has subsequently published many papers.

Ian posing in a Chalkdust T-shirt.

Stewart's next book, due out at some point in 2016, is called Calculating the Cosmos, and will look at cosmology and astronomy through the window of maths. He sounds particularly excited about this book, and believes it is potentially his best yet. In 17 Equations that Changed the World he was quite outspoken about cosmologists' current ideas about dark matter, dark energy and inflation. We asked him what his thoughts were now. "I still am outspoken!", he exclaims. There is a large amount of energy missing from the universe, and the dynamics of objects such as galaxies don't seem to agree with our theoretical predictions. To account for this, physicists have to add extra energy components to the universe: dark matter, dark energy and the inflaton. This certainly works: the maths can now accurately describe the dynamics of the universe we observe, although these extra components have never


been directly detected. There are alternatives: one could, for example, modify the laws of gravity on large scales, an approach that Stewart believes shouldn't be neglected. "I think modified gravity is the way forward. I don't think the impetus of dark matter is that strong." Physicists would lead you to believe that "it is dark matter; we know it's there", but Stewart doesn't find the evidence convincing: "there are a lot of neglected alternatives: that's bad."

From his airy office, it was to a more visible sign of the wonders of our universe that Stewart went with his family for Christmas: the northern lights. A well-deserved break for one of mathematics' most famous and best communicators.

Trupti Patel is a PhD student at the London Centre for Nanotechnology at UCL. She works on superconducting quantum interference devices.
Matthew Wright is a PhD student at the Department of Mathematics at UCL. He works on cosmology and modified theories of gravity.

My favourite shape

Everyone loves a good shape. You may think that you learnt all the shapes at primary school, but there are plenty still around that mathematicians find interesting. We have spread some of our favourites throughout this issue. We'd really love to hear about yours! Send them to us @chalkdustmag, and you might just see them on our blog!

Penrose tiles Rudolf Kohulák

My favourite shape is a rhombus that has been split into two pieces called the 'kite' and the 'dart'. These shapes might not look interesting, but the British physicist Roger Penrose discovered an unusual feature of these objects: they can be arranged to cover the whole plane without any gaps or overlaps. However, the resulting image is highly unsymmetrical. For instance, it lacks translational symmetry (ie you cannot shift the pattern such that the result ends up identical to the original picture). The discovery revolutionised the field of crystallography and led to the identification of quasicrystals.






Maths is a fickle world. Stay à la mode with our guide to the latest trends.

Agree? Disagree? @chalkdustmag chalkdustmag

5-minute talks: Show us your coolest stuff and let us chat to you later.

40-minute conference snores: You lost everyone after that entire page of algebra, mate.

2^74,207,281 − 1: The new largest known prime number, found in January 2016!

Maths T-shirts: Show your friends you're better than them by wearing some maths they don't understand. (buy yours at

Maths tattoos

2^57,885,161 − 1: 17½ million digits of insignificance, you useless old largest prime.

Fake proofs of the Riemann Hypothesis: Three cheers for Dr Enoch! We will never get tired of the panicked excitement following one of these.

Volume = pi*z*z*a: Everyone knows you use the letters a for radii and z for heights.

Pizza cutting Want equally-sized pieces but not all with crust? No longer only possible with just 6 or 12 friends!

Counterexamples Please don’t find one of these. It would ruin a lot of PhDs.

More free fashion advice online at

Pictures T-shirt: Wikimedia Commons, user Camisetas. Dr Enoch: Opeyemi Enoch.

e^(iπ) + 1 = SHUT UP



A mathematical view of

voting systems

Philip Halling

Alexander Bolton


 aer I began my undergraduate degree, Barack Obama won the 56th United States presidential election. The next day, my pure maths tutor asked me if I had followed it closely, saying that elections really interested him. I was surprised to hear this: surely the only maths involved was adding up the number of votes? In reality, voting systems hold considerable interest for mathematicians, and there are several mathematical results and theorems concerning electoral processes. The main thing that I like about the language of mathematics is that it allows us to make extremely precise statements without ambiguity and, as you’ll see, we can make precise mathematical statements about voting systems—with some surprising results.

Transitive preferences and voting systems

We define an electorate E to be a vector of voters (v1, ..., vn). Each voter vi has a ranking (with ties allowed) of the vector of candidates C = (c1, ..., cm). We write x ≻i y to mean that voter vi prefers x to y. The ranking is transitive, meaning that if x ≻i y and y ≻i z then we must have x ≻i z. An interesting property of the votes is that if a majority of voters prefer x to y, and a majority prefer y to z, it does not necessarily follow that a majority prefer x to z. A simple example with three voters is x ≻1 y ≻1 z, z ≻2 x ≻2 y and y ≻3 z ≻3 x. I'll use the notation "≻", without the subscript i, to refer to the preferences of a group of voters.
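The three-voter cycle above is easy to verify mechanically. A minimal sketch (ours, not the article's) that counts pairwise majorities:

```python
# Verifying the cyclic-majority example: voter 1 has x > y > z,
# voter 2 has z > x > y and voter 3 has y > z > x.
# Each ballot lists candidates from most to least preferred.
ballots = [["x", "y", "z"], ["z", "x", "y"], ["y", "z", "x"]]

def majority_prefers(a, b, ballots):
    """True if a strict majority of voters rank a above b."""
    wins = sum(1 for bal in ballots if bal.index(a) < bal.index(b))
    return 2 * wins > len(ballots)

print(majority_prefers("x", "y", ballots))  # True: a majority prefers x to y
print(majority_prefers("y", "z", ballots))  # True: a majority prefers y to z
print(majority_prefers("x", "z", ballots))  # False: yet x does not beat z
```

The pairwise majority relation is cyclic even though every individual ballot is transitive.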


A voting system is an algorithm that takes the voters' preferences and outputs a relation between candidates, which we denote by "≻E". So c1 ≻E c2 would mean that the voting system rated c1 as preferable to c2. The relation ≻E is also transitive. Formally, if T(C) denotes the set of total orderings of C then a voting system is a function F : T(C)^n → T(C), where n is the number of voters.

In the case of two candidates, May's theorem (1952) provides some good news. Consider a voting system that satisfies the following sensible conditions:

• Each voter is given the same weight. (For any permutation σ of the voters, F(E) = F(σ(E)).)
• If we relabel the candidates then the result must be similarly relabelled. (For any permutation τ of the m candidates, if C is permuted to become τ(C) then F(E) becomes τ(F(E)).)
• If c1 ≻E c2 or c1 ≈E c2 (the latter representing no preference between c1 and c2) and c1 increases in popularity then c1 must win. (If a voter vi changes their ranking from c2 ≻i c1 to c1 ≈i c2 or c1 ≻i c2, or from c1 ≈i c2 to c1 ≻i c2, then c1 ≻E c2.)

May's theorem tells us that the only voting system satisfying these properties is plurality, where each voter casts a vote for their favourite outcome and whichever outcome is more popular is chosen. So in the case where there are only two options, there is a procedure that accurately reflects the wishes of the electorate.

If there are more than two alternatives then Arrow's theorem (1950) provides a negative counterpart to May's theorem. Consider the following criteria for a voting system:

1. If every voter prefers candidate x to candidate y then x must be ranked higher than y by the voting system (∀E ∀x,y (∀i x ≻i y) =⇒ x ≻E y). If every voter is agreed on the ranking of two candidates, it is essential that the voting system reflects this.

2. Suppose that x and y are candidates and that x ≻E y.
If some of the voters change their preferences for the other candidates (eg candidate z becomes more popular), but each voter preserves their preference between x and y, then x ≻E y must still hold. Consider a group that wants to rank candidates for a job position. They evaluate all the candidates and agree that they prefer candidate x to y. If they then receive new information about z (but no new information about x and y), it seems strange that they should reconsider their group preference x ≻E y.

3. There is no dictator, defined as a single voter vi who can always determine the election (∄vi ∀E ∀x,y (x ≻i y =⇒ x ≻E y)). This is an essential property for a voting system to have!

Perhaps surprisingly, if there are more than two candidates then Arrow's theorem shows that no voting system exists that satisfies all three criteria. Proofs of this theorem usually use contradiction, assuming that the system satisfies the first two conditions and showing that it cannot satisfy the third. Most systems that are used in the world satisfy conditions 1 and 3 but do not satisfy condition 2. Condition 2 is known as independence of irrelevant alternatives.



Plurality (AKA first-past-the-post)

Plurality is a very common voting system. Each voter chooses their favourite candidate, and whichever candidate receives the most votes is declared the winner. Problems with this system are well known, and I will give an example of how it violates independence of irrelevant alternatives. Consider an election with three candidates x, y, and z where the voters can be split into three groups, and their preferences are shown in Table 1. Under plurality, candidate x would win. But if nine or more voters change their preference from x ≻ z ≻ y to z ≻ x ≻ y, then the result is a victory for y. This is in clear violation of independence of irrelevant alternatives, since these voters kept their preference x ≻ y: the situation is exactly as described in the definition.

Anthony Burgess

A British ballot paper, using the plurality, or first-past-the-post, voting system.

This is also a violation of the majority loser criterion, which is as follows: if a majority of voters prefers every other candidate over a given candidate, then the given candidate must not win. Once the nine voters switch their support, there is still a majority that prefers both x to y and z to y.

Number of voters   Preference
110                x ≻ z ≻ y
102                y ≻ x ≻ z
8                  z ≻ x ≻ y

Table 1: Voters' preferences.

In Table 1, candidates x and z are clones, meaning that no voter ranks any other candidate to be between (or equal to) x and z. This definition expands to a multi-member clone set, a strict subset of the candidates with the property that no voter ranks any candidate outside the clone set to be between (or equal to) any members of the clone set. The independence of clones condition requires that removing a clone from a clone set must not improve or harm the ranking of any candidate outside the clone set. Plurality violates this condition because z's removal would hinder candidate y.

There is, therefore, a simple way to affect the outcome of a plurality election in your favour without having to convince anyone else to support you. If you introduce a clone of an opponent then the vote for your opponent may split between your opponent and their clone, meaning that you require fewer votes to win. In practice, this fact is well known, and some people in British elections do not vote for their preferred candidate because they do not want to split the vote against the party they dislike.
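The numbers in Table 1 can be checked in a few lines. This sketch is our illustration (the function name is not from the article); it tallies first preferences and confirms that the nine switchers hand the election to y even though nobody's x-versus-y preference changed:

```python
from collections import Counter

def plurality_winner(ballots):
    """Plurality: whoever tops the most ballots wins."""
    counts = Counter(bal[0] for bal in ballots)
    return counts.most_common(1)[0][0]

before = [["x", "z", "y"]] * 110 + [["y", "x", "z"]] * 102 + [["z", "x", "y"]] * 8
# Nine voters change x > z > y to z > x > y; no x-vs-y preference moves.
after = [["x", "z", "y"]] * 101 + [["y", "x", "z"]] * 102 + [["z", "x", "y"]] * 17

print(plurality_winner(before))  # x
print(plurality_winner(after))   # y

# Majority loser check: a majority still prefers x to y (and z to y), yet y wins.
prefer_x_to_y = sum(1 for bal in after if bal.index("x") < bal.index("y"))
print(prefer_x_to_y, "of", len(after))  # 118 of 220
```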

Alternative vote

On 5 May 2011, the UK voted to keep the plurality system for parliamentary elections, rejecting the alternative vote. The alternative vote is defined as follows: voters rank the candidates, and the initial results are based on each voter's first-preference votes. If a candidate receives more than half of first-preference votes then that candidate wins. Otherwise, the candidate with the fewest first-preference votes is eliminated. Votes for the eliminated candidate are added to the totals of the remaining candidates based on who is ranked next on each vote. If no remaining candidates are ranked on a vote, the voter is assumed to be indifferent between them and the vote is discarded. This process continues until one candidate wins by obtaining more than half the votes.

The alternative vote is, by Arrow's theorem, not independent of irrelevant alternatives, but it is clone-proof, unlike plurality voting. Consider the example above where 9 voters changed their preference from x ≻ z ≻ y to z ≻ x ≻ y. If we run an alternative vote election here, no candidate would have a majority of first-preference votes, so candidate z would be eliminated. Candidate z's 17 votes would then be given to x, giving x a majority. The alternative vote satisfies the majority loser condition (if a majority of voters prefer every other candidate over a given candidate, then the given candidate cannot win), whereas plurality does not.

So is the alternative vote a better system than plurality? It does have some nice properties that plurality does not have, but the opposite is also true. In order for a voting system to be consistent, we require that if we arbitrarily break the electorate into two parts and both parts of the electorate declare the same winner, then an election on the whole electorate should declare the same winner as the sub-electorates. Plurality is a consistent voting system but the alternative vote is not. Consider the example of two sets of voters and their amalgamation in Table 2, from Woodall (1994). For electorate (a), z is eliminated, so x gains 0 votes and y gains 3, making y the winner. For electorate (b), x is eliminated, so y gains 3 votes and z gains 0, making y the winner again.

Australian Electoral Commission

An Australian ballot paper, using the alternative vote system.
In the election with the merged electorate (a) + (b), y is eliminated, so x gains 0 votes and z gains 8 votes, making z the winner.

Voter preference   Voters (a)   Voters (b)   Voters (a) + (b)
x ≻ y ≻ z          6            3            9
y ≻ z ≻ x          4            4            8
z ≻ y ≻ x          3            6            9

Table 2: Voter preferences in two electorates.
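The consistency failure in Table 2 can be replayed with a short alternative-vote implementation. This is our sketch, not code from the article, and it assumes every ballot ranks every candidate (so no ballots are ever discarded):

```python
from collections import Counter

def av_winner(ballots):
    """Alternative vote: repeatedly eliminate the candidate with the
    fewest first preferences until someone holds a strict majority."""
    eliminated = set()
    while True:
        counts = Counter()
        for bal in ballots:
            for c in bal:  # first preference among surviving candidates
                if c not in eliminated:
                    counts[c] += 1
                    break
        top, top_votes = counts.most_common(1)[0]
        if 2 * top_votes > sum(counts.values()):
            return top
        eliminated.add(min(counts, key=counts.get))

a = [["x", "y", "z"]] * 6 + [["y", "z", "x"]] * 4 + [["z", "y", "x"]] * 3
b = [["x", "y", "z"]] * 3 + [["y", "z", "x"]] * 4 + [["z", "y", "x"]] * 6

print(av_winner(a))      # y
print(av_winner(b))      # y
print(av_winner(a + b))  # z: both halves elect y, but the merger elects z
```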


Consistency is an important property for protecting against gerrymandering of electoral districts: if two areas both support the same candidate using the alternative vote, it is possible to merge the two areas and have the newly created area elect a different candidate.

Steven Nass, licensed under Creative Commons CC BY-SA 4.0

How to steal an election by gerrymandering.

A voting system satisfies the monotonicity criterion if the following holds: given an electorate E, if some voters improve their ranking of x without changing the rankings of other candidates to create a new electorate E′, then for any y, x ≻E y =⇒ x ≻E′ y, ie candidate x cannot be harmed if more voters favour x than before. As with consistency, plurality satisfies the monotonicity criterion but the alternative vote does not. Consider the following example from Johnson (2005), shown in Table 3.

Voter preference   Voters   Voters′
z ≻ x ≻ y ≻ w      5        5
y ≻ z ≻ x ≻ w      4        4
y ≻ x ≻ z ≻ w      2        0
x ≻ y ≻ z ≻ w      6        8
w ≻ z ≻ x ≻ y      9        9

Table 3: An electorate showing that the alternative vote is not monotonic.

Using the original electorate, z is eliminated in the first round and z's 5 votes pass to x, so x has 11 votes. Then y is eliminated and all 6 of y's votes go to x because z is eliminated. So x now has 17 votes and w has 9, so x wins a substantial victory with an electorate preference of x ≻E w ≻E y ≻E z. Suppose the two voters with ranking y ≻ x ≻ z ≻ w now change their ranking to x ≻ y ≻ z ≻ w, making x even more popular than before. Then under the new electorate (denoted Voters′ in Table 3), y is eliminated in the first round and y's 4 votes pass to z, so z has 9 votes. Then x is eliminated and all 8 of x's votes go to z. So the final result is that w has 9 votes and z has 17, and the new electorate preference is z ≻E′ w ≻E′ x ≻E′ y. As a result of x's increased popularity, x has been demoted from first choice to third choice! A related but different property to monotonicity is the participation property.
Under the alternative vote, it is possible for a voter to harm their preferred candidate by voting, whilst this is not possible with plurality.

Conclusion

This is only a very brief introduction to this area, and there is much more to be said. What I find engaging is that mathematical devices and constructions such as theorems and counterexamples can be applied to voting systems in order to compare them. We can say with complete confidence that no preferential voting system will ever be devised that satisfies Arrow's three criteria, and we can also make statements about all possible elections run under a voting system, such as "the alternative vote is clone-proof". Many different voting systems are in use around the world, and it is a mathematical certainty that every electoral system lacks some desirable characteristics, so when discussing which system is the "best" it can only ever be a debate about which criteria people think are most important, and that may depend heavily on the context of the election.

Alexander Bolton is a PhD student at Imperial College London, applying Bayesian change point models to detect regimes in multivariate stochastic processes.

References and further reading

Arrow KJ (1950). A difficulty in the concept of social welfare. Journal of Political Economy 58 (4), 328–346.
Johnson PE (2005). Voting Systems. http://
May KO (1952). A set of independent necessary and sufficient conditions for simple majority decision. Econometrica 20, 680–684.
Woodall DR (1994). Properties of preferential election rules. Voting matters 3, 8–15.

My favourite shape


Belgin Seymenoğlu

We can start with just a point which, believe it or not, is already a simplex. Then if we introduce a second point, we can connect the two to get a new shape called a 1-simplex (or a line to you and me). Next, if we take a third point, and connect it to our two other points, we have the 2-simplex, otherwise known as a triangle. But if we then connect our three points in the triangle to yet another new point, we get a three-dimensional shape: the tetrahedron (or the 3-simplex).

What’s more, there is yet another member of the family: a four-dimensional shape. This shape is called the 4-simplex, and it has five vertices. The 4-simplex is useful in population biology because if you have, for example, five different species, you can represent the fractions of each population by plotting a point in the 4-simplex. If that’s not enough for you, you can make a five-dimensional, six-dimensional or even an n-dimensional simplex!
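The population-biology picture is easy to sketch: five species fractions are non-negative and sum to 1, so they are exactly the coordinates of a point in the 4-simplex (the species counts below are invented for illustration):

```python
def simplex_point(populations):
    """Normalise counts to fractions: a point in the (n-1)-simplex,
    since the fractions are non-negative and sum to 1."""
    total = sum(populations)
    return [p / total for p in populations]

# Five hypothetical species counts give a point in the 4-simplex
point = simplex_point([10, 20, 30, 25, 15])
print(point)  # [0.1, 0.2, 0.3, 0.25, 0.15]
assert abs(sum(point) - 1) < 1e-12
```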





Mar 21 - Apr 19

Apr 20 - May 20

May 21 - Jun 20

Today will be very odd for you, even though you are in your prime.

A new mode of transport will give you a greater range. But do not be mean to those you love.

A decision will have a great bearing on your future happiness. It will test your quick reflexes.

You share your sign with:

You share your sign with:

You share your sign with:

Sophie Germain

Carl Friedrich Gauss

Blaise Pascal




Jun 21 - Jul 22

Jul 23 - Aug 22

Aug 23 - Sep 22

An attempt to appear normal will send your life off at a tangent.

An irrational idea will have a deep effect on you. It will multiply and remain irrational.

You will think everything is set, then a member of your group will surprise you with a ring.

You share your sign with:

You share your sign with:

You share your sign with:

Alan Turing

Pierre de Fermat

Bernhard Riemann




Sep 23 - Oct 22

Oct 23 - Nov 21

Nov 22 - Dec 21

Your maximum happiness is unstable. But your minimum point is too!

Something will appear lost in translation. But on reflection all will be clear.

You share your sign with:

You share your sign with:

You share your sign with:

Martin Gardner

Yutaka Taniyama

Ada Lovelace




Dec 22 - Jan 19

Jan 20 - Feb 18

Feb 19 - Mar 20

It will be hard to differentiate friend and foe. Integrating a stranger will help you.

If you divide your problems, the remainder will be manageable.

You can use the eigenvalues of a matrix to diagonalise it!

You share your sign with:

You share your sign with:

You share your sign with:

Isaac Newton

Daniel Bernoulli

Albert Einstein


spring 2016


Menace
The Machine Educable Noughts And Crosses Engine

Oliver Child


The use of machine learning to teach computers to play board games has had a lot of interest lately. Big companies such as Facebook and Google have both made recent breakthroughs in teaching AI the complex board game, Go. However, people have been using machine learning to teach computers board games since the mid-twentieth century. In the early 1960s Donald Michie, a British computer scientist who helped break the German Tunny code during the Second World War, came up with Menace (the Machine Educable Noughts And Crosses Engine). Menace uses 304 matchboxes all filled with coloured beads in order to learn to play noughts and crosses.

How Menace works Menace “learns” to play noughts and crosses by playing the game repeatedly against another player, each time refining its strategy until aer having played a certain number of games it becomes almost perfect and its opponent is only able to draw or lose against it. The learning process involves being “punished” for losing and “re- Donald Mitchie Donald Michie’s original Menace. warded” for drawing or winning, in much the same way that a child learns. This type of machine learning is called reinforcement learning.


The 304 matchboxes that make up Menace represent all the possible layouts of a noughts and crosses board it might come across while playing. This is reduced from a much larger number by removing winning layouts, only allowing Menace to play first and treating rotations and reflections as the same board.

Each of these layouts would be represented by a single matchbox, as they are all either rotations or reflections of one another.

Each of these matchboxes contains a number of coloured beads, each colour representing a valid move Menace could play for the corresponding board layout. The starting number of beads in each matchbox varies depending on the number of turns that have already been played. In Donald Michie’s original version of Menace, the box representing Menace’s first turn had four beads for each different move. The boxes representing the layouts of the board for Menace’s second turn contained three beads for each different move; there were two beads each for Menace’s third; and one of each in the boxes representing Menace’s fourth go. There are no boxes representing Menace’s fifth move as there is only one space remaining and Menace is forced to take it.

Coloured beads representing possible moves.

To speed up the learning process even more, only beads representing unique possible moves are used. So even though all the spaces are free on an empty board, only three need to be represented: centre, side and corner. All other positions are equivalent to one of these three. When Menace makes its move one must find the box representing the current board layout and take a bead at random from that box. The bead represents the space in which Menace wishes to place its counter. This process is repeated every time it is Menace’s go until either somebody wins or the board is filled.

The beads shown are the only ones required for each scenario: all other positions on the board are equivalent to one of the positions marked.

After having completed a game, Menace is punished or rewarded depending on the outcome. If Menace lost, the beads representing the moves Menace played are removed. If it was a draw, an extra bead of the colour played is added to each relevant matchbox; while if Menace won, three extra beads are added. This means that if Menace played badly, it will have a smaller chance of playing the same game next time. However, if Menace played well, it is more likely to follow the same route the next time and win again.
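The reward scheme above translates directly into a short simulation. This is a minimal sketch of my own (not Michie's hardware or the author's script): every box starts with four beads per free space rather than the 4/3/2/1 schedule, boards related by rotation or reflection are treated as distinct for simplicity, and the opponent plays uniformly at random.

```python
import random

random.seed(0)  # reproducible training run

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def draw_bead(boxes, board, initial_beads=4):
    """Find (or create) the box for this layout and draw a bead at random."""
    key = "".join(board)
    if key not in boxes or not boxes[key]:
        # New box, or one that ran dry (where Menace would "resign"):
        # (re)fill it with beads for every free space
        free = [i for i, s in enumerate(board) if s == " "]
        boxes[key] = free * initial_beads
    return key, random.choice(boxes[key])

def play_and_learn(boxes):
    """One game against a random opponent, then reward or punish Menace."""
    board, history = [" "] * 9, []
    for turn in range(9):
        if turn % 2 == 0:                  # Menace plays O and goes first
            key, move = draw_bead(boxes, board)
            history.append((key, move))
            board[move] = "O"
        else:                              # opponent plays X at random
            free = [i for i, s in enumerate(board) if s == " "]
            board[random.choice(free)] = "X"
        if winner(board):
            break
    result = winner(board)
    for key, move in history:
        if result == "X" and move in boxes[key]:
            boxes[key].remove(move)        # loss: one bead removed per box
        elif result == "O":
            boxes[key].extend([move] * 3)  # win: three extra beads
        elif result is None:
            boxes[key].append(move)        # draw: one extra bead

boxes = {}
for _ in range(2000):
    play_and_learn(boxes)
print(len(boxes), "board layouts encountered")
```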



An example game

When playing against Menace, Menace always starts, otherwise the number of matchboxes needed would be greatly increased.

1st move, Menace’s turn: The operator finds the matchbox that displays the empty board, opens it and takes out a random bead. In this case the random bead is red, which means that Menace wants to place its counter in the centre-top space.

2nd move, player’s turn: The human player places the counter in the desired spot, which is in the centre.

3rd move, Menace’s turn: The operator finds the matchbox that displays the current board layout, opens it and takes out a random bead. The random bead is now blue, which means that Menace wants to place its counter in the top-left corner.

4th move, player’s turn: The human player places the counter in the desired spot, here blocking Menace from getting three in a row.

5th move, Menace’s turn: The operator finds the matchbox that displays the current board layout, opens it and takes out a random bead. The random bead is green, which means that Menace wants to place its counter in the middle row and left-most column.

6th move, player’s turn: The human player places the counter in the bottom-left corner and obtains three in a row. The human player wins!

With the game over, Menace, since it lost, must be punished. Hence every bead that represented a move played is removed from its corresponding box, as shown below.



Building Menace I built a physical implementation of Menace so that I could play against the real thing myself. I wanted to create an experience similar to that which Michie must have had when he came up with the idea. What is most striking is the time it takes to play one game: finding the right box, taking a bead out and then placing Menace’s piece in the corresponding space on the board is very time-consuming.

To make things faster I therefore decided to write a Python script to work in the same way that Menace does, with the computer doing all the work for me. Using the script, I could also save the state of the boxes and try out new reward systems and bead arrangements without losing the results from a previous set of data. On top of this, I could now train Menace by playing it against already existing programmed strategies for noughts and crosses. I first had it play hundreds of games against a program that placed its counters randomly. You could see that after having played a certain number of games Menace had evolved. But when I played against Menace afterwards, it would sometimes make irrational moves. When Menace played against the random strategy, the random strategy would often place its counter somewhere other than where it could easily block Menace, letting Menace win and therefore reinforcing the sequence of moves that had been played. These moves, however, might not necessarily have been good ones: Menace might just have got lucky.

I also wrote a program for a perfect strategy for noughts and crosses to train Menace against. This strategy only lets another player draw or lose against it. What struck me when training Menace against this program was the number of times it ran out of beads in certain boxes, having lost so many times that all of the beads from a particular box were removed. If there are no beads left in the box representing the current scenario, Menace is said to “resign” as it doesn’t “think” it can win in its current situation. This is OK if it’s a box representing a move that you rarely come across, but when it’s the box representing the empty board, the box that Menace always starts off with, there’s a problem! To fix this, all I needed to do was to add more beads to this box, although this had the consequence that it takes Menace longer to learn.

Graphs showing Menace against a perfect strategy. The y-axis shows the number of games Menace has drawn minus the number it has lost. On the right, we can see that after having added beads to the starting state of Michie’s original Menace, it takes longer to draw the same number of games as it has lost (cross the x-axis).

Machine learning is being used in many fields today including in the work of technology giants such as Facebook and Google, whom we have already mentioned have made breakthroughs in the more complex game of Go. The AI programs learning to play Go are not saving all the possible layouts of the game as we have done with noughts and crosses, as there are more of these than there are atoms in the universe. Instead, these programs detect similar patterns and use them as a starting point for their learning. However, the programs are then taught in a similar way to the way we taught Menace to play against different strategies, although the programs learning Go play against themselves repeatedly. So ideas that emerged from simple machines in the past are still being used today for much more complicated tasks.

Oliver Child is a high school student living in Brussels who is interested in all kinds of maths and computing. Oliver was first introduced to Menace by Matthew Scroggs, who has made a version of Menace that you can play against at



Moonlighting agony uncle Professor Dirichlet answers your personal problems. Want the Prof’s help? Contact

Dear Dirichlet,

A new colleague at work is deaf, and I’m trying to learn BSL to communicate with her. I have commenced some classes at the local community college, but every time I converse with my hands, she cocks her head to the side. Is this a social cue I’m missing?

— All thumbs, Royal Leamington Spa


DIRICHLET SAYS: An interesting dilemma... but I think I’ve heard of this problem before. It sounds like your college is teaching you cosine language. Your friend has obviously worked out that rotating by π/2 allows her to understand you, as sin(x) = cos(x − π/2). Alternatively, your new colleague is a dog.

Dear Dirichlet,

As part of my new year’s resolutions, my partner and I are trying to lose weight. Part of our regime is to bring packed lunches in to work every day, instead of purchasing cooked lunches from the canteen. We have brought back a recipe book from our amazing holiday in Barcelona, and have been enjoying mixtures of small meals from there. Despite this, and to my surprise, over the last month we’ve actually put on half a stone! Do you know any dieting tips that can help?

— Wishful shrinking, York

DIRICHLET SAYS: Sounds like you’re suffering from a bad case of

buy-no-meal expansion. I must warn you though: the growing body of research on this subject suggests that this phenomenon might be exacerbated by your choice of eating combinations of small Spanish dishes. My recommendation is that the diet, for both you and your partner, will be more coefficient if you avoid consuming too many ta-pascal-ories.



Dear Dirichlet,

My brother has travelled to Wellington for a few months while he works for a client. As we usually see him most weekends, I think the kids will miss him a lot. I know we could always videocall, but is there an easier way for us to see him?

— Torquil Farquhar de Smith, Fraochy Bay, Scotland

DIRICHLET SAYS: You are fortunate that he has moved to New Zealand.

Pop into your local Waterstones and pick up an antipodal map (shouldn’t be more than £10). If you take a short walk to northern Spain (about 500 miles, and 500 more), you should be able to apply the map there. Bob’s your uncle: in the projection space you’ll be together. Warning: this projection ends up flooding most of the world.

Dear Dirichlet,

I have lived next door to my neighbour Ian for a few years now, and we have always got on well. But two months ago, he bought a new BMW and since then has relentlessly commented on the state of my aging hatchback. I’d like a way to tell him to stop his mocking, as it’s making me miserable. Any tips for a tactful response?

— Plotting something, West Ruislip

DIRICHLET SAYS: Clearly you’ve reached the point where you need to draw the line and coordinate an attack on this unfriendly fellow. Take some axes to his right hand while shouting “SPATIAL DELIVERY”. That ought to be plain enough for Car-tease-Ian. (Apologies for the graphic nature of my response.)

Dear Dirichlet,

I am a badger. Last night I spent a very cold night in the forest, as I arrived home to find I couldn’t get in. I would like to avoid this happening again. What should I do?

— Badger, Hemel Hempstead

DIRICHLET SAYS: Your sett A will be open if ∀x ∈ A, ∃ε > 0 s.t. Bε(x) ⊂ A.



More Dear Dirichlet, including two exclusive collections, online at



Fractional calculus
The calculus of witchcraft and wizardry

Image: Yusuke Kawasaki, licensed under Creative Commons CC BY 2.0

Rafael Prieto Curiel


Differentiating a function is usually regarded as a discrete operation: we use the first derivative of a function to determine the slope of the line that is tangent to it, and we differentiate twice if we want to know the curvature. We can even differentiate a function negative times—ie integrate it—and thanks to that we measure the area under a curve. But why stop there? Is calculus limited to discrete operations, or is there a way to define the half derivative of a function? Is there even an interpretation or an application of the half derivative?

Fractional calculus is a concept as old as the traditional version of calculus, but if we have always thought about things using only whole numbers then suddenly using fractions might seem like taking the Hogwarts Express from King’s Cross station. However, fractional calculus opens up a whole new area of beautiful and magical maths.

How do we interpret the half derivative of a function? Since we are only halfway between the first derivative of a function and not differentiating it at all, then maybe the result should also be somewhere between the two? For example, if we have the function e^{cx}, with c > 0, then we can write any derivative of the function as

D^n e^{cx} = c^n e^{cx},

which works for n = 1, 2, …, but also works for integrating the function (n = −1) and doing nothing to it at all (n = 0). As an aside, the symbol D^n might seem like a weird notation to represent the nth derivative, or D^{−n} to represent the nth integral of a function, but it’s just an easy way to represent derivatives and integrals at the same time. So if we have a real number ν, why not express the fractional derivative of e^{cx} as

D^ν e^{cx} = c^ν e^{cx}?

With this new derivative we know that if ν is an integer then the fractional expression has the same result as the traditional version of the derivative or integral. That seems like an important thing, right? If we want to generalise something, then we cannot change what was already there. If the fractional derivative is a linear operator (ie if a is a constant then D^{1/2} af(x) = a D^{1/2} f(x)), then we would also obtain that

D^{1/2}[D^{1/2} e^{cx}] = D^{1/2}[c^{1/2} e^{cx}] = c^{1/2} D^{1/2}[e^{cx}] = c e^{cx},

so half differentiating the half derivative gives us the same result as just applying the first derivative. In fact for this very first definition of a fractional derivative, we get that

D^ν[D^μ e^{cx}] = D^{ν+μ} e^{cx} = c^{ν+μ} e^{cx}

for all real values of ν and μ. Great: our fractional derivative has at least some properties that sound like necessary things. Differentiating a derivative or integrating an integral should just give us the expected derivative or appropriate integral.

This way of defining a fractional derivative for the exponential function is perhaps a good introductory example, but some important questions need to be asked. Firstly, is this the only way to define the half derivative for e^{cx} such that it has the above properties, or could we come up with a different definition? Secondly, what happens if c < 0? For example, with c = −1, we would get that D^{1/2} e^{−x} = i e^{−x}, which is imaginary. So the fractional derivative of a real-valued function could be complex or imaginary? That sounds like dark arts to me. And finally, how does that D^ν f(x) work if we are not talking about the exponential function, but if we have a polynomial or, even simpler, a constant function, like f(x) = 4?
If we start with f(x) = 4 (a boring, horizontal line), we know that its first derivative is D^1 f(x) = 0, so should the half derivative be something like D^{1/2} f(x) = 2? Then, if we half differentiate that expression again, we obtain a zero on the left-hand side (since D^1 f(x) = 0) and the half derivative of a constant function (in this case g(x) = 2) on the right-hand side. But this is certainly not right! We said that the half derivative of a constant function is half the value of that constant, but now we obtain that it is zero! There is nothing worse for a mathematician than a system that is not consistent.

The best way is to begin with a more formal definition. Perhaps after having to integrate a function thousands and thousands of times, Augustin-Louis Cauchy discovered in the 19th century a way in which he could write the repeated integral of a function in a very elegant way:

D^{−n} f(x) = 1/(n − 1)! ∫_a^x (x − t)^{n−1} f(t) dt.

Not only is this a beautiful and simple formula, it also gives us a way to write any iterated integration (although to actually solve it, we would usually need to do some not-so-beautiful integration by parts). So why don’t we just change that number n to a fraction, like 9¾? Everything in that expression would work smoothly… except for that dodgy factorial! The value of n factorial (written as n!, possibly the worst symbol ever used in maths since now we cannot express a number with surprise) is the product of the numbers from 1 through to n, ie 1 × 2 × · · · × n. What, then, would the factorial of 9¾ be? Maybe close to 10! but not quite there yet?

Luckily for us, an expression for the factorial of a real number has intrigued mathematicians for centuries, and brilliant minds like Euler and Gauss, amongst others, have worked on this issue. They defined the gamma function, Γ, in such a way that it has the two properties we need: first, Γ(n) = (n − 1)!, so we can use Γ(n) instead of the factorial. Second—and even more importantly, given that we are dealing with fractions here—is that the function is well-defined and continuous for every positive real number, so we can now compute the factorial of 9¾, which is only 57% of the value of 10!. Now, we can write the repeated integral as

D^{−ν} f(x) = 1/Γ(ν) ∫_a^x (x − t)^{ν−1} f(t) dt,

which gives the same result as before when ν is a positive integer and is well-defined when ν is not an integer. The integral above is known as the fractional integral of the function f. Awesome!

Graph of the gamma function Γ(x).
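Both properties of Γ can be checked with the gamma function in Python's standard library (a quick numerical sketch):

```python
from math import gamma, factorial

# First property: the gamma function extends the factorial, Γ(n) = (n - 1)!
for n in range(1, 10):
    assert round(gamma(n)) == factorial(n - 1)

# Second property: Γ is defined for non-integer arguments, so the
# "factorial" of 9¾ is Γ(10.75), a little over half the value of 10!
print(gamma(10.75) / factorial(10))
```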

What happens if we take the derivative of the repeated integral? Easy! We get the fractional derivative, right? Not quite, since we have two options: we could either differentiate the original function first and then take the fractional integral, or we could fractionally integrate first and then take the derivative. Damn! Both definitions are equally valid and we mathematicians hate having two definitions for the same thing. But are they even the same thing? If we differentiate first and then take the fractional integral—known as the Caputo derivative—we don’t necessarily get the same result as if we fractionally integrate a function first and then take its derivative. The latter is called the Riemann–Liouville derivative, or simply the fractional derivative, since it is the one more frequently used. As an example, let’s look at the 9¾ derivative of a polynomial, say f(x) = x^9. The 9¾ Caputo derivative is zero, since we first differentiate x^9 ten times, which is zero, and then integrate it; but


the Riemann–Liouville derivative is

D^{9¾} f(x) = (9!/Γ(¼)) x^{−3/4},

which is clearly different to the Caputo derivative. Two things are to be noted here. The fractional part is only contained in the integral, so in order to obtain both of the 9¾ derivatives of a function we need to quarter integrate the tenth derivative or differentiate the ¼ integral ten times. Also, and very importantly in fractional calculus, the fractional integral depends on its integration limits (just as in the traditional version of calculus) but since the fractional derivative is defined in terms of the fractional integral, then the fractional derivatives also depend on the limits. There are many applications of fractional calculus in, for example, engineering and physics. Interestingly, most of the applications have emerged in the last twenty years or so, and it has allowed a different approach to topics such as viscoelastic damping, chaotic systems and even acoustic wave propagation in biological tissue. Perhaps fractional calculus is a bit tricky to interpret, seeming at first to be a weird generalisation of calculus but for me, just thinking about the 9¾ derivative of a function was like discovering the entry into a whole new world between platforms 9 and 10. Certainly, there is some magic hidden behind fractional calculus!

The fractional derivatives D^ν x^9, with ν between 0 and 10, and x between 0 and 1.

Rafael Prieto Curiel is doing a PhD in mathematics and crime. You can follow him on Twitter @rafaelprietoc or visit his blog

Chessboard Squares It was once claimed that there are 204 squares on a chessboard. Can you justify this claim? Source: Thinking Mathematically by John Mason, Leone Burton & Kaye Stacey Answers at
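One way to justify the claim: for each k from 1 to 8, a k×k square can sit in (9 − k)² positions on an 8×8 board, so we sum those counts (a quick sketch):

```python
# Number of k×k squares on an 8×8 board is (9 - k)², so sum over all k
total = sum((9 - k) ** 2 for k in range(1, 9))
print(total)  # 204
```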




You can count on Dirichlet

James Cann


In the days before email, mathematicians relied upon pen, paper and the postman to share ideas and communicate fiendish numerical taunts. An excited Dirichlet wrote to Kronecker in 1858:

“... that sum, which I could only describe up to an error of order √x at the time of my last letter, I’ve now managed to home in on significantly.”∗

Dirichlet’s sum is associated to the divisor function d(n): the number of distinct divisors of the natural number n. The number six is divisible by 1, 2, 3 and 6, so d(6) = 4. Four has divisors 1, 2 and 4, and so d(4) = 3. It might seem odd that a great mathematician was troubled by a quantity whose description amounts to a good grasp of ‘times tables’—still odder that he was confessing to a distinguished contemporary that the damned thing had caused him grief. It is not hard (although perhaps time-consuming) to work out the value of d(n) for any given natural number n. But what if we want to provide a formula to describe it? Where to begin?

A plot of the divisor function, d(n).

∗ Translated from the original using the author’s indelicate German.



Mean values

The trouble is, the divisor function is not at all well-behaved. In fact, it jumps significantly in value infinitely often. The adjacent integers d(200560490130) = 2^11 = 2048 and d(200560490131) = 2 provide one such example, but you can happily construct as large a jump as you fancy. Try it.
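A trial-division sketch confirms the small values and the giant jump: 200560490130 is the product of the eleven primes up to 31, so it has 2^11 = 2048 divisors, while its neighbour is prime:

```python
def d(n):
    """Number of divisors of n, by trial division up to sqrt(n)."""
    count, k = 0, 1
    while k * k <= n:
        if n % k == 0:
            count += 1 if k * k == n else 2
        k += 1
    return count

print(d(6), d(4))  # 4 3
# The adjacent pair from the text: a huge jump in value
print(d(200560490130), d(200560490131))  # 2048 2
```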

The divisor function is not at all well-behaved, jumping significantly in value infinitely oen.

The divisor function is one of many interesting arithmetic functions; understanding their asymptotic behaviour as n → ∞ is a key question in multiplicative number theory. However, as we saw above, the value of these functions can be erratic. To try to smooth this wildness, you could instead consider certain types of ‘average’. Dirichlet’s sum was one such average. He began by accumulating a sum from the divisor function on adjacent naturals:

D(x) = d(1) + d(2) + · · · + d([x]) = Σ_{n ≤ x} d(n),

where [x] indicates the largest integer less than or equal to x. Upon dividing the sum D(x) by x, we get the mean value of the divisor function up to x. Taking the mean has the effect of ‘smoothing out’ the jumps in the divisor function. But just how well-behaved is this average? Do we get some sort of regular behaviour as x becomes very large?

Plots of the sum of the divisor function, D(x) (left), and its mean value, D(x)/x (right).




Hyperbolæ

Estimating the divisor mean comes more naturally if we take a different perspective on the divisor function. Instead of viewing d(n) as the number of divisors of n, we think of it as the number of natural-valued pairs (a, b) such that n = ab. The number 6 may be written 1 × 6 = 2 × 3 = 3 × 2 = 6 × 1, giving us the four pairs (1, 6), (2, 3), (3, 2) and (6, 1). If we wanted to see the divisors, we could just look at the first number in each pair. For 4 we have the three pairs: (1, 4), (2, 2) and (4, 1); again showing d(4) = 3. Why not treat these natural-valued pairs, from here on called lattice points, as coordinates in the plane? The question of counting the number of divisors of n turns into that of counting the lattice points lying on the hyperbola XY = n. If n is not a natural number, then the hyperbola contains no lattice points. If n is a natural, then the hyperbola contains exactly d(n) lattice points. Let us picture the XY = 1 hyperbola given by setting n = 1. If we now let n be real-valued and increase it continuously, the hyperbola sweeps through the upper quadrant of the plane. The only time that the curve passes through lattice points is when n hits a natural number.



The hyperbolæ XY = 1 and XY = n (imagine increasing n), with the point (√n, √n) marked.

By the reasoning above d(1) + d(2) + · · · + d([x]) is a count of all of the pairs that the hyperbola sweeps through as we run n from 1 through to x. Dirichlet’s sum D(x) counts the number of lattice points lying below the hyperbola XY = x. Perhaps the simplest way of summing up these lattice points is to group them into vertical towers. For each natural-valued X-coordinate we look vertically, counting [x/n] for the number of lattice points lying directly above X = n, but below the hyperbola. So

D(x) = Σ_{n ≤ x} [x/n].



Keeping track of errors: big-O notation

Our count above is still exact and depends upon the precise value of the divisor function, which we know to jump suddenly in value. If we are happy to settle for trying to determine the dominant behaviour of D(x), then we can afford to be a little less precise: allowing for some small deviation


can make it easier to provide a more useful formula whose value is easy to calculate. This deviation from the precise value of the function is usually called error, and provided that we don’t introduce too much, we can get a good idea of the dominant behaviour. For instance, say we want to get an idea of the size of the function f(x) = [x^2] for large x. It makes sense to resign ourselves to an error of at most 1, taking just the larger and larger x^2 term. To keep track of our neglected error we write f(x) = [x^2] = x^2 + O(1). The symbol O(1) means that we are allowing an error that is at most constant with respect to x.

If our function were more complicated, we might wish to keep track of an error that varies with respect to x, instead of remaining constant. Big-O notation gives us this flexibility: we write r(x) = O(g(x)), for some positive function g(x), to mean that there is a constant K and a positive number x₀ such that whenever x > x₀ we have that |r(x)| ≤ Kg(x).

Let us lend flavour to this slightly knotty definition. For the longevity of this article and for our younger readership, we define an analogue radio to be an obsolete cuboidal object with a knob, which ingests radio waves and—subject to correct knob position and alignment of the clouds—excretes a melody of questionable taste. One day—not too cloudy—you twizzle the knob and your favourite tune blares out. At least, you think it is your favourite tune… There is some distractingly apparent white noise, rendering it a challenge to make out the precise melody. You try turning up the volume, but this only helps up to a certain point. From some volume onwards, the white noise remains proportionally loud. We can return to big-O notation by thinking of the x variable in the definition above as the volume of the radio’s excretion, the white noise as the error r(x) from your favourite tune. Then r(x) = O(g(x)) is to say that: whenever the volume x is greater than some x₀, the radio’s deviation r(x) from your favourite tune is at most proportional to some function g(x).

Now for a more mathematical example: f(x) = x^7 + 5x + sin x. If x is large then x^7 is very large. If you were to plot a graph of f(x), the 5x and sin x terms would seem less and less significant. We could make use of big-O notation to indicate the dominant behaviour: f(x) = x^7 + O(x). For Dirichlet’s sum we can now write

D(x) = Σ_{n ≤ x} (x/n + O(1)) = x Σ_{n ≤ x} 1/n + O(x).    (3)

The sum Σ_{n ≤ x} 1/n is (up to a constant) the same as the integral of 1/t over the interval [1, x]. This integral is log x. The constant in question approaches the Euler–Mascheroni constant, γ ≈ 0.57, at a rate of O(1/x). As x gets large this error vanishes, so that the sum Σ_{n ≤ x} 1/n is close in value to log x + γ. Substituting into (3) gives

D(x) = x(log x + γ) + O(x) = x log x + O(x).


Phrased differently, the mean value of the divisor function does behave well for large x: it grows like log x with some fixed constant error:

(1/x) ∑_{n≤x} d(n) = log x + O(1).    (5)

spring 2016


Exploiting symmetry

This was not good enough for Dirichlet. He wanted to pin down the constant—that number floating around in the O(1) envelope above. With some rather clever counting he exploited a natural symmetry of the hyperbola XY = x so as to make fewer than the x lots of O(1) approximations we made in (3). If we swap the X and Y axes the hyperbola is left unchanged: there is a reflective symmetry about the line X = Y. Counting the lattice points lying below the hyperbola but vertically above the X-axis coordinates 1, 2, …, [√x] is exactly the same as counting the lattice points lying below the hyperbola but horizontally to the right of the Y-axis coordinates 1, 2, …, [√x]. Combined, these counts cover all of the lattice points under the hyperbola, except that we have over-counted the [√x] × [√x] lattice points lying in the square below the hyperbola which is sent to itself under the symmetry. In symbols:

D(x) = 2 ∑_{n≤√x} [x/n] − [√x]².

By symmetry, the pink and cyan areas contain the same number of points.

Arithmetic along the lines of (3) transforms this into:

D(x) = x log x + (2γ − 1)x + O(√x).

Dirichlet had pinned down the constant. Upon dividing both sides above by x, the O(√x) turns into O(1/√x), which vanishes as x becomes large. We have shown that the mean value of the divisor function d(n) taken over n ≤ x, for large x, is log x + 2γ − 1.
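Dirichlet's identity and the resulting approximation are easy to test on a computer. A Python sketch (variable names and the test value of x are our own):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def D_direct(x):
    """D(x) = sum of d(n) for n <= x, i.e. the lattice points under XY = x."""
    return sum(x // n for n in range(1, x + 1))

def D_hyperbola(x):
    """Dirichlet's symmetric count: 2 * sum_{n <= sqrt(x)} [x/n] - [sqrt(x)]^2."""
    r = math.isqrt(x)
    return 2 * sum(x // n for n in range(1, r + 1)) - r * r

x = 10_000
exact = D_direct(x)
assert D_hyperbola(x) == exact                 # the identity holds exactly
approx = x * math.log(x) + (2 * GAMMA - 1) * x
assert abs(exact - approx) < 10 * math.sqrt(x)  # error well inside O(sqrt(x))
```

Note that the hyperbola version only sums up to √x, so it is also dramatically faster to compute for large x.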

Dirichlet’s divisor problem Well, I feel content. Don’t you? But not Dirichlet. Never Dirichlet. Despite a decent amount of work, a rather sneaky argument and answering all of the questions that he had asked, he wanted √ to further dissect the O( x). He was convinced that he was being far more imprecise than he needed to be: like approximating [x 2 ] by x 2 + O(x), when in fact we can be sure that it is x 2 + O(1). Writing Δ(x) = D(x) − x log x − (2γ − 1)x, our efforts so far have shown that Δ(x) = O(x1/2 ). In his leer to Kronecker, Dirichlet hinted that he could replace O(x1/2 ) by something significantly more precise. Nothing in all of Dirichlet’s


chalkdust notes has been found to back up this claim. Perhaps it was a simply a taunt aimed at Kronecker? Perhaps he made a mistake. In any case it gave birth to the Dirichlet divisor problem:

Determine the smallest value α such that Δ(x) = O(x^(α+ε)) as x → ∞ holds true for all ε > 0.

The "ε condition" is a neat way of saying that we are only interested in determining the smallest possible α up to a fixed power of log x. The logarithm grows more slowly than any positive power of x, so for instance x²(log x)⁷ = O(x^(2+ε)) for every ε > 0.
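That the logarithm loses to any positive power of x can be seen numerically. A Python sketch (the choice ε = 0.1 is arbitrary; any positive ε would do, the ratio just peaks later):

```python
import math

# x^2 (log x)^7 = O(x^(2+eps)) for any eps > 0: equivalently,
# (log x)^7 / x^eps -> 0.  With eps = 0.1 the ratio first grows,
# peaking near x = e^70, then dies away to zero.
def ratio(x, eps=0.1):
    return math.log(x) ** 7 / x ** eps

assert ratio(1e10) < ratio(math.exp(70)) > ratio(1e300)
assert ratio(1e300) < 1e-9  # eventually the power of x wins
```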

The other end up

The first fifty years after Dirichlet's letter saw little improvement. Progress was slow and the going was tough. Together with Littlewood, Hardy decided to take a different tack to the 'O' approach. Noting the difficulty of containing the error in a big-O envelope, their idea was to "attack the problem from the other end", introducing the notion of 'Ω' results. They sought to display positive functions g(x) for which there exists a constant K such that the multiple Kg(x) is exceeded by |Δ(x)| for arbitrarily large values of x. By finding such a g of as large an order of magnitude as they could, they were proving that the divisor sum error Δ(x) was at best O(g(x)). In this way, they could install the lower bound of an interval for possible α. The year 1915 saw Hardy prove an 'Ω' result showing that α ≥ 1/4. When read alongside Dirichlet's hyperbola result we surmise that α ∈ [1/4, 1/2].

Hot oscillations

Since around 1922, the sharp end of research has involved playing with a daunting representation of the error:

Δ(x) = (x^(1/4) / (π√2)) ∑_{n=1}^∞ (d(n) / n^(3/4)) cos(4π√(nx) − π/4).    (6)

Deriving such an expression is no mean feat. As illustration: we can encode the divisor function in a Dirichlet series, which turns out to equal the square of the Riemann zeta function. As a meromorphic function, the techniques of complex analysis can be put to work. Mellin transforms, asymptotic formulæ for Bessel functions and contour integration all feature in the derivation of the equation for Δ(x) above. Because of the oscillatory nature of the cosine, researchers have used this handy expression to provide both lower bounds and upper bounds for α, turning that infamous phrase "it's not the size, it's what you do with it" on its head. Here, it's what you do with it that indicates its size: 'Ω' or 'O'.


In the form above, the representation is not very useful. Far better is a truncation of the sum at some carefully chosen n = N(x), together with a big-O error for the remainder of the terms. There is an art to deciding the truncation point so as to reveal the dominant behaviour of the expression whilst keeping the error in check.

Closing in on α

Where does the problem stand? Thanks to Hardy we have a lower bound α ≥ 1/4. In fact, from computational evidence and heuristic arguments this is expected to be exactly on the money—we believe that α = 1/4. If true, the first person to show an upper bound that is also 1/4 will have solved Dirichlet's divisor problem. However, 150 years of dedicated work on the upper bound can be read in the decelerating sequence of improvements: 1/2, 1/3, 33/100, 27/82, 15/46, 12/37, 346/1067, 35/108, 7/22. The current best has stood since 2003, when a contribution by Martin Huxley showed that α ≤ 131/416 ≈ 0.3149.

The upper bound (blue) on α, slowly getting closer to 1/4.


We have confined α to the interval [0.25, 0.3149]. At the current rate of progress, we are still a long way from squeezing this interval down to a single number, thus pinning down the precise value of α. Well over a century on, Dirichlet's challenge is still to be met.

James Cann is a PhD student at UCL and is attached to the London School of Geometry & Number Theory. Reach him by post at 'UCL Department of Mathematics', or @jame5cann

My favourite shape

Sierpinski triangle Nikoleta Kalaydzhieva

My favourite shape is the Sierpinski triangle. It is one of the most basic fractal shapes, but appears in various mathematical areas. What I find fascinating about it is how many different ways there are to construct it. For example, you could use a methodical geometric approach, inscribing a similar triangle in the original one via its midpoints and iterating. Another, more intriguing, construction is via the chaos game. You can even construct it using basic algebra, by shading the odd numbers in Pascal's triangle.
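The Pascal's triangle construction takes only a few lines of code. A sketch in Python (the function name is our own):

```python
def sierpinski(rows):
    """Shade the odd entries of Pascal's triangle row by row:
    the Sierpinski triangle pattern emerges."""
    row, picture = [1], []
    for _ in range(rows):
        picture.append("".join("#" if v % 2 else " " for v in row))
        # build the next row of Pascal's triangle
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return picture

for line in sierpinski(16):
    print(line)
```

Printing 2^k rows (16, 32, …) shows the pattern at its cleanest, since those rows are entirely odd.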



Have you been taking your weekly dose of chalkdust?

Can computers understand images?

How can you tile a chessboard?

Will you win the lottery?

Is there a perfect maths font?

Is there anybody out there?

How does traffic jam?

Why is it hard to pull books apart?

Can you do sums with old money?

All these, and more, have been answered on our weekly blog, covering topics from cosmology to sport, fluid dynamics to sociology, history to politics, with a generous sprinkling of puzzles. Read it every week and sign up to our monthly newsletter at



Emma Bell


Most initial thoughts when the name Fibonacci is mentioned centre around sequences, rabbits, nature and spirals. However, the Fibonacci legacy is much more fundamental to modern scientific studies, and without his influence, mathematics—as we know it—would not exist.

Leonardo, of the family of Bonacci, was born in Pisa, Italy, in around 1170. It wouldn't be until the French mathematician Édouard Lucas wrote extensively about the 1, 1, 2, 3, 5, 8, … sequence in 1877 that the "Fibonacci sequence" would become well known. Leonardo's father was a successful merchant and customs officer, travelling around the Mediterranean with his family in tow. Throughout his childhood, the young Fibonacci was encouraged by his father to study calculation. It was during a stay in Algeria that he became hooked on maths due to the "marvellous instruction" of his tutor.

The famous Fibonacci spiral.

In 1202, Leonardo published Liber Abaci—The Book of Calculation. This was the era of the Fourth Crusade, Genghis Khan and Eleanor of Aquitaine. There are no original copies of the book known to exist now, and only 3 copies of the second edition, printed in 1228, are left—these reside in Italy and the Vatican. The book was an instruction manual for merchants and tradesmen. It contained worked problems with abstract ideas hidden within everyday examples of profit margins, currency conversions and


tax: a 13th century "functional maths" course of great use to commercial tradesmen! One such problem is the following about birds:

A certain man buys 30 birds which are partridges, pigeons, and sparrows, for 30 denari. A partridge he buys for 3 denari, a pigeon for 2 denari, and 2 sparrows for 1 denaro, namely 1 sparrow for 1/2 denaro. It is sought how many birds he buys of each kind.

The solution to this problem can be found at the end of this article. While it may seem unremarkable to our modern eyes, this bird problem beautifully illustrates Fibonacci's legacy to mathematics: Liber Abaci introduced the western world to Hindu–Arabic numerals.

Engraving of Leonardo of Pisa.

Leonardo’s book encourages and advocates the use of novem figuras indorum—nine Hindu figures—and tells the reader that with the numerals 9 to 1, along with the sign 0, any number may be wrien. He provides a translation from Roman numerals to his chosen figures.

Leonardo explains place value, intrinsic to maths today, and the radical idea that the position of each numeral shows what power of ten it represents. He goes to great lengths to describe how to calculate with these numerals, with examples of addition, subtraction, multiplication and division set out with increasing difficulty. He explains how to use fractions, ratio and proportion, and shows how to group digits in threes to make large numbers easier to read.

Novem figuras indorum: nine Hindu figures.

It’s interesting to note that, in keeping with the approach of the time, Leonardo describes 0 as a sign, rather than a numeral. He calls the sign “zephyr” meaning “empty”, and groups it along with the operations rather than the digits. 41


Liber Abaci made Leonardo of Pisa famous. He attracted the attention of the Holy Roman Emperor, Frederick II, who was renowned for his thirst for knowledge. The two men wrote to each other for many years, and Fibonacci published more mathematical works, building on those first steps in Liber Abaci. Once the printing press was introduced to Europe in the 15th century, the use of the Hindu–Arabic number system spread even further.

Fibonacci's legacy is visible to each and every one of us, every single day.

Fibonacci's legacy is visible to each and every one of us, every single day. It's not in the number of petals on a flower, or spirals on a pine cone: it's the numerals themselves. I for one think that's an amazing gift to bestow.

Solution to the bird problem

Let x = partridge, y = pigeon and z = sparrow, so the problem gives us two equations:

x + y + z = 30,
3x + 2y + (1/2)z = 30.

We know that the values of x, y and z are actual birds, so must be positive whole numbers, and can't be zero. Doubling the second equation and subtracting the first to eliminate z gives:

5x + 3y = 30.

19th century statue of Leonardo Fibonacci in the Old Cemetery, Pisa.

Now look at this last equation. The terms 5x and 30 are both divisible by 5, so 3y must be too; therefore y is a multiple of 5. But it can't be 10 or larger, else the equation won't work: x cannot be zero. So y = 5,

x = 3,

and z = 22.
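For the sceptical, a brute-force search (a quick Python sketch) confirms that this is the only solution in positive whole numbers:

```python
# Exhaustively search for positive whole-number solutions of
#   x + y + z = 30   and   3x + 2y + z/2 = 30
# (the second equation is doubled to keep everything in integers).
solutions = [(x, y, z)
             for x in range(1, 30)
             for y in range(1, 30)
             for z in range(1, 30)
             if x + y + z == 30 and 6 * x + 4 * y + z == 60]
print(solutions)  # [(3, 5, 22)]
```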

Emma Bell is a maths teacher at Franklin College in Grimsby, UK. You can follow her on Twitter @El_Timbre, and email her at

Further reading The Man of Numbers: Fibonacci’s Arithmetic Revolution by Keith Devlin



a hexaflexagon

You will need

A long strip of paper a few cm wide (or a printed template from, scissors, glue, a protractor or set square.



Fold over the left end of the strip to make an angle of 120°.


Fold the right end of the strip under the strip to make an angle of 120°. Then, tuck it over the top of the one you folded before.

Cut off the extra bit of the top strip, being careful to only cut the top strip and not the bottom one.



Apply glue to the overhanging area of the bottom strip (coloured in red), then fold it over and stick it down.


You have made a hexaflexagon! If you fold it in the right way, you will be able to open it out to reveal other faces.

Tube map platonic solids and Fröbel stars: more How to Make at



#3 Set by Humbug






[Crossnumber grid]
Rules

Although many of the clues have multiple answers, there is only one solution to the completed crossnumber. As usual, no numbers begin with 0. Use of Python, OEIS, Wikipedia, etc is advised for some of the clues. To enter, send us the sum of the across clues via the form on our website ( by 22 July 2016. Only one entry per person will be accepted. Winners will be notified by email and announced on our blog by 30 July 2016. One randomly-selected correct answer will win a £100 Maths Gear goody bag, including non-transitive dice, a Festival of the Spoken Nerd DVD, solids of constant width and much, much more. Three randomly-selected runners up will win a Chalkdust T-shirt. Maths Gear is a website that sells nerdy things worldwide, with free UK shipping. Find out more at



Across
1 A multiple of 999. (7)
5 Half the difference between 45A and 1A. (7)
7 An integer. (3)
9 A multiple of 41A. (8)
12 4D multiplied by 43D. (4)
13 43D less than 7A. (3)
14 37A less than 21D. (3)
15 A number whose name includes all five vowels exactly once. (5)
16 The sum of the digits of 8D. (2)
18 An anagram of 24,680. (5)
21 A non-prime number whose highest common factor with 756 is 1. (3)
24 The sum of the digits of 29D. (2)
25 The product of four consecutive Fibonacci numbers. (6)
26 The product of 14A and 21D. (6)
29 The largest known n such that all the digits of 2^n are not zero. (2)
30 The HTTP error code for "I'm a teapot". (3)
32 A prime number that is the sum of 25 consecutive prime numbers. (5)
35 The number of different nets of a cube (with reflections and rotations being considered as the same net). (2)
36 A Fibonacci number. (5)
37 When written in a base other than 10, this number is 256. (3)
39 Why is 6 afraid of 7? (3)
40 The number of ways to play the first 3 moves (2 white moves, 1 black move) in a game of chess. (4)
41 A multiple of 719. (8)
42 A prime number that is two less than another prime number. (3)
44 Half of 29D. (7)
45 The sum of 6D, 8D, 31D, 37D and 43D. (7)

Down
1 A power of 2. (7)
2 A palindrome. (8)
3 This number's cube root is equal to its number of factors. (5)
4 An odd number. (2)
5 A square number. (5)
6 5D multiplied by 1 less than 27D. (7)
8 An odd number. (6)
10 A multiple of 7. (2)
11 The number of pairs of twin primes less than 1,000,000. (4)
17 The number of factors of 26A. (3)
19 Greater than 30A. (3)
20 Each digit of this number is a prime number and larger than the digit before it. (3)
21 37A more than 14A. (3)
22 A multiple of 27. (3)
23 A number n such that (n − 1)! + 1 is divisible by n². (3)
24 The smallest number that cannot be changed into a prime by changing one digit. (3)
27 A three digit number. (3)
28 A multiple of 34D. (8)
29 A multiple of 7. (7)
31 A prime number in which three different digits each appear twice. (6)
33 1,000,006 less than 6D. (7)
34 The smallest number that is non-palindromic when written in binary, but whose square is palindromic when written in binary. (4)
37 The square of this number only contains the digits 1, 2, 3 and 4. (5)
38 A cube number. (5)
39 The largest number that cannot be written as the sum of integers, the sum of whose reciprocals is 1. (2)
43 The smallest number that is twice the sum of its digits. (2)




The Mathematical Games of

Martin Gardner

Alex Bellos

Matthew Scroggs


It all began in December 1956, when an article about hexaflexagons was published in Scientific American. A hexaflexagon is a hexagonal paper toy which can be folded and then opened out to reveal hidden faces. If you have never made a hexaflexagon, then you should stop reading and make one right now (instructions on page 43). Once you've done so, you will understand why the article led to a craze in New York; you will probably even create your own mini-craze because you will just need to show it to everyone you know. The author of the article was, of course, Martin Gardner.

Martin Gardner was born in 1914 and grew up in Tulsa, Oklahoma. He earned a bachelor's degree in philosophy from the University of Chicago and after four years serving in the US Navy during the Second World War, he returned to Chicago and began writing. After a few years working on children's magazines and the occasional article for adults, Gardner was introduced to John Tukey, one of the students who had been involved in the creation of hexaflexagons.

A Christmas flexagon. The template can be found at

Soon after the impact of the hexaflexagons article became clear, Gardner was asked if he had enough material to maintain a monthly column. This column, Mathematical Games, was written by Gardner every month from January 1956 for 26 years until December 1981. Throughout its run, the column introduced the world to a great number of mathematical ideas, including Penrose tiling (see page 9), the Game of Life, public key encryption, the art of MC Escher, polyominoes and a matchbox machine learning robot called Menace (see pages 18–22).



Life

Gardner regularly received topics for the column directly from their inventors. His collaborators included Roger Penrose, Raymond Smullyan, Douglas Hofstadter, John Conway and many, many others. His closeness to researchers allowed him to write about ideas that the general public were previously unaware of and share newly researched ideas with the world. In 1970, for example, John Conway invented the Game of Life, often simply referred to as Life. A few weeks later, Conway showed the game to Gardner, allowing him to write the first ever article about the now-popular game.

In Life, cells on a square lattice are either alive (black) or dead (white). The status of the cells in the next generation of the game is given by the following three rules:

1. Any live cell with one or no live neighbours dies of loneliness;
2. Any live cell with four or more live neighbours dies of overcrowding;
3. Any dead cell with exactly three live neighbours becomes alive.

For example, here is a starting configuration and its next two generations:

The first three generations of a game of Life.
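The three rules translate almost directly into code. Here is a minimal sketch in Python (our own implementation, storing the live cells as a set of coordinates):

```python
from collections import Counter

def step(live):
    """Advance a Game of Life pattern by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # count the live neighbours of every cell next to a live cell
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next generation if it has exactly 3 live
    # neighbours, or if it is alive now and has exactly 2
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# a glider comes back to its own shape, shifted diagonally, in 4 steps
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
assert pattern == {(x + 1, y + 1) for x, y in glider}
```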

The collection of blocks on the right of this game is called a glider, as it will glide to the right and upwards as the generations advance. If we start Life with a single glider, then the glider will glide across the board forever, always covering five squares: this starting position will not lead to the sad ending where everything is dead. It is not obvious, however, whether there is a starting configuration that will lead the number of occupied squares to increase without bound. Originally, Conway and Gardner thought that this was impossible, but after the article was published, a reader and mathematician called Bill Gosper discovered the glider gun: a starting arrangement in Life that fires a glider every 30 generations. As each of these gliders will go on to live forever, this starting configuration results in the number of live cells perpetually increasing!

Gosper's glider gun.

This discovery allowed Conway to prove that any Turing machine can be built within Life: starting arrangements exist that can calculate the digits of pi, solve equations, or do any other calculation a computer is capable of (although very slowly)!



RSA Another concept that made it into Mathematical Games shortly aer its discovery was public key cryptography. In mid-1977, mathematicians Ron Rivest, Adi Shamir and Leonard Adleman invented the method of encryption now known as RSA (the initials of their surnames). Here, messages are encoded using two publicly shared numbers, or keys. These numbers and the method used to encrypt messages can be publicly shared as knowing this information does not reveal how to decrypt the message. Rather, decryption of the message requires knowing the prime factors of one of the keys. If this key is the product of two very large prime numbers, then this is a very difficult task.

Something to think about

Encrypting with RSA

To encode the message 809, we will use the public key: s = 19 and r = 1769. The encoded message is the remainder when the message to the power of s is divided by r:

809^19 ≡ 388 (mod 1769).

Decrypting with RSA

To decode the message, we need the two prime factors of r (29 and 61). We multiply one less than each of these together:

a = (29 − 1) × (61 − 1) = 1680.

We now need to find a number t such that st ≡ 1 (mod a); or in other words:

19t ≡ 1 (mod 1680).

One solution of this equation is t = 619 (calculated via the extended Euclidean algorithm). Then we calculate the remainder when the encoded message to the power of t is divided by r:

388^619 ≡ 809 (mod 1769).
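The numbers in the box can be checked in a few lines of Python (a sketch using the same toy key; real RSA keys are hundreds of digits long, and the modular inverse via pow(s, -1, a) needs Python 3.8 or newer):

```python
# Toy RSA with the box's numbers: public key (s, r) = (19, 1769),
# where r = 29 * 61 and the factors are kept secret.
s, r = 19, 1769
message = 809

cipher = pow(message, s, r)          # encryption: message^s mod r
assert cipher == 388

a = (29 - 1) * (61 - 1)              # = 1680; needs the secret factors of r
t = pow(s, -1, a)                    # solves s*t ≡ 1 (mod a)
assert t == 619

assert pow(cipher, t, r) == message  # decryption recovers 809
```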

Gardner had no education in maths beyond high school, and at times had difficulty understanding the material he was writing about. He believed, however, that this was a strength and not a weakness: his struggle to understand led him to write in a way that other non-mathematicians could follow. This goes a long way to explaining the popularity of his column.

After Gardner finished working on the column, it was continued by Douglas Hofstadter and then AK Dewdney before being passed down to Ian Stewart (see pages 4–9). Gardner died in May 2010, leaving behind hundreds of books and articles. There could be no better way to end than with something for you to go away and think about. These of course all come from Martin Gardner's Mathematical Games:

• Find a number base other than 10 in which 121 is a perfect square.
• Why do mirrors reverse left and right, but not up and down?
• Every square of a 5-by-5 chessboard is occupied by a knight. Is it possible for all 25 knights to move simultaneously in such a way that at the finish all cells are still occupied as before?

Mahew Scroggs came from Lower Brailes, and is now a PhD student at UCL working on finite and boundary element methods. His website,, and Twier, @mscroggs, contain lots of maths, almost all of which originally came from Martin Gardner’s articles.



My favourite shape

Helicoid Alex Doak

My favourite shape is the helicoid, as it has many interesting geometric properties. Firstly, it is a ruled surface: the helicoid is constructed by moving a straight line in space, in this case by rotating it about an axis while moving along said axis at a constant speed. Ruled surfaces are very popular in architecture, such as hyperboloid cooling towers and, of course, helicoid staircases. Secondly, it is a minimal surface. In fact, it has been proven that the helicoid and the plane are the only ruled minimal surfaces!

Routes You start at A and are allowed to move either to the right or upwards. How many different routes are there to get from A to B?

Source: Answers at

Did you know... you can now wear Chalkdust! Soon all the cool kids will be wearing Chalkdust T-shirts. Show off your love for the magazine, while simultaneously making your friends jealous with this stylish addition to your wardrobe. Get yours at




On the cover:

Spherical Dendrite by Mark J Stock

CF11_1179 by Mark J Stock

Rudolf Kohulák


We are surrounded by complex structures and systems that appear to be lawless and disorderly. Mathematicians try to look for patterns in the seemingly chaotic behaviour and build models that are simple, and yet have the capacity to accurately predict the reality around us. But can a scientific or mathematical model have any artistic value? It seems that the answer is yes. There is a group of digital and algorithmic artists that use science and computational mathematics to create visual art. However, there is an even smaller group of people whose art and science coincide. Meet Mark J Stock.

Mark is an artist, scientist, and programmer. His work is heavily influenced by his own research, and he uses scientifically-accurate software to explore "the tension between the natural world and its simulated counterpart, between organic and inorganic, digital and analogue, structure and fluid". He first started working on simulations and visualisations when working with Moiré patterns—caused by overlaying similar images on top of one another, each offset from the others by some small amount—on a Commodore computer. Mark has a PhD in Aerospace Engineering from the University of Michigan and, unlike some of his colleagues, renounces the use of commercial software for his work.

Spherical Dendrite by Mark J Stock

Since much of the reality in the natural world surrounding us is influenced by fluid flow, people are unconsciously tuned into his patterns. Hence one can often see in his art some natural arrangement that we are familiar with. However, unlike in real-life structures, one can run accurate calculations from physically impossible initial conditions, thus creating visually stunning images.



Chaotic Escape 1 by Mark J Stock

Mesh #3 Iso by Mark J Stock

On our cover we have a piece called Spherical Dendrite. It's one of a series of 50 unique 3D-printed models. This particular model was created from a simulation of "diffusion-limited aggregation constrained to a sphere". The key element of this process is Brownian motion. This describes the random motion of particles suspended in a fluid, a phenomenon named after botanist Robert Brown, who first observed it when looking at pollen grains in water. The mathematical representation of this effect is called the Wiener process. This is a stochastic process that starts at zero, is continuous, and has independent increments that are normally distributed with zero mean and a variance equal to the time step. It is also used to model noise in electronic engineering, as well as appearing in the Black–Scholes model in the field of finance. To build the object on the cover, virtual particles were introduced into a sphere where they diffused through 3D Brownian motion until they came into contact with an existing part of the structure, sticking to it and eventually producing the final object.

Another recent work by Mark is his Chaotic Escape series. In these images, fluid tries to escape from the centre of a supernova. He describes this process as "like pushing on a string when everything is a string. Chaotic Escape is a series of works from perfect virtual simulations of an impossible condition, created by intricate algorithms, and performed on a desktop supercomputer."

Mesh #3 Iso again exploits the freedom of fluid dynamics calculations to work with initial conditions that do not exist in the real world. Hence once again we create a structure that has a familiar behaviour, 'despite its obvious artificiality'. The Structure series, from where this piece is taken, strips the fluids portrayed from all of their surrounding visual context, thereby exposing their computational origins.
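A minimal two-dimensional cousin of diffusion-limited aggregation can be sketched in a few lines of Python (the grid size, particle count and function name are our own choices; Mark's simulations work in 3D, on a sphere, with vastly more particles):

```python
import random

def dla(n_particles=60, size=41, seed=1):
    """Grow a diffusion-limited aggregate on a small 2D grid: a seed
    sticks at the centre, then each walker performs a random walk
    until it touches the cluster, where it freezes in place."""
    random.seed(seed)
    c = size // 2
    stuck = {(c, c)}
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        # launch each walker from a random cell on the grid's edge
        x, y = random.choice([(0, random.randrange(size)),
                              (size - 1, random.randrange(size)),
                              (random.randrange(size), 0),
                              (random.randrange(size), size - 1)])
        while True:
            if any((x + dx, y + dy) in stuck for dx, dy in moves):
                stuck.add((x, y))  # touched the cluster: freeze here
                break
            dx, dy = random.choice(moves)
            # random step, clamped so the walker stays on the grid
            x = min(max(x + dx, 0), size - 1)
            y = min(max(y + dy, 0), size - 1)
    return stuck

print(len(dla()))  # the seed plus every frozen walker
```

The branching, dendritic shape appears because a wandering particle is far more likely to hit the tips of the cluster than to diffuse into its crevices.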
Thus through Mark’s work, we see the ability of mathematics to free us from the constraints of our physical world; allowing us to use our imagination to merge science and art to create stunning images. Mark J Stock has been showcasing his work since 2000 and has been in over 80 curated and juried exhibitions since 2001. For more of his art visit Rudolf Kohulák (pictured) is a PhD student at UCL working on the modelling of freeze-drying processes.




My favourite shape

Light cone Matthew Wright

My favourite shape is the light cone. It is a four-dimensional shape lying in space-time, and it is the path travelled by beams of light emitted from a single point. Although a simple concept, it turns out to be of fundamental importance: it determines the entire notion of causality. Everything that can be causally affected by an event at one point in space and time must lie within that event's light cone, since nothing can travel faster than the speed of light. Einstein realised that gravity wasn't a force in the conventional sense, but rather distorts the structure of space and time, tipping and deforming the light cones in the process. This is why nothing can escape a black hole: the light cones are tipped over so much that everything in the future of the light cone must lie inside the black hole.
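The "inside the light cone" condition is just a sign test on space-time separations. A small Python sketch (units with c = 1; the function name is our own):

```python
def in_future_light_cone(event, source=(0.0, 0.0, 0.0, 0.0)):
    """Can a signal from `source` causally reach `event`?
    Events are (t, x, y, z) tuples, in units where c = 1.
    Reachable iff the time separation is non-negative and at least
    as big as the spatial separation (nothing outruns light)."""
    dt = event[0] - source[0]
    dr2 = sum((a - b) ** 2 for a, b in zip(event[1:], source[1:]))
    return dt >= 0 and dt * dt >= dr2

assert in_future_light_cone((2.0, 1.0, 0.0, 0.0))       # inside the cone
assert not in_future_light_cone((1.0, 2.0, 0.0, 0.0))   # would need v > c
assert not in_future_light_cone((-1.0, 0.0, 0.0, 0.0))  # lies in the past
```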

Chalkdust wants YOU! Do you have an idea you’d like to share? Why not send us your article? It could be published on our weekly blog, or even in the next Chalkdust magazine! Get in touch with us at

Back cover puzzle answer On the back cover, you were asked to make a cube from five matches. Here is one way to do it:



Analogue computing:

Fun with differential equations

Bernd Ulmann


When it comes to differential equations, things start to get pretty complicated—or at least that's what it looks like. When I studied mathematics, lectures on differential equations were considered to be amongst the hardest and most abstract of all and, to be honest, I feared them because they really were incredibly formalistic and dry. This is a pity, as differential equations make nature tick and there are few things more fascinating than them. When asked about solving differential equations, most people tend to think of a plethora of complex numerical techniques, such as Euler's algorithm, Runge–Kutta or Heun's method, but few people think of using physical phenomena to tackle them, representing the equation to be solved by interconnecting various mechanical or electrical components in the right way. Before the arrival of high-performance stored-program digital computers, however, this was the main means of solving highly complicated problems and spawned the development of analogue computers.

Analogies and analogue computers

When faced with a problem to solve, there are two approaches we could take. The first is to recreate a scaled model of the problem to be investigated, based on exactly the same physical principles as the full size version. This is often done in, for example, structural analysis: Antoni Gaudí first used strings and weights to build a smaller model of his Church of Colònia Güell near Barcelona to help him determine whether it

Analogue computers were the workhorses of computing from the 1940s to the mid-1980s.


was stable. Similar techniques have been used from the Gothic period well into the 20th century, when a textile fabric was used to design the roof structure for the Olympic stadium in Munich. As powerful as this approach is (as another example, think of using soap films when determining a minimal surface), it is quite limited in its application, as you are restricted to the same physical principles as those in the full problem. This is where the second technique comes into play: comparing the potentially very complex system under study to a different, but behaviourally similar, physical system. In other words, this similar, probably simpler, physical system is an analogy of the first: hence the creation and naming of analogue computers—computers that are able to study one phenomenon by using another, such as looking at the behaviour of a mechanical oscillator by using an electronic model.

Sagrada Família, licensed under Creative Commons BY-SA 3.0

Antoní Gaudi’s structural analysis model of the Colònia Güell.

Analogue computers

Analogue computers are powerful computing devices consisting of a collection of computing elements, each of which has some inputs and outputs and performs a specific operation such as addition, integration (a basic operation on such a machine!) or multiplication. These elements can then be interconnected freely to form a model, an analogue, of the problem that is to be solved. The various computing elements can be based on a variety of different physical principles: in the past there have been mechanical, hydraulic, pneumatic, optical, and electronic analogue computers. Leaving aside the Antikythera mechanism—which is the earliest known example of a working analogue computer, used by the ancient Greeks to predict astronomical positions and eclipses—the idea of general purpose analogue computers was developed by William Thomson, better known as Lord Kelvin, when his brother, James Thomson, developed a mechanical integrator mechanism (previously also developed by Johann Martin Hermann in 1814).


Lord Kelvin realised that, given some abstract computing elements, it is possible to solve differential equations using machines: a truly trailblazing achievement. Let us try to solve the differential equation representing simple harmonic motion (perhaps of a mechanical oscillator!),

$$\frac{\mathrm{d}^2 y}{\mathrm{d}t^2} + \omega^2 y = 0, \qquad (1)$$


by means of a clever setup consisting of integrators and other devices, using the technique developed by Lord Kelvin in 1876. We can write (1) more compactly as $\ddot{y} + \omega^2 y = 0$, where the dots over the variables denote time derivatives. To simplify things a bit we will also assume that $\omega^2 = 1$. Hence we rearrange (1) so that the highest derivative is isolated on one side of the equation, yielding

One of the most fascinating properties of an analogue computer is its extremely high degree of interactivity.

$$\ddot{y} = -y. \qquad (2)$$


Let us now assume that we already know what $\ddot{y}$ is (a blatant lie, at least for the moment). If we have some device capable of integration, it would be easy to generate $\dot{y} = \int \ddot{y}\,\mathrm{d}t + c_0$ and from that $y = \int \dot{y}\,\mathrm{d}t + c_1$, with some constants $c_0$ and $c_1$. Using a second type of computing element that allows us to change signs, it is therefore possible to derive $-y$ from $\ddot{y}$ by means of three interconnected computing elements (two integrators and a sign changer). Obviously, this is just the right-hand side of (2), which is equal to $\ddot{y}$, assumed known at the beginning. Now Kelvin’s genius came to the fore: we can set up a feedback circuit by feeding the first integrator in our setup with the output of the sign-changing unit at the end. This is shown below in an abstract (standard) notation: this is how programs for analogue computers are written down.




The basic circuit for solving $\ddot{y} = -y$. From left to right we have two integrators and a summer (with each component inverting the sign).

The two triangular elements with the rectangles on their left denote integrators, while the single triangle on the right is a summer. It should be noted that for technical reasons all of these computing elements perform an implicit change of sign, so the leftmost integrator actually yields $-\dot{y}$ instead of $\dot{y}$ as in our thought experiment above, while the summer with the one input $y$ yields $-y$. However, if one sets up the two integrators and a summer as demonstrated above, the system will just sit there and do nothing, yielding the constant zero function as a solution of the differential equation (2): not an incorrect solution, but utterly boring.

spring 2016

This is where $c_0$ and $c_1$ come into play: these are the initial conditions for the integrators. Let us assume that $c_0 = 1$ and $c_1 = 0$, ie the leftmost integrator starts with the value 1 at its output, which feeds into the second integrator, which in turn feeds the sign-changing summer, which then feeds the first integrator. This will result in a cosine signal at the output of the first integrator and a minus sine function at the output of the second one, perfectly matching the analytic solution of (2). Such initial conditions are normally shown as being fed into the top of the rectangular part of an integrator symbol, but we have omitted this in our diagrams. So if we have some computing elements, we have seen that we can arrange them to create an abstract model of a differential equation, giving us some form of specialised computer: an analogue computer! The implementation of these computing elements could be done in different ways: time integration, for example, could be done by using the integrand to control the flow of water into a bottle, or to charge a capacitor, or we could build some other intricate mechanical system. Some of the most important observations to make are the following:

Setup for the predator-prey simulation.

• Analogue computers are programmed not in an algorithmic fashion but by actually interconnecting their individual computing elements in a suitable way. Thus they do not need any program memory; in fact, there is no “memory” in the traditional sense at all. • What makes an analogue computer “analogue” is the fact that it is set up to be an analogy of some problem readily described by differential equations or systems of them. Even digital circuits qualify as analogue computers and are known as Digital Differential Analysers (DDA). • Programming an analogue computer is quite simple (although there are some pitfalls that are beyond the scope of this article). One just pretends that the highest derivative in an equation is known and generates all the other terms from this highest derivative by applying integration, summation, multiplication, etc until the right-hand side of the equation being studied is obtained, with the result then fed into the first integrator. As a remark it should be noted that Kelvin’s feedback technique, as it is known, can also be applied to traditional stored-program digital computers.
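Kelvin’s feedback technique translates directly into a short numerical sketch: pretend $\ddot{y}$ is known, integrate it twice, and feed $-y$ back into the first integrator. The following Python snippet is our own illustration, not a program from the article; the step size and variable names are arbitrary choices, and the initial conditions are chosen so that the analytic solution is $y = \cos t$.

```python
import math

def kelvin_feedback(dt=1e-4, t_end=2 * math.pi):
    """Emulate the feedback circuit for y'' = -y: each integrator
    accumulates its input over small time steps, and the summer
    feeds -y back to the first integrator, closing the loop."""
    y, ydot = 1.0, 0.0          # initial conditions: y(0) = 1, y'(0) = 0
    t = 0.0
    while t < t_end:
        yddot = -y              # the feedback: right-hand side of (2)
        ydot += yddot * dt      # first integrator: y' = integral of y''
        y += ydot * dt          # second integrator: y = integral of y'
        t += dt
    return y
```

Running the loop for one full period ($t_{\text{end}} = 2\pi$) returns an output very close to the initial value 1, matching $y = \cos t$, just as the real circuit traces out a cosine on the oscilloscope.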

Examples of analogue computers

Analogue computers were the workhorses of computing from the 1940s to the mid-1980s, when they were finally superseded by cheap and (somewhat) powerful stored-program digital computers. Without them, the incredible advances in aviation, space flight, engineering and industrial processes after the Second World War would have been impossible. A typical analogue computer of the 1960s was the Telefunken RA 770, shown on the next page.



The Telefunken RA 770 analogue computer.

The most prominent feature of such a machine is the patch field, which is on the far right of the picture above. Here all of the inputs and outputs of the literally hundreds of individual computing elements are brought together. Using (shielded) patch cords, these computing elements are connected to each other, setting up the desired model. In the middle are the manual controls (start/stop a computation, set parameter values, etc) and an oscilloscope to display the results as curves. On the upper far left is a digital extension that allows us to set up things like iterative operations, where one part of the computer generates initial conditions for another part. Below left are eight function generators, which can be manually set to generate rather arbitrary functions by a polygonal approximation.

A more complex example

Let us now look at a somewhat more complex programming example: the investigation of a predator-prey model as described by Alfred James Lotka in 1925 and then Vito Volterra in 1926. This consists of a closed ecosystem with only two species, foxes and rabbits, and an unlimited food supply for the rabbits. Rabbits are removed from the system by being eaten by the foxes—without this mechanism their population would just grow exponentially. Foxes, on the other hand, need rabbits for food, or they would die of starvation. This system can be modelled by two coupled differential equations, with $r$ and $f$ denoting the number of rabbits and foxes respectively:

$$\dot{r} = \alpha_1 r - \alpha_2 r f, \qquad (3)$$
$$\dot{f} = -\beta_1 f + \beta_2 r f. \qquad (4)$$

The change in the rabbit population, $\dot{r}$, involves the fertility rate $\alpha_1$ and the number of rabbits that are killed by foxes, denoted by $\alpha_2 r f$. The change in the fox population, $\dot{f}$, looks quite similar but



Computing −r.

Computing −f.

with different signs. While the rabbit population would grow in the absence of predators due to the unlimited food supply, the fox population would die out when there are no rabbits and thus no food, hence the term $-\beta_1 f$. The second term, $\beta_2 r f$, describes the increase in the fox population due to rabbits being caught for food. Equations (3) and (4) can now easily be set up on an analogue computer by creating two circuits, as shown in the diagrams above. The circuit for (3) has two inputs: an initial condition $r_0$ representing the initial size of the rabbit population, and the value $rf$, which is not yet available. The second circuit looks similar, with an initial fox population of $f_0$ (please keep in mind that integrators and summers both perform a change of sign that can be used to simplify the circuits a bit, thus saving us from having to use two summers). All that is necessary now is a multiplier to generate $rf$ from the outputs $-r$ and $-f$ of these two circuits. This product is then fed back into the circuits, thereby creating the feedback loop of this simple ecosystem. The setup of this circuit on a classical desktop analogue computer weighs in at 105 kg and requires quite a stable desk! Setting the parameters $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$ to suitable values (it is not too easy to find a stable ecosystem) yields an output like the one shown here. One of the most fascinating properties of an analogue computer is its extremely high degree of interactivity: one can change values just by turning the dial of a potentiometer while a simulation is running and the effects are instantaneously visible. It is not only easy to get a “feeling” for the properties of some differential equations, it is also incredibly addictive, as the following quote from John H McLeod and Suzette McLeod shows: “An analogue computer is a thing of beauty and a joy forever.”
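For readers without 105 kg of analogue hardware on their desk, the same feedback idea can be sketched digitally. The snippet below is our own illustration, with illustrative parameter values rather than the dial settings used on the real machine: it simply steps equations (3) and (4) forward with Euler integration.

```python
def lotka_volterra(r0=2.0, f0=1.0, a1=1.0, a2=1.0, b1=1.0, b2=1.0,
                   dt=1e-3, steps=10_000):
    """Euler integration of the predator-prey equations (3) and (4)."""
    r, f = r0, f0
    history = []
    for _ in range(steps):
        rdot = a1 * r - a2 * r * f    # rabbits: births minus predation
        fdot = -b1 * f + b2 * r * f   # foxes: starvation plus predation
        r += rdot * dt
        f += fdot * dt
        history.append((r, f))
    return history

trajectory = lotka_volterra()
```

Plotting the trajectory shows both populations oscillating, with the rabbit peaks leading the fox peaks, much like the machine’s oscilloscope trace (though on a digital computer you must rerun the program to change a parameter, rather than just turning a dial).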


Results of the predator-prey simulation. Prey are on the top and predators on the bottom.


Analogue computers in the future

After these two simple examples, a question arises: “What does the future hold for analogue computers? Aren’t they beasts of the past?” Far from it! Even—and especially—today there is a plethora of applications for analogue computers where their particular strengths can be of great benefit. For example, electronic analogue computers yield more instructions per second per watt than most other devices and hence are ideally suited for low power applications, such as in medicine. They also offer an extremely high degree of parallelisation, with all of the computing elements working in parallel with no need for explicit synchronisation or critical code sections. The speed at which computations are run can be changed by changing the capacitance of the capacitors that perform the integration (indeed, many classical analogue computers even had a button labelled “10×”, which switched all integration capacitors to a second set that had a tenth of the original capacity, yielding a computation speed that was ten times higher). On top of this, and especially important today, they are more or less impossible to hack as they have no stored programs. A modern incarnation of an analogue computer still under development is shown in the header of the article. In contrast to historic machines it is highly modular and can be expanded from a minimal system with two chassis to several racks full of computing elements. When Lord Kelvin first came up with analogue computing, little did he know the incredible amount of progress in science and technology that his idea would make possible, nor the longevity of his idea even today in an era of supercomputers and vast numerical computations. Bernd Ulmann is professor of Business Informatics at the FOM University of Applied Sciences for Economics and Management in Frankfurt-am-Main, Germany. His primary interest is analogue computing in the 21st century.
If you would like to know more about analogue computing, visit and have fun with differential equations.

My favourite shape

Heptadecagon
Sebastiano Ferraris

My favourite geometrical figure is the heptadecagon, a regular polygon with 17 sides. It comes with the history of a great challenge that required the efforts of almost eighty generations of mathematicians to solve. Ancient Greeks knew how to construct polygons with 3, 4, 5, 6, 8, 10, 12, 15, 16, and 20 edges using only a straightedge and compass, while 18th century algebraists knew that it was impossible to use the same tools to construct polygons with 7, 9, 11, 13, 14, 18 and 19 sides. Gauss, at 19, was the first to prove that the heptadecagon was constructible.
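The full characterisation, completed by Pierre Wantzel in 1837, is that a regular $n$-gon is constructible precisely when $n$ is a power of two times a product of distinct Fermat primes (3, 5, 17, 257, 65537). A short Python sketch of this criterion (our own illustration, not part of the original piece) reproduces the lists above:

```python
# Gauss-Wantzel criterion: a regular n-gon is constructible with
# straightedge and compass exactly when n is a power of two times
# a product of *distinct* Fermat primes.

FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def constructible(n):
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p        # each Fermat prime may be used at most once
    while n % 2 == 0:
        n //= 2            # strip the power-of-two factor
    return n == 1

print([n for n in range(3, 21) if constructible(n)])
# -> [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

Note how the output is exactly the ancient Greeks’ list with one addition: 17, Gauss’s heptadecagon.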


This issue features the top ten models of calculator. To vote on the top ten colours of chalk, go to our website.

At 10, it's what we've all been searching for: the Google calculator.

Down two places to 9 this week, it's the Little Professor and his very trendy sideburns.

At 8, it's When I'm Cleaning Windows 95, a calculator by George Formby.

At 7, and guaranteeing you at least a C at A-level: the Casio FX-9750GII graphical calculator.

At 6, so clever it can calculate how many weeks it's been in the charts: the Curta calculator.

At 5, it's not as cool as the watch that turned the TV on, but cooler than the Apple Watch: the Casio calculator watch.

At 4, it's the calculator specially designed to encourage more flamingos to do maths: the Pink Casio FX-85GT Plus.

The Casio FX-82MS is down to 3, but still popular with those who secretly meant to buy the FX-85GT Plus instead.

A new entry at 2 and perfect for working out how inflation is affecting the price of chocolate: the Smarties calculator.

And spending a record fifteenth week at number 1: the excellent Casio FX-85GT Plus.


Chalkdust, Issue 03