#05 April 2017
Mysterious cosmic radio signal
Interview JosĂŠ van Dijck
Unreliable neurons in the brain
Dear reader, You are looking at the first anniversary issue of Amsterdam Science magazine: issue number five is hot off the press and now on the streets! The science community of Amsterdam never ceases to amaze us with its diversity of exciting topics. From the front cover, showing that robots and artificial intelligence, while entering our daily lives, can still struggle with a tomato, to the back cover, giving a perspective on new ideas about the theory of gravity that have prompted a lively scientific discussion worldwide, issue five once more helps Amsterdam's scientists present a beautiful bouquet of subjects containing the many colours and perfumes of Amsterdam's science. The interview with the president of the Royal Netherlands Academy of Arts and Sciences (KNAW), José van Dijck, gives us insight into the impact of the internet and information science on the scientific arena and into what the KNAW has in store for us. You will also find new contributors in this issue: the Advanced Research Center for Nanolithography (ARCNL) is well represented, with a contribution on nanolithography (page 10) and a Q&A with its director, Joost Frenken (page 23). Furthermore, the Amsterdam Institute for Advanced Metropolitan Solutions has a contribution addressing metropolitan issues (page 24), in which they describe the concept of urban metabolism, “a metaphor of a city as a living organism or ecosystem”. Though science is always on the move, turn to page 13 for what's new in movement science, a field that uses sensor technology to improve the walking patterns of the elderly. In short: there is much to learn about and enjoy in this fifth issue! We invited Alex Verkade to write a column on science communication and received a critical analysis of our very own attempt to create a magazine for scientists & by scientists. 
Verkade holds up a mirror to us, the editorial board, when he describes a common pitfall in communicating science: “we make something we and our colleagues like, and just keep our fingers crossed that someone else likes it too.” Verkade emphasizes that successful science communication requires an active community to participate in the process. Indeed, this could represent a challenge for our current effort. Are we appealing enough to the audience we are trying to reach? Are we reaching everyone? There is certainly room for improvement here. Our own experience is that, although there is plenty of exciting science going on in Amsterdam, gathering sufficient contributions for every issue is still a ‘tour de force’, one we pull off thanks to the enthusiastic bunch of people volunteering their time and effort for the magazine. Maybe our Facebook page, our newsletter and the active involvement of more and more Amsterdam research institutes can smooth the road ahead. Let’s see how it goes with the next issues. Our editorial board remains full of energy; we'd like to thank Joen, Mohit, Héctor and Namrata for their past contributions and welcome the new members of the board: Nitish, Jop, Joshua, Renske and Annike. To conclude: enjoy the fifth issue and use it as inspiration to visit our website and submit your own Amsterdam Science contribution. Let’s continue to show the world that working in science in Amsterdam is very rewarding, and communicate a little of that buzz by writing about it in Amsterdam Science magazine! On behalf of the Editors in Chief, Michel Haring
ABOUT THE COVER IMAGE: Researchers at the Robotics Lab of the Informatics Institute of UvA have recently developed a benchmark to test how well a commercially available humanoid robot (Nao) can support humans in a domestic environment. In their test, the Nao robot starts searching a kitchen environment for a tomato: a red circular object. More information on page 4.
14 Watching larvae grow up
05 Unreliable neurons
10 Boxing tin droplets
06 Interview with José van Dijck
20 Mysterious cosmic radio signal
15 Mercury's rotation
12 Flexible building blocks
28 Verlinde's theory of Emergent Gravity
colophon Editors in Chief: Hamideh Afsarmanesh; Mark Golden; Michel Haring; Sabine Spijker Members of editorial board: Renée van Amerongen; Sarah Brands; Eline van Dillen; Noushine Shahidzadeh; Mustafa Hamada; Saskia Timmer; Esther Visser; Ted Veldkamp; Maria Constantin; Céline Koster; Francesco Mutti; Jans Henke; Laura Janssen; Annike Bekius; Jop Briët; Nitish Govindarajan; Joshua Obermayer; Renske Onstein. Magazine manager: Heleen Verlinde E-mail: email@example.com Website: www.amsterdamscience.org
Design: Van Lennep, Amsterdam Copy Editor: DBAR Science Editor Photographer: Siesja Kamphuis Printer: GTV Drukwerk Illustration puzzle: Bart Groeneveld
4 Spotlight 11 Then & Now 13 Spotlight 16 Centerfold 18 Spotlight 19 Column 23 Q&A 24 Spotlight 25 Alumni 26 Puzzle 27 About
The Roasted Tomato Challenge
CAITLIN LAGRAND and MICHIEL VAN DER MEER are Bachelor’s students in Artificial Intelligence at the Faculty of Science, UvA.
→ Robots, robots everywhere… it certainly seems to be the trend at the moment. We know them from car production lines and other industrial applications, but their use in human service settings such as healthcare, care for the elderly and education is an area of great current interest. In the Robotics Lab of the Informatics Institute of the UvA, we have recently developed a benchmark to test how well a commercially available humanoid robot (Nao) can support humans in a domestic environment. In our test, the Nao robot starts searching a kitchen environment for a tomato: a red, circular object. We tested three different algorithms for recognising the tomato: firstly, searching for red objects that are somewhat circular; secondly, for circular objects that are mostly red; and thirdly, a colour-blind method inspired by cognitive image processing (a psychological theory on how humans
understand images). In the end, searching for red objects that are somewhat circular proved to be the most successful tomato-finding method. This proof of concept tested not only the detection of the tomato, but also all the steps required for the robot to pick one up: approaching it, planning a pick-up that does not disturb other nearby objects, and executing the final pick-up, after which the robot is ready to put the tomato into the pan to make dinner. Next, we extended this study with a method to distinguish different ingredients commonly found in the kitchen. This year, a new student team has been formed at the UvA, which will take these newly learned kitchen skills and compete in RoboCup@Home (robocupathome.org), the largest annual international competition for autonomous service robots. Ω
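The winning strategy (look for red pixels first, then check that they form a roughly circular blob) can be sketched in a few lines. The sketch below is not the team's actual code, which runs on the Nao's camera pipeline; it is a minimal, hypothetical Python/NumPy illustration with made-up thresholds, run on a synthetic test image.

```python
import numpy as np

def find_red_roundish_blob(img):
    """Toy version of the 'red objects that are somewhat circular' strategy.

    img: H x W x 3 uint8 RGB image. Returns the (row, col) centre of the
    best candidate blob, or None if no sufficiently red region is found.
    """
    r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
    # 1. Redness mask: the red channel clearly dominates green and blue.
    mask = (r > 100) & (r > g + 40) & (r > b + 40)
    if mask.sum() < 20:
        return None
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # 2. Rough circularity check: for a filled disc of area A, the mean
    #    pixel-to-centre distance is ~0.67 * radius, with radius = sqrt(A/pi).
    radius = np.sqrt(mask.sum() / np.pi)
    mean_dist = np.hypot(ys - cy, xs - cx).mean()
    if not 0.4 * radius < mean_dist < 0.9 * radius:
        return None  # red, but not blob-shaped
    return int(round(cy)), int(round(cx))

# Synthetic test image: grey background with a red disc centred at (40, 60).
img = np.full((80, 120, 3), 128, dtype=np.uint8)
yy, xx = np.mgrid[0:80, 0:120]
img[(yy - 40) ** 2 + (xx - 60) ** 2 < 15 ** 2] = (220, 30, 30)
print(find_red_roundish_blob(img))
```

On a real robot, the detected image position would then feed the approach and pick-up planning steps described above.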
→ Reference C. Lagrand, M. van der Meer, A. Visser, The Roasted Tomato Challenge for a Humanoid Robot. Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 341-346, Bragança, Portugal (2016). → Check out www.dutchnaoteam.nl to see more of what our Naos can do!
Extinction on islands
JULIA HEINEN is a Master’s student in Biological Sciences, track Ecology & Evolution, at the UvA.
→ Reference J.H. Heinen, Master’s thesis, University of Amsterdam (2016). More information: www.juliaheinen.nl/research-poster
→ When humans first arrived on remote oceanic islands, they encountered animals that occurred nowhere else in the world, such as the dodo or giant tortoises. Unfortunately, this sometimes led to the extinction of these species, because they were eaten by hungry sailors or even by their ship rats (Fig. A). This in turn affected the plants these animals interacted with. For example, birds, mammals and reptiles that ate fruit could disperse the seeds of plants to new areas, especially if they were large or able to fly. Until now, no overview was available of the animals that went extinct on islands, or of their characteristics. Yet, if the characteristics that make animals vulnerable to extinction are known, measures can be taken to prevent new extinctions. Therefore, during my Biology Master’s project at the UvA (supervised by W.D. Kissling), I built a database of 1183 fruit-eating animals that have gone extinct or still occur on 74 islands worldwide, and added their weight (1.4 g – 250 kg) and their ability to fly. I wanted to find out (1) whether some island animals have been more vulnerable to extinction than others and (2) whether there have been more extinctions on certain types of islands. From the first statistical model I made, the conclusion could be drawn that large animals have gone extinct most often, especially large birds that were unable to fly (Fig. B), such as the dodo. The loss of large species has reduced the mean weight of all fruit-eating animals on islands by 37%. Consequently, many large seeds can no longer be swallowed and dispersed, because only small animals remain. The second model showed that remote and small islands have had more extinctions than islands that are large and closer to the mainland. Islands have been impacted in a way that will likely have severe consequences for seed dispersal; they need strong conservation and restoration efforts, especially the small and isolated ones. Ω
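The kind of statistical model described above can be sketched as a logistic regression of extinction on body mass and flightlessness. The sketch below is purely illustrative and uses synthetic data generated so that large, flightless species are most at risk; it is not the thesis's actual model, data or coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: log10 body mass (grams) and flightlessness (0/1)
# for island animals, with extinction simulated so that large, flightless
# species face the highest risk, mimicking the pattern described in the text.
n = 400
log_mass = rng.uniform(1, 5, n)          # 10 g .. 100 kg
flightless = rng.integers(0, 2, n)
true_logit = -6.0 + 1.2 * log_mass + 1.5 * flightless
extinct = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit a logistic regression by plain gradient descent on the log-loss.
X = np.column_stack([np.ones(n), log_mass, flightless])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - extinct) / n

print("intercept, mass, flightless coefficients:", np.round(w, 2))
# Both slope coefficients come out positive: heavier and flightless
# species get a higher predicted extinction probability.
```

In practice one would use a dedicated statistics package and account for island identity, but the fitted signs illustrate the reported result: size and flightlessness both raise extinction risk.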
↓ Figure (A) The Dutch causing extinctions of flying and flightless birds in Mauritius (De Bry’s Variorum Navigationis, 1601). (B) Relationship between size and extinction probability for flying and flightless island birds.
Given unreliable neurons, how does the brain avoid mistakes?
GUIDO MEIJER is PhD student in the Cognitive and Systems Neuroscience group at the Swammerdam Institute for Life Sciences, UvA.
↗ Figure Two neurons responding to the stimuli (diagonally moving bars, neuron 1; horizontally moving bars, neuron 2) are not always accurate in their responses to visual stimulation. Nevertheless, because their responses vary in parallel with the red line, the brain can still differentiate between the two stimuli.
→ Reference J.S. Montijn, G.T. Meijer, C.S. Lansink, C.M.A. Pennartz, Population-Level Neural Codes Are Robust to Single-Neuron Variability from a Multidimensional Coding Perspective. Cell Reports 16, 2486–2498 (2016).
→ A human brain contains billions of nerve cells, and the orchestrated activity of all these neurons together is what allows us to think, feel and experience the world around us. How the exchange of information between cells in the brain can result in rich experiences, such as the appreciation of a work of art, is one of the greatest mysteries in modern-day science. A hotly debated problem of the last decade concerns the fact that neurons in the brain are rather unreliable. Imagine there is a neuron that becomes active when you see a cat. When this neuron encounters the exact same picture of a cat again, it will not give exactly the same response. Herein lies a problem: how can the brain make the right decision when it has to base it on unreliable information from individual neurons? We set out to answer this question in a well-established model system: the visual cortex of a mouse. We recorded the activity of hundreds of neurons using a technique called two-photon calcium imaging, while visual stimuli were shown on a screen. We stimulated the visual cortex with very rudimentary shapes: moving black-and-white bars rotated at different angles. Imagine two neurons that both become active when the mouse sees bars moving from left to right, as depicted at the top of the Figure.
“Neurons in the brain are rather unreliable.”
Every time the horizontal bars are shown, the neurons become active and a point is jotted down. As these neurons are unreliable and will not respond in exactly the same way every time the mouse sees the same image, this results in a cloud of points. Presenting diagonally moving bars also results in a cloud of points, but in a different location, because the neurons respond differently to these stimuli. If the brain must now decide whether the horizontal or the diagonal bars are shown on the screen, the simplest way to do so is to draw a line between the two clouds of points and say: ‘if the activity of the neurons is above the line, there are horizontal bars; if it’s below the line, there are diagonal bars’. Looking at the image, you can see that if activity caused by the horizontal bars crosses the line to the other side, the brain will erroneously conclude that the diagonal bars were shown instead of the horizontal bars. This means that variability orthogonal to the line is harmful, because it can cause the brain to make a mistake, whereas variability that is parallel to the line will not result in any errors. We found that neuronal variability is present, but only parallel to the activity elicited by other stimuli. In other words, neurons are variable, but they are variable in such a way that this variation does not interfere with the encoding of information. This example is two-dimensional: there are only two neurons responding to the stimuli. The brain, however, is made up of billions of neurons and therefore works in a high-dimensional space. We have shown with our analyses that this principle also holds true in this high-dimensional space. These results give us fundamental insight into the ways the brain processes information and bring us one step closer to understanding its functioning. Ω
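The geometric argument above, that only variability crossing the decision line causes errors, can be illustrated with a toy simulation. The sketch below is not the paper's analysis: two hypothetical neurons, one-dimensional Gaussian noise, and a simple decoder that thresholds the projection onto the axis joining the two cloud centres.

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_accuracy(noise_direction):
    """Fraction of trials a linear decoder classifies correctly when
    single-trial variability lies entirely along `noise_direction`."""
    mean_horiz = np.array([3.0, 1.0])   # mean response to horizontal bars
    mean_diag = np.array([1.0, 3.0])    # mean response to diagonal bars
    d = np.asarray(noise_direction, float)
    d /= np.linalg.norm(d)
    noise = rng.normal(0, 1.5, size=(200, 1)) * d   # 1-D noise along d
    trials = np.vstack([mean_horiz + noise[:100], mean_diag + noise[100:]])
    labels = np.array([0] * 100 + [1] * 100)
    # Decoder: project onto the axis joining the two means, threshold midway.
    axis = mean_diag - mean_horiz
    scores = (trials - (mean_horiz + mean_diag) / 2) @ axis
    return np.mean((scores > 0) == labels)

parallel = decode_accuracy([1, 1])      # variability parallel to the boundary
orthogonal = decode_accuracy([1, -1])   # variability crossing the boundary
print(f"parallel noise:   {parallel:.2f}")
print(f"orthogonal noise: {orthogonal:.2f}")
```

With the same noise amplitude, variability parallel to the decision line leaves decoding perfect, while the same amount of variability orthogonal to it produces classification errors.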
An interview with José van Dijck, president of the Royal Netherlands Academy of Arts and Sciences (KNAW) and Distinguished Professor at Utrecht University in the area of digital society and media culture.
Serving science SABINE SPIJKER and JOP BRIËT, Amsterdam Science magazine editors
→ We meet José van Dijck in her classically decorated office at the KNAW (‘the Academy’), located in the Trippenhuis in the centre of Amsterdam. “Since this building is a national heritage site, I have no say about its interior restoration, not even in my own office,” she says, while telling us a few historical facts about the building as we stand under a beautifully ornamented ceiling. How did you get started in science? “I was always curious, I wanted to get to the bottom of everything; that was essentially my attitude. My initial interest was journalism. In journalism, you can of course research a wide range of things simultaneously, but I noticed that diving into things only for short periods left me somewhat unsatisfied in the long run. I was already working on my PhD when I realised
this point, though I never really felt it was my calling to be a scientist. I felt that doing a PhD was important, because it would teach me to think systematically, to deal methodically with knowledge and to write about it. Developing an inquiring mind would be invaluable for any profession, not just in academia.” How have the job prospects of PhD graduates changed since then? “Nowadays we train many more PhDs than we did in the past. Since employment in our sector has not expanded proportionally, many end up working outside of academia. But I think it’s absolutely wrong to say that those who leave academia do so out of despair. When I was finishing my PhD, I always felt I had many great options outside of academia as well. Learning the basic academic attitude makes you
“There is almost no societal or scientific problem that can be solved by a single discipline.”
quite flexible in the job market. I increasingly encourage my own students to look beyond academia. Especially those working on digital media can find jobs almost anywhere, in any sector, if they wish.” “As president of the Academy I talk with industry as well. Did you know that ASML employs over eight hundred PhDs and postdocs? That’s more than most Dutch university departments. You have to realise that there is a real scientific environment in companies like that, and ASML isn’t the only place where fundamental research is valued and performed.” How did the rapid development of the internet in the ‘90s affect your research? “I could talk about that for hours. When I started my PhD in ‘87, the word internet didn’t even exist. My research started out on what’s now known as ‘old media’. But my mind-set and object of research
© Miletta Roots
evolved along with technological changes, and to this very day I find media studies an extremely exciting field to be in. Every morning when I turn on my computer I realize the world has changed, and so has my research. I can never leave research aside for very long, and it’s very important for me to keep up with current developments. It can be frustrating to have less than twenty percent of my time for research when important techno-societal issues continuously develop, such as the fake news issue that came up very recently. At times like this my research topic is like a moving target.” Did anyone predict early on how the internet would come to influence our daily lives? “No. No one has ever correctly predicted the vast technological changes that happened over the past decades. The way the computer has changed our lives is beyond anyone’s imagination. Another thing we could not have predicted was the complex way in which research itself would develop. It has fuelled massive collaborations between scientists who would have never guessed their paths would cross, myself included. In 2001, when the internet was developing extremely rapidly, I started at the University of Amsterdam as a professor of Media Studies, a fairly new field in the humanities. I was approached by a computer scientist who told me that my field would drastically change because his field was drastically changing, and that we should collaborate. Then he told me about the search engines he was helping develop. We’ve kept in contact ever since, and I’m forever grateful for getting to know and work with him. I could
“Developing an inquiring mind is invaluable for any profession.”
BIO JOSÉ VAN DIJCK Born 1960, Boxtel, the Netherlands Study Dutch and Literary Studies at Utrecht University; PhD in Discontinuous discourses. Mapping the public debate on new reproductive technologies at the University of California, San Diego (USA). Work 2015 – now, President of the Royal Netherlands Academy of Arts and Sciences (KNAW) 2017 – now, Distinguished Professor at Utrecht University 2008 – 2011, Dean at the University of Amsterdam Faculty of Humanities. Before that, Van Dijck also worked at the University of Groningen and Maastricht University. She is the author of recent books such as Mediated Memory in the Digital Age and The Culture of Connectivity.
have never predicted that I would someday work closely with a computer scientist!” Are you on Facebook? “I don’t use Facebook. If you read my latest books, The Culture of Connectivity and De platformsamenleving (The platform society), you’ll know why. Of course, that doesn’t mean that I don’t investigate and work with Facebook in my research, but I do have a nuanced opinion of how I want to deploy social media for personal use.” Tools like these could ease communication, and good communication is essential for scientific projects. The Dutch government currently finances a number of large research consortia, in which communication might be hampered. Do you think there is an optimal size for a research community? “The right size for a research community seems to vary from field to field. Some math problems can only be solved by a team of, say, three devoted mathematicians. In projects that require massive infrastructure, such as those in astronomy or particle physics, you simply have to collaborate in big communities; some 15,000 people work around the particle accelerator at CERN. I can’t say what the optimal size is for collaboration. But scientists are very adept at deciding this among themselves. We witnessed this during the development of our national science agenda (‘wetenschapsagenda’), where the government (reluctantly) let scientists figure out for themselves how to shape relevant and urgent research projects. In an impressive exercise of collaboration between researchers from all corners of academia, they put forth a national research design covering twenty key topics, many more than the three asked for, but still a tiny number compared to the number of existing disciplines. The Royal Academy, including the Young Academy, was heavily involved in this process.” Is there a pattern in your professional activities? You have a kind of on-off thing with managerial positions, it seems. “I indeed have a somewhat odd career trajectory. 
I don’t see myself as a manager at all, but everyone told me that I should take
© Miletta Roots
on managerial responsibilities, so I did, at a rather young age. These positions just happened to cross my path, even if becoming dean of humanities or president of the Academy was never part of my plan. After these periods of ‘community service’ I always happily return to research. Research and teaching are my core business. That’s not to say that I see these managerial positions as a burden; I really enjoy those jobs too, and it’s an honour to be asked to perform them. Probably what I enjoy most is facilitating people in doing the best possible research.” What roles does the Academy play? “Its mission is to give scientists a voice, as well as to embody the conscience of the scientific community. The Academy has three primary functions: first, it is a forum for academics; we communicate amongst ourselves, with people outside our immediate academic circles, such as professionals and policy makers, and with the public at large. Second, we are an official advisory body advising the Dutch government on science-related matters. Third, the Academy governs fifteen selected research institutes.” Are younger generations being catered to? “I think that is a very important challenge to focus on in the coming years. Recently I was with Ben Feringa in Stockholm, where he received his Nobel prize, and he mentioned that he had spent a whole day at primary schools in Sweden to tell the children about his findings. He noticed such an eagerness in these children to know more about science, a very inquisitive attitude and an awareness of the importance of scientific knowledge. This is something we need in the Netherlands as well, he said, and I completely agree.” “The Academy has no official function in education, but of course we can still contribute to a better climate for science education at all levels. 
Last year, for their lustrum year, Utrecht University organised an event in which almost 150 full professors in their professional gowns visited primary schools in Utrecht to talk about their research. The Academy has sponsored (consortia of) universities in a
programme called ‘Wetenschapsknooppunten’ (Science Nodes) to facilitate and enhance scientific education at primary and high schools, though sadly the funding for this was recently discontinued. In many ways, the Academy contributes to science communication, for example by collaborating with Science Center NEMO and Dutch public television (NPO).” What other advice does the Academy give? “We are asked to give our view on a wide range of specific topics, though unfortunately our advice is not always taken up by politicians or policy-makers. The Minister of Education, Culture and Science asked us to write an advisory report on teaching university courses in English, which will be issued this summer. We often also act proactively to give our viewpoint when we believe there is a scientific and societal need. The Academy wrote a position paper on the use of CRISPR/Cas technology for gene editing.” “It is very difficult to measure the impact of an advisory report. This is partly because good research takes time and a lot of effort. By the time a requested report is finalised, the government has sometimes already changed, and it is not at all clear what will be done with it. We distribute advisory reports to the media to trigger general interest, fuel discussion, and get them (back) on the political agenda.” Do you have time for research yourself? “I am involved in a large digital-humanities project called CLARIAH. The humanities have several large archival collections in the form of (printed) texts, (audiovisual) pictures, historical data, and sound. We need to find out how best to store them digitally, but also how to search and present them in a sensible way. In this area, you see a lot of exciting collaborations, in particular between the humanities and computer science. An important aspect of what we are trying to do together is to optimise automated search and make it easier for users to find information. 
The role of the humanities in computational science includes improving the collaboration between humans and machines.”
© Wim Ruigrok
Do privacy questions play a role when dealing with vast amounts of data? “Yes, that’s another important aspect of course. At the UvA, you have the Institute for Information Law (IViR), which has a strong focus on privacy issues. I serve on their advisory board. We thus have a variety of perspectives and approaches represented in the Amsterdam data science community. Computer scientists who develop search engines work with large datasets. Humanities scholars investigate user experience and provide feedback to developers. In addition, they also provide critical perspectives, examining the consequences of data-driven and platform-based technologies for society and, for instance, assessing the effect of search algorithms on the private sphere and public values. That is when lawyers step in as well, to determine how we can protect personal information. This merging of fields to solve a very complex problem is very interesting, and it is something I think is needed for future research in general. ‘To perform interdisciplinary research’ might sound non-committal. Yet I think that there is almost no societal or scientific problem that can be solved by a single discipline. In that respect, it is essential to learn to speak each other’s language and to respect each other’s views and expert knowledge. For a Master’s or PhD student, learning to think in several academic ‘languages’ is an extraordinary asset to their education. It is so different from the way I myself was educated in the 1980s, namely in a very strongly monodisciplinary way of doing science.”
“I don’t use Facebook. If you read my latest book, you’ll know why.”
How did you experience that monodisciplinary thinking at the time? “The common view during my PhD period was that you stayed within the bounds of your own discipline and certainly did not go outside your own faculty (in my case, the humanities). In fact, you were already considered extremely progressive if you studied Literature and proceeded to do media-related subjects. Basically, multidisciplinary research used to be viewed as a waste of time, a distraction from real academic rigour. I’m glad that I did not continue down that path and went ahead with multidisciplinary work instead. It made me, and the way I do and view science, much richer. I hear the same from Ben Feringa, our Nobel laureate. He tells me over and over again that if he had not stepped out of his comfort zone within chemistry, he would never have achieved what he has now. In his group, there are 50 people from different disciplines working together. I think that is a very exciting development.” Ω
Boxing with tin droplets
OSCAR VERSOLATO is group leader of the Atomic Plasma Processes group and joint group leader of the EUV Plasma Dynamics group at the Advanced Research Center for Nanolithography (ARCNL).
→ Moore’s law predicts that the number of transistors on computer chips doubles every two years. This prediction, made in 1965 by Gordon Moore, has impressively held for over 50 years and has driven the semiconductor industry. However, fitting ever more transistors onto chips is challenging. ‘Chip-making’ photolithography systems use light to ‘print’ extremely small structures onto chips. The shorter the wavelength of the light, the smaller the structures that can be printed, so the latest generation of these systems uses very short-wavelength ‘extreme ultraviolet’ (EUV) light. One way to produce this light is to heat a droplet of liquid metal (tin) with a laser beam, so that it disintegrates into independent electrons and ions, thereby forming an extremely hot fourth state of matter – plasma – which emits EUV light. Simply put, the plasma glows because of its high temperature, much like an old-fashioned incandescent light bulb, in which a filament is heated to temperatures similar to that of the sun. In our ARCNL set-up, a much hotter plasma is produced, exciting highly charged tin ions, which radiate light when they spontaneously decay. The atomic structure of tin ensures that most of this light is emitted as useful EUV light with a wavelength around 13.5 nanometres. This is about 15 times shorter than the 193-nanometre-wavelength light used in conventional photolithography.
“A droplet of tin is heated with a laser beam, thereby forming an extremely hot plasma, which emits extreme ultraviolet light.”
The formation of such a laser-produced plasma happens in two steps. A first laser pulse hits the molten tin droplet, creating a small plasma, which quickly expands and pushes on the droplet, causing it to accelerate. This acceleration is so powerful that the droplet deforms radically. The deformation happens on a much longer timescale of several microseconds, after which the droplet looks like a thin pancake. This shape was found to be optimal for the production of EUV light. To obtain this light, a second,
more powerful laser pulse delivers the knock-out: the deformed droplet changes into an EUV-emitting plasma. The efficiency of the entire process depends on how effectively the first pulse deforms the target. At ARCNL, we designed an experiment with tin droplets and laser systems that imitates an industrial lithography system as closely as possible, while still allowing us to study in detail the physics underlying the propulsion and deformation of the tin droplets. In this work, we looked at the microscopically small droplets (about the thickness of a human hair) using long-distance microscopes and pulsed laser light to make high-resolution, negative images of the droplets. There are two types of physics at play: in the first few nanoseconds, plasma physics describes the plasma that propels and deforms the droplet; the deformation itself is described by the physics of fluids. We collaborated with Dr. Hanneke Gelderblom and her team at the University of Twente and ASML to demonstrate that the deformation
↙ Figure Shadowgraph images, taken at various times, of a tin droplet (seen from the side) being hit by a (a) low-energy and (b) high-energy laser pulse impinging from the left. The laser pulse induces a plasma, which expands and pushes the drop away from the laser. This deforms the droplet into a flat pancake.
of tin droplets is very similar to the deformation of a water droplet under the influence of laser light, and that it can be described by a recently developed analytical model. Although the physics of the acceleration differs, the fluid dynamics are completely scalable from water to tin. We also found that the velocity of the accelerated tin droplet exhibits an interesting dependence on the energy of the laser pulse, which can be understood from basic momentum conservation. This dependence has two features. First, we demonstrated and explained that a minimum laser-pulse energy is required before anything happens at all: below this threshold energy, no plasma is created. Above the threshold, we find an extremely ‘simple’ relation between the droplet’s velocity and the laser-pulse energy over a very large range of energies. This simple relation provides a very sensitive test of the most recent developments in plasma theory, which underpin state-of-the-art modelling of plasma EUV sources. Our results contribute to understanding the first step in the formation of laser-produced plasma for EUV light and led to the first scientific publication of ARCNL. ARCNL was established in January 2014 and is a public-private partnership between the Netherlands Organisation for Scientific Research (NWO), the University of Amsterdam (UvA), the VU Amsterdam and semiconductor equipment manufacturer ASML. Ω
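The threshold-plus-simple-relation behaviour described above can be sketched as a toy curve: no propulsion below a plasma-formation threshold, and a single power law above it. All numbers below (threshold, prefactor, exponent) are made up purely for illustration; the measured relation and its parameters are in the referenced paper.

```python
import numpy as np

# Illustrative toy model, NOT the paper's fitted law: the droplet only
# moves once the pulse energy E exceeds a plasma-formation threshold
# E_TH; above it, the propulsion velocity follows a single power law,
# v = C * (E - E_TH)**ALPHA, over a wide range of energies.
E_TH = 1.0      # hypothetical threshold pulse energy (arbitrary units)
C = 0.5         # hypothetical proportionality constant (arbitrary units)
ALPHA = 0.6     # hypothetical power-law exponent

def droplet_velocity(E):
    """Toy propulsion velocity for pulse energy E (arbitrary units)."""
    E = np.asarray(E, dtype=float)
    # clip keeps the unused branch of np.where free of negative bases
    return np.where(E > E_TH, C * np.clip(E - E_TH, 0, None) ** ALPHA, 0.0)

for E in [0.5, 1.0, 2.0, 10.0, 100.0]:
    print(f"E = {E:6.1f}  ->  v = {float(droplet_velocity(E)):.3f}")
```

The key qualitative features match the text: strictly zero velocity up to the threshold, then a smooth, monotonic rise that appears as a straight line on a log-log plot.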
→ Reference D. Kurilovich et al., Plasma propulsion of a metallic microdroplet and its deformation upon laser impact. Physical Review Applied 6, 014018 (2016). https://doi.org/10.1103/PhysRevApplied.6.014018
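The threshold-plus-power-law behaviour described above can be illustrated with a toy model. Note that the threshold energy, prefactor and exponent below are invented for illustration, not the values measured at ARCNL; the sketch only shows how such a scaling would be extracted from velocity-versus-energy data.

```python
import numpy as np

# Illustrative only: a toy threshold-plus-power-law model of droplet
# propulsion. The threshold energy, prefactor and exponent below are
# invented numbers, not the values measured at ARCNL.
E_TH = 0.1           # hypothetical threshold pulse energy (mJ)
A, B = 50.0, 0.6     # hypothetical prefactor and power-law exponent

def droplet_velocity(energy_mj):
    """Propulsion velocity (m/s): zero below threshold, power law above."""
    e = np.asarray(energy_mj, dtype=float)
    return np.where(e > E_TH, A * np.clip(e - E_TH, 0, None) ** B, 0.0)

# 'Measure' velocities over a wide range of pulse energies, then recover
# the exponent from the slope of a log-log fit, as one would for data.
energies = np.logspace(-0.5, 2, 50)          # 0.3 to 100 mJ
velocities = droplet_velocity(energies)
slope, _ = np.polyfit(np.log(energies - E_TH), np.log(velocities), 1)
print(f"recovered exponent: {slope:.3f}")    # ≈ 0.600 by construction
```

On real, noisy data the same log-log fit over many decades of pulse energy is what makes the scaling such a sensitive test of plasma theory.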
Then & now
Has the era of personalised medicine arrived? ANDRÉ B.P. VAN KUILENBURG is principal investigator at the Academic Medical Center and Emma Children’s Hospital, UvA.
→Almost seventy years ago, a treatment of cancer using one or more anti-cancer drugs was developed: chemotherapy. Although it was originally developed to cure cancer, it is now also used to prolong life or reduce symptoms. However, determining the effective dosage of chemotherapy can be difficult: if the dose is too low, it will be ineffective against the tumour, whereas, at excessive doses, the toxicity (side-effects) will harm the patient. Adverse drug reactions are a major clinical problem; it has been estimated that, in 1994, they accounted for over 100,000 deaths in the United States, making them the fourth largest cause of death after heart disease, cancer and stroke. For many cancer therapeutics, the dose range associated with anti-tumour efficacy is only slightly lower than the dose associated with toxicity. Thus, in the treatment of cancer patients, there is a sense of urgency to implement new concepts that improve the benefit-to-hazard ratio of chemotherapy, one patient at a time. This approach is called personalised medicine. We have focused on antimetabolites, a group of anticancer drugs whose structure resembles the building blocks of DNA and RNA. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. In this way, they slow down the proliferation of cancer cells or even kill the cancer cells.
“Adverse drug reactions are a major clinical problem.”
↓ Figure 1 In the 1950s, Charles Heidelberger (pictured) created 5-fluorouracil, which is still among the most widely used anticancer drugs today. Photo courtesy of USC University Archives.
The first antimetabolites were discovered in the 1950s and were subsequently introduced into the clinic for the treatment of certain types of cancer. 5-fluorouracil (5FU) is an antimetabolite whose tumour-inhibitory potential was first demonstrated 60 years ago [1], and it has remained the most important component of most currently applied chemotherapies. Unfortunately, a large variation in 5FU responses has been observed between patients, suggesting that some patients receive suboptimal 5FU doses, whereas others run an increased risk of severe toxicity due to overdosing. In this respect, pharmacokinetic (PK) models, which can predict the levels of 5FU in blood, can be helpful for individualised dose determination to increase drug efficacy and minimise adverse effects. Dose adaptation based on PK information is also referred to as therapeutic drug monitoring (TDM). TDM of 5FU has shown encouraging results in achieving the appropriate blood concentrations of 5FU and, thus, improving the efficacy and reducing the toxicity of the therapy. To date, various models to predict the pharmacokinetics of 5FU are available, and some of these models require only a limited number of blood samples to predict a priori the required doses of 5FU [2,3]. The variability in the pharmacokinetics of 5FU has been attributed to various factors, including a deficiency of dihydropyrimidine dehydrogenase (DPD), the enzyme responsible for the degradation, and thereby inactivation, of 5FU. DPD is ubiquitously expressed in humans and its normal physiological role is the degradation of uracil and thymine, building blocks of RNA and DNA, respectively. Since the structure of 5FU is comparable to that of thymine, DPD is also able to degrade 5FU (Fig. 2). Our studies have shown that at least 3-5% of the human population has a partial DPD deficiency due to mutations in the gene coding for DPD. Patients with such a DPD deficiency can suffer from severe toxicity, including death, following the administration of 5FU. Recently, an extensive analysis of 5FU pharmacokinetics in patients with a partial DPD deficiency has been performed. Pharmacokinetic modelling revealed significant differences between DPD-deficient patients and controls. Therefore, therapeutic drug monitoring should be implemented as the standard of care to ensure safe and effective treatment of cancer patients with 5FU. In the new era of healthcare, it is predicted that personalised medicine will become the standard of care, resulting in improved treatment outcomes, less toxicity and reduced costs. Personalised medicine involves identifying genetic and clinical information that enables accurate and tailor-made predictions about a person's susceptibility to developing a disease, the course of that disease, and its response to treatment. A prerequisite for personalised medicine to be used effectively by healthcare providers is the availability of precise diagnostic tests, adequate models and targeted therapies. Unfortunately, TDM of 5FU is not yet part of routine clinical practice and we believe it is imperative to increase general awareness of the beneficial effects of TDM-guided dose administration for the patient. In the era of personalised medicine, TDM should become the standard of care. Ω
→ References 1. C. Heidelberger et al., Fluorinated pyrimidines, a new class of tumour-inhibitory compounds. Nature 179, 663–666 (1957). 2. A.B. van Kuilenburg and J.G. Maring, Evaluation of 5-fluorouracil pharmacokinetic models and therapeutic drug monitoring in cancer patients. Pharmacogenomics 14, 799-811 (2013). 3. A.B.P. van Kuilenburg et al., Evaluation of 5-fluorouracil pharmacokinetics in cancer patients with a c.1905+1G>A mutation in DPYD by means of a Bayesian limited sampling strategy. Clin. Pharmacokinet. 51, 163-174 (2012). 4. D. Bertholee, J.G. Maring and A.B.P. van Kuilenburg, Genotypes affecting the pharmacokinetics of anticancer drugs. Clin. Pharmacokinet. 56, 317-337 (2017).
↓ Figure 2 Dihydropyrimidine dehydrogenase (DPD) is an enzyme responsible for the degradation, and thereby inactivation, of the nucleobases uracil and thymine, which are building blocks of RNA and DNA, respectively. Since the structure of 5FU is comparable to that of thymine, DPD is also able to degrade 5FU.
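The logic of TDM-guided dosing can be sketched with a deliberately simple one-compartment pharmacokinetic model. Real 5FU kinetics are non-linear and the clinical models cited above are far more refined; every parameter value below is hypothetical, chosen only to illustrate the dose-adjustment idea.

```python
import math

# A deliberately simple one-compartment sketch of TDM-style dose
# adjustment. Real 5FU kinetics are non-linear and clinical models are
# far more refined; every number here is hypothetical.
def concentration(dose_mg, clearance_l_h, volume_l, t_h):
    """Plasma concentration (mg/L) after a bolus dose, first-order decay."""
    k_el = clearance_l_h / volume_l           # elimination rate (1/h)
    return (dose_mg / volume_l) * math.exp(-k_el * t_h)

def auc(dose_mg, clearance_l_h):
    """Total exposure (mg*h/L): area under the concentration curve."""
    return dose_mg / clearance_l_h

def tdm_adjust(dose_mg, measured_auc, target_auc):
    """Scale the next dose proportionally towards the target exposure."""
    return dose_mg * target_auc / measured_auc

# Two hypothetical patients given the same 800 mg dose: a normal
# metaboliser and a partially DPD-deficient patient with halved clearance.
TARGET_AUC = 25.0
for label, clearance in [("normal", 32.0), ("DPD-deficient", 16.0)]:
    exposure = auc(800, clearance)
    next_dose = tdm_adjust(800, exposure, TARGET_AUC)
    print(f"{label}: AUC = {exposure:.1f} mg*h/L, next dose = {next_dose:.0f} mg")
```

In this toy picture, the DPD-deficient patient's halved clearance doubles the exposure from the same dose, and the proportional rule halves the next dose, which is exactly the kind of correction TDM makes possible.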
Building smart materials with flexible building blocks
CORENTIN COULAIS is assistant professor at the Institute of Physics, UvA.
→Material properties are largely governed by the atoms and molecules out of which the materials are made. In contrast, metamaterials are artificial materials which, owing to their carefully designed architecture, can exhibit properties not found in nature. Over the past few years, scientists from widely different fields have started to develop metamaterials that can manage and channel light, heat, deformations and mechanical vibrations in unprecedented ways. Examples include thermal, mechanical and optical cloaking (think of Harry Potter’s ‘invisibility cloak’), optical super-resolution, one-way propagation of light or mechanical signals, and surprising mechanical responses. The list of exciting new properties keeps on growing, and metamaterials now start to offer possibilities for society and industry that were inconceivable not so long ago.
Periodic vs. non-periodic
When compressed in one direction, ordinary materials expand laterally. However, metamaterials made of a two-dimensional (2D) square pattern of pores can shrink laterally. In most situations that have been studied to date, the metamaterial’s architecture consists of a single building block, repeatedly stacked in space, similar to a crystalline lattice. Creating a material that shows unusual mechanical responses is greatly simplified by the use of such periodic architectures. However, with the advent of 3D printing, it is now possible, and even relatively easy, to produce arbitrarily complex 3D shapes that are non-periodic. It has thus become possible to create materials with arbitrarily complex properties. The key challenge is now to find out which structure to fabricate and how to design new functionalities.
Flexible building blocks
Inspired by the striking mechanical effect described above and its apparent simplicity, at least in 2D, we wondered how to extend it to 3D non-periodic materials. A first step was to design a cubic building block that preferentially deforms into a flat or elongated shape (see Fig. A). A 3D periodic stacking of many such flexible building blocks then results in a new material with an easy mode of deformation, which has the desired effect: the material folds in laterally when compressed (Fig. B).
A 3D puzzle
Such a 3D strategy reproduces the mechanism previously observed in 2D, yet introduces a completely new flavour. As opposed to 2D, the 3D building blocks have an orientation. In the case described above, the blocks were all stacked parallel, but in principle they can also be stacked with different orientations. Obtaining compatible deformations in a stacking of multiple such units turns out to be a particularly tortuous combinatorial problem. One indeed has to ensure that all the local protrusions and dents of each unit cell fit in 3D (Fig. C), to design a cube with no internal frustration, where all the building blocks can flex in harmony.
Mechanical spin-ice
Progress can be made by reducing the puzzle to its essence and representing the inwards and outwards deformations by spins, just as in a spin-ice. Even with that simplification, solving the problem in its full complexity amounts to solving the notoriously unsolved 3D Ising model. Fortunately, we realised that, since the building blocks have mirror symmetries (Fig. C), our mechanical spin-ice problem was tractable. Using advanced mathematical analysis, we were able to solve it and, in particular, count the number of possible configurations in which all the spins fit together. We found that the number of possibilities is astronomical: for a 14 x 14 x 14 stacking, the number of possible cubes is equal to 30625852190216495511364347343665021261262812628299779541749100810!
Designing functionality
Along the way, we also uncovered principles to orient the building blocks in order to rationally design cubes with a fully programmable texture at the surface. To materialise this concept, we 3D-printed a cube which, when undeformed, has flat surfaces and, when compressed, reveals a carefully designed smiley (Fig. D)! Although our study is fundamental in nature, there may be applications on the horizon. The principles we uncovered could be generalised; programmable textures obtained in this way could be used to tune interfacial phenomena such as friction, wetting or adhesion. Finally, this type of programmable shape changer could be ideal for prostheses or wearable technology. Ω
→ Reference C. Coulais et al., Combinatorial design of textured mechanical metamaterials. Nature 535, 529-532 (2016). https://doi.org/10.1038/nature18960
↙ Figure A. Cubic flexible building block in its undeformed and deformed states. B. Silicone rubber metacube consisting of 5x5x5 building blocks, undeformed (left) and under uniaxial compression (right). Scale bar is 1 cm. C. (left) Deformed building blocks in their three possible orientations, depicted in red (x), green (y) and blue (z). (middle to right) Sketch of 2x1x1 (top and middle) and 2x2x2 (bottom) complex stackings of differently orientated, deformed building blocks. D. 3D-printed 10x10x10 mechanical metamaterial. This cube is made of 10x10x10 building blocks, where the outside blocks have been decorated with square pedestals to clearly visualise surface deformations. (top) Undeformed state. (bottom) Deformed state revealing the programmed smiley texture. Scale bar is 2 cm.
Walking with accelerometers
ENCARNA MICÓ-AMIGO is PhD student at the MOVE Research Institute Amsterdam, Department of Human Movement Sciences, VU.
→For most of us, walking is a common activity in daily life. Moreover, the way we walk reflects the motor computation of our brain and mirrors our health status. Thus, the evaluation of walking (i.e., gait) is important in a clinical context, especially for the diagnosis, treatment and rehabilitation of motor disorders. Quantitative evaluation of gait patterns, using parameters such as step duration, step length
and walking speed, requires objective and reliable technology. In our study, we assessed gait by means of a single wearable sensor, which is low-cost, small, lightweight and easy to use. The main aim of this research was to develop a new algorithm for the identification of steps from acceleration signals of short duration. Twenty healthy older adults (average age of 74 years) were asked to walk a distance of 5 meters wearing a sensor strapped to their lower back, which recorded linear accelerations in 3D. To test whether our accelerometry algorithms can extract useful information from such a simple system, an additional 3D motion tracking system was used for validation. Thus, the step duration obtained by the two methods could be compared.
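The core idea, extracting step times (and hence step durations) from peaks in the acceleration signal, can be sketched as follows. This is a toy detector run on synthetic data, not the published algorithm, which must cope with far noisier real-world gait signals.

```python
import numpy as np

# Toy step detector: find peaks in a synthetic vertical-acceleration
# trace. This is NOT the published algorithm, just the core idea of
# recovering step times (and hence step durations) from accelerometry.
fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                  # a 5-second walk
step_freq = 2.0                              # two steps per second
rng = np.random.default_rng(0)
accel = np.sin(2 * np.pi * step_freq * t) + 0.05 * rng.standard_normal(t.size)

def detect_steps(signal, threshold=0.5, min_gap=25):
    """Indices of local maxima above threshold, at least min_gap samples apart."""
    is_peak = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])
    candidates = np.where(is_peak & (signal[1:-1] > threshold))[0] + 1
    steps = []
    for i in candidates:
        if not steps or i - steps[-1] > min_gap:   # merge maxima of one step
            steps.append(i)
    return np.array(steps)

steps = detect_steps(accel)
durations = np.diff(steps) / fs              # step durations in seconds
print(f"{len(steps)} steps, mean duration {durations.mean():.2f} s")
```

Comparing such accelerometer-derived step durations against a reference motion-tracking system is exactly the validation step described above.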
Our proposed method of using lower-back accelerometry successfully detected all the steps taken, with errors below the 5% level. This implies that algorithmic analysis of data from a single wearable sensor can be used to obtain accurate information regarding the duration of steps taken during walking. Hence, this low-cost and easily deployable method provides new opportunities to quantitatively analyse walking patterns in a clinical context. The next step (pun intended) is the objective evaluation of motor symptoms associated with movement disorders, such as Parkinson’s disease, Huntington’s disease and many others. Ω
→ Reference M.E. Micó-Amigo et al., Journal of NeuroEngineering and Rehabilitation 13, 38 (2016). https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-016-0145-6
↓ Figure Wearable sensor to analyse walking patterns.
Gene expression: a search for stability
ANOESKA VAN DE MOOSDIJK is PhD student in the Section of Molecular Cytology, Swammerdam Institute for Life Sciences, UvA.
→Like other scientific areas, biomedical research has entered the era of big data. Nowadays, datasets can contain gene expression information for thousands of genes, and these datasets are often used to identify genes that are expressed differently under different experimental conditions. The vast majority of genes, however, don't show such differences in expression and are left out of the further analysis. Yet these stably expressed genes also contain a wealth of information, so we decided to examine them in publicly available gene-expression datasets. Our main motivation is our ongoing quest to understand how subtle changes in gene expression drive growth and differentiation of complex tissues. Such changes are often studied via a highly sensitive experimental technique called quantitative reverse transcription polymerase chain reaction (qRT-PCR), which measures how much RNA is transcribed from a given gene at any point in time. Importantly, this technique only gives quantitative results if the experimental data are properly normalised. This requires the simultaneous analysis not only of the target genes but also of so-called ‘reference genes’. The latter should show a stable expression pattern in all experimental samples. It turns out that such boring (but very useful) genes are hard to find, especially in complex and dynamically growing tissue such as the mammary gland, our tissue of interest and the organ in which breast cancer arises. In fact, we found that many traditionally used reference genes were unsuitable for normalising the subtle gene
expression changes we were after. Scrutinising previously published datasets, we identified mouse mammary gland genes that did show stable expression across puberty, pregnancy and lactation (Figure). We validated this experimentally by showing that this novel set of reference genes outperformed traditionally used reference genes in expression stability. While our study focused on the mammary gland, the same approach could easily be applied to other tissues for which proper reference genes are lacking, thus boosting the accuracy of qRT-PCR data in a variety of important applications. Ω
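The basic screening idea, ranking candidate genes by how little their expression varies across samples, can be sketched as follows. The gene names and numbers here are invented, and the published study used real datasets and established stability measures; this toy version simply ranks candidates by their coefficient of variation.

```python
import numpy as np

# Toy screen for stable reference genes: rank candidates by the
# coefficient of variation (CV) of their expression across samples.
# Gene names and numbers are invented; the published study used real
# datasets and established stability measures.
rng = np.random.default_rng(1)

# simulated expression levels (arbitrary units) in 30 samples spanning
# puberty, pregnancy and lactation
expression = {
    "candidate_A": rng.normal(100, 2, size=30),   # stable
    "candidate_B": rng.normal(100, 25, size=30),  # highly variable
    "candidate_C": rng.normal(100, 5, size=30),   # fairly stable
}

def cv(values):
    """Coefficient of variation: std / mean (lower means more stable)."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

ranked = sorted(expression, key=lambda gene: cv(expression[gene]))
for gene in ranked:
    print(f"{gene}: CV = {cv(expression[gene]):.3f}")
```

The most stable candidates found this way would then be validated experimentally before being used to normalise qRT-PCR data.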
→ Reference A.A.A. van de Moosdijk, R. van Amerongen, Identification of reliable reference genes for qRT-PCR studies of the developing mouse mammary gland. Scientific Reports 6, 35595 (2016).
← Figure Confocal microscopy image showing the rapidly growing end-bud structure of the mammary epithelium in the midst of puberty. Green cells are at most 48 hours old, the red ones are older.
Watching larvae grow up – in real time
JEROEN VAN ZON is principal investigator at AMOLF.
↗ Figure A. Snapshots of a single C. elegans worm growing inside a microchamber (time since hatching indicated in hours). The nematode lights up green when a specific protein is synthesised. This protein is part of the biological clock that orchestrates the rhythm of the nematode’s development. B. Measurements of protein fluorescence vs. time in 15 individual nematodes, the black curve belonging to the worm pictured on the left. The length of the cycles varies, meaning some nematodes grow up faster than others.
→A developing embryo is a beehive of activity, with cells growing, dividing, migrating and deforming in exactly the right way, resulting in a much larger and incredibly complex adult organism. Time-lapse microscopy, with its ability to capture processes that take place very slowly, has become an indispensable tool to study embryonic development. For example, recent technological advances now allow researchers to follow thousands of cells over many hours during the development of zebrafish or fruit-fly embryos. However, development is not finished at birth. Even though the most dramatic transformations occur in the developing embryo, all animals continue to grow after birth. This often includes striking transitions, with puberty in humans as an example. What makes this period of post-embryonic development especially interesting is that, while the embryo develops in the protected environment of the egg or the uterus, development after birth is highly dependent on the environment, particularly on the amount and type of food available. Using microscopy to zoom in on development after birth is tricky for a number of reasons. For instance, whereas a zebrafish embryo is small, transparent, immobile and develops rapidly, the baby fish that emerges from its egg grows quickly in size, becomes opaque, moves around and takes a much longer time to develop into adulthood. We have developed a new microscopy approach that overcomes these challenges, allowing us to track the entire process of development from newborn organism to adult animal at the level of individual cells. As a key first step to overcoming the challenges, we turned to the nematode Caenorhabditis elegans, a small soil worm that feeds on bacteria. Over the past decades, this roundworm has become an important model organism for developmental biologists. Two crucial advantages make C. elegans uniquely suited for studying post-embryonic development. First, it is small and remains fully transparent throughout its entire life. An adult worm is only a millimetre long – small enough for it to be viewed in its entirety under the microscope. Second, it develops quite rapidly – taking only two days to go from a single fertilised egg to an adult worm. However,
“We captured the ticking of a biological clock on film for the first time.”
one major challenge remained: C. elegans larvae, once hatched from their eggs, are highly motile, making them difficult beasties to track. Moreover, preventing their movement, for instance by holding them in place, also stops them from feeding, causing their development – the very thing we want to study – to halt immediately. We solved this problem by placing C. elegans larvae inside small, purpose-built micro-fabricated chambers, no wider than the diameter of a human hair. These chambers keep the worms, which otherwise move normally, within the microscope’s field of view for the entire duration of their development into adulthood. Using a microscope camera with a fast shutter speed (1-10 milliseconds), we were able to capture sharp images even when the larvae were moving rapidly inside their chambers. The result is a time-lapse film of a new-born larva – just one hundred micrometres long – developing into a one-millimetre-long adult nematode.
This new approach can be used to expand our knowledge of post-embryonic development in more complex animals, including humans. There is, for instance, the still unanswered question of how the body knows when exactly to initiate major transitions, such as the shedding of milk teeth or the onset of puberty. It is thought that a biological clock controls the precise timing of these and other developmental events by regulating the production of specific proteins. A similar clock controls the timing of development in our worms. When we labelled one such clock protein fluorescently, we could show that it was produced every ten hours during post-embryonic development – the first time that the ‘ticking’ of this clock has been captured on film. Apart from studying clock proteins, we are currently visualising how individual cells divide and communicate in feeding and moving C. elegans larvae. As a next step, we want to understand how the developmental clock impacts and controls the behaviour of cells, and how this is influenced by environmental factors such as food availability. Interestingly, the interaction between diet and development is mediated by the same signalling proteins (e.g. insulin) in both worms and humans. We hope that our tiny worms, neatly parked in their microchambers, might tell us more about how the human body deals with food abundance or scarcity. Ω
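Extracting a roughly ten-hour period from a fluorescence trace like those in the Figure is a standard signal-processing task. A minimal sketch on synthetic data (not the actual single-worm measurements):

```python
import numpy as np

# Toy period estimate for a clock-protein trace: synthetic fluorescence
# with a 10-hour oscillation plus noise, standing in for the real
# single-worm measurements shown in the Figure.
rng = np.random.default_rng(2)
dt = 0.25                                    # sampling interval (hours)
t = np.arange(0, 50, dt)                     # a 50-hour recording
fluorescence = np.sin(2 * np.pi * t / 10) + 0.3 * rng.standard_normal(t.size)

def dominant_period(signal, dt):
    """Period (hours) of the strongest non-constant Fourier component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    peak = np.argmax(spectrum[1:]) + 1       # skip the zero-frequency bin
    return 1.0 / freqs[peak]

print(f"estimated clock period: {dominant_period(fluorescence, dt):.1f} hours")
```

The same analysis applied to each worm's trace would quantify how much the cycle length varies between individuals, as panel B of the Figure shows.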
→ Reference N. Gritti, S. Kienle, O. Filina and J.S. van Zon, Long-term time-lapse microscopy of C. elegans post-embryonic development. Nature Communications 7, 12500 (2016).
How large impacts changed Mercury’s rotation
JURRIEN S. KNIBBE is PhD student at the Department of Earth Sciences, VU.
→During the early evolution of the inner part of our solar system, rocky planets and moons formed through a series of high-velocity collisions. As a result, most planets and moons were initially rotating rapidly about a rotational axis, while orbiting their host object (star and planet, respectively). In 1880, George Darwin showed that the gravitational interaction with their host object causes planets and moons to de-spin, meaning that their rotation slows down. In the simplest end-scenario, this implies that the same hemisphere always faces the host object. In this case, the planet or moon has synchronised its rotation with its orbital period (Fig. 1a). Bodies closer to their host object feel a stronger gravitational pull, such that they de-spin more rapidly. This is why most moons (which are relatively close to their host planet) have reached a synchronous rotation, meaning for us that we always see the same side of our moon. In contrast, planets far from the Sun (their host object), such as Earth and the planets further out in the solar system, are still in the process of de-spinning. The planets closest to our Sun have finished their de-spinning evolution, but have ended up in exceptional rotational states: Venus spins backwards and Mercury rotates three times during every two orbits around the
Sun (Fig. 1b). Mercury's so-called 3:2 spin-orbit resonance is stable due to the elliptic shape of its orbit. To investigate the course of events that led to this exceptional rotational state, we can analyse ancient surface features, assessing the origin of a striking asymmetry in the distribution of large impact craters. This asymmetry was reported in 2011, based on the first global surface photography of Mercury obtained by NASA’s MESSENGER space mission (Fig. 2a). Only six of the twenty-six largest impact craters are located on Mercury’s eastern hemisphere, which is unlikely (only 1% probability) assuming a uniform impact rate [1,2]. Thus, the hemispherical asymmetry in crater impacts led to the proposal that in the past Mercury attained a synchronous rotation state with respect to the Sun [3], similar to our moon with respect to Earth. A simple calculation shows that impacts producing craters larger than ~600 km in diameter would provide enough energy to destabilise any rotational state of Mercury. In fact, NASA's data show Mercury’s surface to have 15 craters with a diameter greater than 600 km. The probability that the impact that produced Mercury’s largest and also youngest crater (Caloris, ca. 1550 km in diameter, 4 billion years old) left the rotational state of Mercury unchanged is less than 5%. Thus, the Caloris impact has been suggested to have destabilised the synchronous rotation state that Mercury had achieved (Fig. 1a), resulting in the present 3:2 spin-orbit resonance (Fig. 1b). However, a subsequent study [4] showed that it is unlikely for Mercury to have reached its current 3:2 spin-orbit resonance starting from a synchronous rotation (a lower spin rate). To avoid decelerating from a
destabilised 3:2 spin-orbit resonance to a synchronous resonance (as predicted by [4]), Mercury must have started in a higher-spin-rate rotational state before the Caloris impact. Several spin-orbit resonances of higher spin rate are stable for Mercury’s elliptic orbit (a crater-forming collision alters the spin of the planet but not its orbit). We showed that, starting from an initial rapid rotation, Mercury would most likely get trapped in the 2:1 spin-orbit resonance state (Fig. 1c). We simulated the distribution of crater impacts in this rotational state and showed that a hemispherically asymmetric crater distribution is produced in the 2:1 spin-orbit resonance state (Fig. 2b). These results point to a de-spinning rotational evolution for Mercury towards the current 3:2 spin-orbit resonance via a 2:1 spin-orbit resonance, a scenario in line with the spatial distribution of ancient large craters. This rotational evolution scheme is probably typical for planets and moons in elliptic orbits like Mercury's, whereby capture takes place into spin-orbit resonances other than synchronous rotation. The number of large impacts from that point on determines whether they remain locked in a spin-orbit resonance of high spin rate or whether their spin state gets destabilised, with further de-spinning as a consequence. This process provides a direct link between the surfaces of (geologically dead) planets and their rotational/orbital evolution. Our theory solves the twin mysteries of Mercury’s unusual rotational state and the origin of the asymmetry of its large craters. The approach taken also offers new avenues for understanding the rapidly expanding observational data on planets in solar systems other than our own. Ω
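The quoted ~1% likelihood of the crater asymmetry can be reproduced with a textbook binomial calculation, under the simplifying assumption that each crater independently falls on either hemisphere with probability 1/2:

```python
from math import comb

# If large impacts hit Mercury uniformly, each of the 26 craters lands on
# the eastern hemisphere with probability 1/2, independently. The chance
# of a split at least as lopsided as the observed 6-vs-20 is then a
# two-sided binomial tail probability.
def tail_probability(n, k):
    """P(X <= k) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

p_two_sided = 2 * tail_probability(26, 6)
print(f"probability of a split this extreme: {p_two_sided:.4f}")  # ≈ 0.0094
```

The result, just under 1%, matches the probability quoted in the text and is what makes the observed asymmetry so suggestive of a past non-uniform impact geometry.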
→ References 1. C.I. Fassett et al., Large impact basins on Mercury: Global distribution, characteristics, and modification history from MESSENGER orbital data. J. Geophys. Res. 117 (E12), E00L08 (2012). 2. S.C. Werner, Moon, Mars, Mercury: Basin formation ages and implications for the maximum surface age and the migration of gaseous planets, Earth Planet Sci. Lett. 400, 54-65 (2014). 3. M.A. Wieczorek et al., Mercury’s spin-orbit resonance explained by initial retrograde and subsequent synchronous rotation, Nat. Geosci., 5, 18-21 (2012). 4. B. Noyelles et al., Spin-orbit evolution of Mercury revisited, Icarus 241, 26-44 (2014).
↑ Figure 1 The geometry of an object (circle) in three stable rotational states as it orbits its host object (the star). Dots represent a body-fixed position on the object. In (b), the open dots represent the position of the same body-fixed point in every second orbital period. Note that the rotational states of (b) and (c) are only stable if the orbit is not a perfect circle. ↙ Figure 2 (a) The 26 impact craters on Mercury found in two crater-observation studies (dots for [1], crosses for [2]) are mostly located on the western hemisphere. (b) The simulated relative cratering rate of large craters for Mercury in a 2:1 spin-orbit resonance. This simulation used the orbital trajectories of detected present-day objects in the inner solar system.
To mimic human brain development in a dish, we derived neural tube cells. The neural tube is formed early in embryonic development as a first step towards the central nervous system. Neural tube cells appear in a rosette-like pattern and can differentiate into several brain cell types. Using growth factors, we induced neural-tube formation from human induced pluripotent stem cells (hiPSCs). In this image, you see the rosette patterns of neural tube cells expressing the neuroectoderm marker Nestin (red) and the stem-cell marker SOX2 (green), with nuclear stain DAPI. In our lab, we differentiate patient-derived iPSCs into brain cells to study their function and properties in a diseased state. AISHWARYA G. NADADHUR is PhD student in the Functional Genomics (FGA) department at the Centre for Neurogenomics and Cognitive Research (CNCR), VU. → Reference A.G. Nadadhur et al., Multi-level characterization of balanced inhibitory-excitatory cortical neuron network derived from human pluripotent stem cells, submitted.
A new global dataset to protect the hundreds of millions threatened by rising sea levels
SANNE MUIS is PhD student at the Institute for Environmental Studies, VU.
→Extremes in sea levels can cause catastrophic floods. A more accurate measurement of these extremes allows for a better understanding of the risks faced by more than 600 million people living in low-lying coastal areas.
In our recent publication in Nature Communications, we present a new dataset of extreme sea levels, now mapped with greater accuracy than ever before along the world's entire coastline. For our Global Tide and Surge Reanalysis (GTSR) dataset, extreme sea levels are derived from simulations with a global hydrodynamic model recently developed by Deltares. Wind speeds and atmospheric pressures from 1979 to 2014 were used to simulate wind- and storm-driven changes in sea levels for the entire period. We then extrapolated these simulations to estimate the sea levels for different return periods (the Figure shows the outcomes for a return period of 10 years). At the regional scale, hydro-
Pixels, parcels or farms? Analysis of farm system changes in the Mar Chiquita basin, Argentina
KARINA ZELAYA is PhD student in the Environmental Geography Group, VU, and researcher at the National Institute of Agricultural Technology (INTA) in Argentina. → Reference K. Zelaya, J. van Vliet, P.H. Verburg, Characterization and analysis of farm system changes in the Mar Chiquita basin, Argentina. Applied Geography 68, 95-103 (2016).
→‘Land cover’ and ‘land use’ designate, respectively, the physical material at the Earth’s surface and the way people use that surface, for instance by converting wilderness into built environment. Keeping track of (changes in) these parameters is important for biodiversity monitoring, land planning, achieving food security, setting economic policies and studying climate change. Both parameters are usually identified from satellite data, yielding information down to the level of the individual pixels of the image. However, pixel-level data are far from suitable for making decisions regarding land management and the setting of land-planning regulations. We aimed to define which resolution level is suitable for land-planning applications. As a study area, we chose an agricultural basin in the southeast of the Buenos Aires province in Argentina. Using remote sensing data recorded from
→ Figure Changes in land cover and farm system between 1999 and 2013.
dynamic modelling is a state-of-the-art method to assess flood hazards, but this is the first time that such an approach has been applied at the global scale. Comparison of the modelled sea levels with observed sea levels across the globe shows that GTSR is sufficiently accurate; in fact, its performance is similar to that of many regional hydrodynamic models. GTSR is a big step forward for global modelling, but it is also just a first step, and over the next years the global model will be developed further. Obviously, improvements can be made in terms of resolution and computation time. Currently, we are experimenting with ways to include the effects of surface waves and tropical cyclones. We foresee that this dataset will have a huge impact on efforts to assess flood risk, as well as to gain insight into which areas are at risk now. As a first application of the new extreme sea levels model, we now estimate that 76 million people are potentially exposed to a
1-in-100-year flood. To encourage other researchers and policy-makers to make use of these new data and to develop effective adaptation strategies, we have released GTSR as an open-access dataset. Ω
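The return-period extrapolation mentioned above can be illustrated with a standard extreme-value recipe. The sketch below is not the GTSR method itself, but a minimal, self-contained example: it fits a Gumbel distribution to invented annual maximum surge heights by the method of moments and reads off return levels.

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit(annual_maxima):
    """Fit a Gumbel distribution by the method of moments."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6) / math.pi   # scale parameter
    mu = mean - EULER_GAMMA * beta        # location parameter
    return mu, beta

def return_level(mu, beta, T):
    """Sea level exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Illustrative annual maximum surge heights (m) for one coastal point,
# one value per year of the 1979-2014 simulation period (36 years).
random.seed(42)
maxima = [1.0 + random.gauss(0.3, 0.15) for _ in range(36)]

mu, beta = gumbel_fit(maxima)
for T in (2, 10, 100):
    print(f"RP{T:>3}: {return_level(mu, beta, T):.2f} m")
```

The heights and the choice of a Gumbel fit are assumptions for illustration; the actual GTSR analysis works from multi-decadal hydrodynamic simulations at every coastal segment.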
1999 to 2013, we compared data at three levels: the fine-grained, single-pixel level of the satellite data, the level of a parcel of land within a farm and the level of an individual farm itself. Then, we analysed the biophysical factors at work in order to describe the spatial pattern of the current land use and how these factors influenced changes in land use. We observed that over fourteen years, there were widespread changes in land cover (Figure, left image) and land use, involving an increase in soybean plantations and a decrease in grassland. In addition, the type of farming (the so-called farm system) changed as well (Figure, right image). In fact, almost half of the livestock farms disappeared, and cropland farms and mixed livestock-crop farms doubled in number. Factors such as accessibility, soil
suitability and the relatively high elevation influenced both land-cover and land-use changes, seen here as the conversion to cropland. We found that these causal relationships were all revealed with better accuracy when farm-level rather than pixel-level data were used, partly because aggregating to farms reduces the noise caused by crop rotation. Our results clearly show that the spatial organisation of agricultural land use is better analysed at the level of individual farms than at the pixel level, and thus farm-level data should form the basis of rural land planning. As farms are the units at which land-use decisions are actually implemented, insight into changes at this level is a good guide to the development of land management practices, land-use planning and related policies. Ω
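One simple way to see why farm-level data are less noisy: aggregate the pixel labels within each farm and keep the majority class, so that a single rotated parcel no longer changes the farm's classification. The snippet below is a toy illustration with invented labels, not the authors' actual procedure.

```python
from collections import Counter

# Hypothetical pixel-level land-cover labels, grouped by the farm each
# pixel belongs to. A farm mostly under crops keeps the label 'cropland'
# even if one parcel is temporarily under grass due to crop rotation.
pixels = [
    ("farm_A", "cropland"), ("farm_A", "cropland"), ("farm_A", "grassland"),
    ("farm_B", "grassland"), ("farm_B", "grassland"), ("farm_B", "cropland"),
]

def majority_land_use(pixels):
    """Return the majority land-cover class per farm."""
    by_farm = {}
    for farm, label in pixels:
        by_farm.setdefault(farm, Counter())[label] += 1
    return {farm: counts.most_common(1)[0][0] for farm, counts in by_farm.items()}

print(majority_land_use(pixels))
# → {'farm_A': 'cropland', 'farm_B': 'grassland'}
```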
↓ Figure Extreme sea levels (in m) depicted for the coastline with a 10-year return period (RP10).
→ Reference S. Muis, Nature Commun. 7, 11969 (2016). DOI: http://doi.org/10.1038/ncomms11969 The dataset can be downloaded from http://data.4tu.nl/repository/uuid:aa4a6ad5-e92c-468e-841bde07f7133786.
Science communication: involve your audience Some years ago, an acquaintance tipped me off about a brand-new project from abroad: a computer game to stimulate young people to choose a career in science. Ha, a game, that should be fun! To make a long story short: after 15 minutes, I realised that this was going to be nothing more than a long, long clicking and reading session. I could only guess how much fun a random 15-year-old would have with a game that put me off – a biologist and science fan. At the same time, however, the game looked good. Clearly, money was spent on it, and multiple scientists must have invested a lot of time. So, the starting conditions for this project were actually better than average. What went wrong then? To answer that question, I’d like to look at three signs of quality in science communication projects. A first sign of quality is involving the intended audience at an early stage. In this case, audience members could have answered questions like: What gratification do they look for in games? How much time are they willing to spend? I suspect that the project initiators either did not involve members of the target group early enough – or didn’t listen to what they said. A second helpful indicator is the clarity of the project goals. In theory, this project does well: it claims to want to motivate young people to choose a research career. But if you really want to do that, I doubt a computer game would be the appropriate means. Maybe what the initiators really wanted, deep down, was just to make a game, period? Of course, making a game can be fun, educational and rewarding. But it is still necessary to think about realistic objectives. After all, I can’t imagine that anyone deliberately wants to make a game that nobody plays. Which brings me to a third point: the
people and skills involved. In this project, the initiators clearly recognised essential competences that they did not possess. These were mainly technical skills, to do with building and designing a game. However, this is first and foremost a science communication project for young people. And from the looks of it, precisely this professional expertise was lacking: how to set realistic goals and choose suitable media for this specific target group. This game is an existing but extreme example. However, parallels may be drawn with many science communication efforts. Take, for example, the Amsterdam Science magazine. This magazine depends on hours of volunteer work by motivated scientists; it has a budget that is spent on printing, postage and subcontracted technical skills (graphic design). It looks good. However, one might get the feeling that the intended audience was not intensively engaged early on. The website and Facebook page state “a broad public/audience,” which, in my experience, too often translates to “we make something WE and our colleagues like and just keep our fingers crossed that someone else likes it too.” The number of likes on the Facebook page does not bode well for the second part of that translation. Or the first part, for that matter. The question to ask would be: what would this intended audience be interested in hearing? Many non-scientists are very interested in science. But they might not be interested in a lot of English text on details of contemporary research. They might not even be interested in a magazine. The magazine’s goals are stated on the website: give a platform to students and scientists, and show positive characteristics of the Amsterdam scientific community to each other and the world. Building a community and putting its qualities on display is a perfectly fine goal. It also requires specialist expertise
ALEX VERKADE works at the Rathenau Institute, where he develops a tool to assess and reward the quality of public engagement activities by scientists. He studied Biology at the University of Amsterdam. Shortly before graduating in 2000, he co-founded De Praktijk, a project bureau for science communication and education. With that company, among many other things, he organised ten annual editions of the Discovery Festival, a late-night science, art and music festival, and he co-initiated the national Science Communication Conference. In 2016, Alex left De Praktijk.
– not from a graphic designer, but from science or even strategic communication experts. The first thing they will tell you: community building screams for interaction. Through social media, for instance. Or events. Unfortunately, the Twitter feed has produced 4 tweets and amassed all of 14 followers in a year. None of the UvA scientists and alumni I spoke to had ever heard of the magazine. To summarise: the magazine might run the risk of pouring all this time, money and enthusiasm into a flashy magazine that nobody reads. Of course, I have no intention of picking on good initiatives for not being perfect. Let’s celebrate that there is a well-designed, well-edited magazine about Amsterdam science at all. Let’s acknowledge and appreciate that many scientists, under a lot of work pressure, take time to put themselves out there and communicate their work to the public at all. And let’s admire the editors of Amsterdam Science, who let me criticise their magazine in their own magazine. But let’s not leave it at that. Let’s push things forward. If we enlist science communication experts; set clear goals and evaluate; choose appropriate means; talk and listen to the target group – if we do that, all our energy, time and money could yield more impact and we could help more people truly engage with science. And isn’t that what we all want? Ω
A mysterious cosmic radio signal coming from long, long ago in a galaxy far, far away JASON HESSELS is associate professor at the Anton Pannekoek Institute for Astronomy, UvA, and associate scientist at ASTRON, the Netherlands Institute for Radio Astronomy.
You might think that the night sky is largely unchanging, that stars and galaxies do not evolve much on human timescales, and that advances in astronomy are mostly driven by bigger telescopes that can peer deeper into the cosmos. That's not the whole story, however. One of the main drivers of modern astrophysics is the time domain: the exploration of how astronomical objects change in time. We call such sources ‘transients’ and our scientific interest in studying them often stems from the fact that they can probe the extremes of the Universe: e.g. the highest energy/matter densities, deepest gravitational wells and strongest magnetic fields. The information we infer from these sources can increase our understanding of fundamental physics because they provide us with natural laboratories that showcase conditions far beyond anything we can artificially create on Earth. So, what kinds of sources are being studied by astronomers? Supernovae - which herald the explosive collapse of massive stars - are a classic example of an astronomical transient (they have been recorded throughout the past centuries by astronomers and royal astrologers, a famous example being the birth of the Crab nebula and pulsar in 1054 CE). Another source class is the so-called ‘gamma-ray bursts’, which were first discovered in the late 1960s and represent even more energetic processes, like the collapse of ultra-massive stars or the catastrophic collision of neutron stars. Detecting astronomical transients is technically challenging because they are often rare in occurrence and short-lived. Though supernovae and gamma-ray bursts are now routinely detected, and deep follow-up studies are done, it is very likely that there are other astronomical transients still to be discovered. Simply put, we
are still very far from scanning the whole sky, all the time, and across the full breadth of the electromagnetic spectrum, i.e., from low-energy radio waves up to high-energy gamma-rays. If you blink, so to speak, you might miss all the action. In the last 10 years, radio astronomers like myself have started to detect enigmatic new transient astronomical radio signals, collectively termed ‘Fast Radio Bursts’, or FRBs for short. FRBs are radio flashes that last for only a few milliseconds (i.e., 100 times shorter than the blink of an eye) and which are so faint that they can only be seen using state-of-the-art detectors on the world's largest radio telescopes. The most surprising thing about FRBs is that they appear to come from fantastically large distances: i.e. billions of light-years away. Until now, these large distances have only been indirectly inferred from the fact that FRBs arrive later at longer radio wavelengths, which is the result of dispersion in the intervening cosmic material between the Earth and the source of the bursts. To definitively prove that FRBs originate deep in extragalactic space, and to start understanding what produces them, we need to identify their host galaxies. Such efforts have been hampered by the fact that the sky positions of FRBs are relatively poorly known: we can detect them, but it is not obvious which of the many galaxies visible in the same general direction might be responsible for hosting the source of the burst. In other words, it is impossible to unambiguously identify the host galaxy from which the emission originates, unless you can pinpoint the direction from which the bursts are coming. Last year, my colleagues and I used one of the world's largest radio telescopes, the 305-m dish in Arecibo, Puerto Rico, to discov-
↓ Figure 1. The 305-m William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico and its suspended support platform of radio receivers, shown amid a starry night. From space, millisecond-duration radio flashes are racing towards the dish, where they are reflected and detected by the radio receivers. Such radio signals are called Fast Radio Bursts (FRBs).
“FRB121102 has lived up to this promise and has led to a major breakthrough.”
← Figure 2. The dishes of the Karl G. Jansky Very Large Array are seen making the first-ever precision localisation of a Fast Radio Burst, and thereby pointing the way to the host galaxy of FRB121102.
“Fast radio bursts remain one of the most exciting but perplexing mysteries in modern astrophysics.”
← Figure 3. The globally distributed dishes of the European VLBI Network are linked with each other and the 305-m William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico. Together they have localised FRB121102's exact position within its host galaxy.
er the first FRB source known to produce repeat bursts: FRB121102 (Fig. 1), whose name comes from the date on which the first burst was detected, i.e. 2 November 2012. This was an important discovery for two reasons. First, the fact that it is possible to detect multiple bursts from the same source means that whatever produces the bursts survives the energetic events that create the emission. This rules out physical models in which FRBs are produced by cataclysmic explosions like the collision of two neutron stars, or a neutron star collapsing into a black hole. Secondly, with repeating bursts, it is possible to intensely monitor this sky position with other telescopes that are better optimised for localisation purposes, in the hope of directly identifying the galaxy from which the bursts are coming. A repeating source thus provides a huge practical advantage in terms of enabling deeper follow-up
studies. In the last few months, FRB121102 has lived up to this promise and has led to a major breakthrough in our understanding of the FRB phenomenon. We presented these results in three recent papers in Nature and Astrophysical Journal Letters. After nearly 100 hours of observations with the Very Large Array, a radio interferometer in New Mexico, we captured several bursts from FRB121102 and localised their sky position with 100 milli-arcsecond precision, an angular scale of roughly 1/36,000 of a degree (Fig. 2). The chance that FRB121102 is producing a detectable burst at any particular time is at best one in a million, so a big part of the challenge was patience and acquiring the necessary observing time. In close to 100 hours of observing with the Very Large Array, the bursts were visible for a total of only 50 ms. Using the Very Large Array we not only localised the origin of the bursts, we also showed that at the same sky position there are persistent sources of both radio and optical emission. In principle, these co-located astronomical sources could represent emission from the host galaxy of FRB121102, but more work was needed to test that hypothesis. As a next step, we used radio telescopes spread across the globe to further refine the position. If FRB121102 originated deep in extragalactic space in a certain galaxy, then we wanted to know where inside that galaxy it was situated, e.g. its position with respect to the centre of that host galaxy. Using the European Very Long Baseline Interferometry Network, in conjunction with the Arecibo Telescope (Fig. 3), we were able to detect a few more bursts and to localise their sky positions to within 10 milli-arcseconds - a factor of 10 more precise than
with the Very Large Array. This represents an angular precision of a few millionths of a degree - comparable to the width of a human hair as seen from a distance of 2 km! With this level of precision, it became apparent that the bursts originated from the exact location of the persistent radio source that we had found with the Very Large Array, but also that there was a very slight offset from the centre of the associated optical source. We needed a large optical telescope to find the last piece of the puzzle. Using the 8-meter Gemini, one of the world's largest optical telescopes, we targeted the same sky position in order to measure the spectrum of the light (Fig. 4). With an optical spectrum, it is possible to see how ‘redshifted’ the light is due to the expansion of the Universe, and thereby to determine the distance to the source. These observations confirmed that the optical source is indeed a galaxy, but it was a surprisingly puny one: a
so-called dwarf galaxy with roughly 10,000 times fewer stars than our Milky Way. Using the redshift of the light, we calculated that the host galaxy of FRB121102 is close to 3 billion light-years from Earth. This implies that the bursts from FRB121102 release roughly one million times more energy per unit time than the Sun does, and that they started their journey to Earth back when the most primitive forms of life were first developing on our planet. So, we now know that the source of FRB121102's bursts is fantastically far away and energetic. A critical question remains: what is the physical nature of the source? One hypothesis is that it is an extremely energetic neutron star. In fact, the type of galaxy that is hosting FRB121102 has previously been linked to long-duration gamma-ray bursts and super-luminous supernovae. These events may be related to the collapse of ultra-massive and/or very rapidly spinning stars, which have been hypothesised to leave behind a super-magnetic and fast-spinning neutron star (a
← Figure 4. The 8-meter Gemini telescope on Mauna Kea is seen measuring the redshift of the host galaxy from where the FRB121102 bursts originate, revealing that the bursts have travelled 3 billion light years to reach Earth.
→ References 1. L.G. Spitler et al., Nature 531, 202-205 (2016) 2. S. Chatterjee et al., Nature 541, 58-61 (2017) 3. B. Marcote et al., ApJL 834, L8 (2017) 4. S.P. Tendulkar et al., ApJL 834, L7 (2017)
so-called ‘millisecond magnetar’) after their death. Such a source would have to be roughly a million times more energetic than even the most extreme neutron stars we see in our own Milky Way galaxy. Another possibility is that the bursts originate from a massive black hole that is eating material from its surroundings and producing a powerful jet of radiation. In such a scenario, however, it becomes challenging to understand
We can protect our brain against Alzheimer’s disease

NIELS REINDERS is PhD student at the Netherlands Institute for Neuroscience (NIN).

Alzheimer’s disease (AD) is a progressive disorder starting with memory loss, followed by decline of most other cognitive functions. It affects over 250,000 people in the Netherlands and 46 million people worldwide. Thus far, no effective treatment exists. To work towards such a treatment, our group at the Netherlands Institute for Neuroscience (NIN) studies the cause of the detrimental decline of cognitive functions in AD. We have recently published a paper in which we reveal a crucial link in how AD leads to memory loss and decline of cognitive functions. By removing this link, we managed to protect mice and their brain cells against AD.
It is thought that AD symptoms arise because of the accumulation of the protein amyloid-β in the brain. Amyloid-β damages synapses, which is dangerous because neurons (brain cells) need synapses to communicate with each other. This communication facilitates all our cognitive functions, including memory. While searching for the mechanism by which amyloid-β damages synapses, we found that it requires the protein GluA3. Without GluA3, synapses are unaffected by amyloid-β. We also found that mice with high levels of amyloid-β in their brains did not show memory loss or early death when they lacked GluA3. By removing GluA3, we could protect synapses against
why the millisecond-long bursts from FRB121102 would be so short in duration. At this point we are really not sure, and deeper studies are needed to test the various theoretical scenarios now appearing in the literature to explain FRB121102's origin. It is also possible that the origin of the emission is a type of astronomical object that has no known analogue, and is unpredicted by theoreticians.
amyloid-β and at the same time prevent AD symptoms like impaired memory (see Fig.). We therefore concluded that synapses without GluA3 have properties that protect them from the effects of amyloid-β. Adding this property to the synapses of AD patients, or reducing the amount of GluA3 in their synapses, could then be a treatment for AD. Interestingly, research suggests that by learning new things, we can also reduce the amount of GluA3 in our neurons. This could be why individuals who learn frequently have a later onset and slower progression of AD. A pill that reduces GluA3 may still be far away… but learning by doing new things frequently is something we can all do today! Ω
“By removing GluA3, we could prevent Alzheimer’s symptoms.”
Through continued observations of this and other FRB sources we hope to solve this puzzle in the coming years. FRBs remain one of the most exciting but perplexing mysteries in modern astrophysics. Ω
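The wavelength-dependent arrival delay that betrays an FRB's great distance follows a simple, well-known relation: the extra travel time scales with the dispersion measure (DM, the column density of free electrons along the line of sight) and with the inverse square of the observing frequency. A minimal sketch, using the standard dispersion constant and an approximate DM for FRB121102 (both the DM value and the band edges are illustrative assumptions):

```python
# Dispersion delay of a radio burst: lower frequencies arrive later.
# delay(ms) ≈ 4.149 * DM * (f_lo^-2 - f_hi^-2), DM in pc cm^-3, f in GHz.
K_DM_MS = 4.149  # dispersion constant in ms GHz^2 cm^3 pc^-1

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay of f_lo relative to f_hi, in milliseconds."""
    return K_DM_MS * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# FRB121102 has a dispersion measure of roughly 560 pc cm^-3.
delay = dispersion_delay_ms(560, 1.2, 1.6)
print(f"Delay across a 1.2-1.6 GHz band: {delay:.0f} ms")
```

Run as written, this gives a delay of roughly 0.7 s across the band, hundreds of times longer than the millisecond burst itself, which is why the sweep in arrival time is so easy to measure and to turn into a distance estimate.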
↓ Figure Section of three neurons, with synapses indicated by arrows. Top: a normal neuron. Middle: a neuron with fewer synapses due to high levels of amyloid-β. Bottom: a neuron that also has high levels of amyloid-β, but is unaffected because it lacks the protein GluA3. The neuron is green for visualisation purposes. The white scale bar is 0.005 mm (5 µm), indicating how small synapses are.
→ Reference N.R. Reinders et al., Amyloid-beta effects on synapses and memory require AMPA receptor subunit GluA3. Proc. Natl. Acad. Sci. U. S. A. 113, E6526-E6534 (2016). DOI: 10.1073/pnas.1614249113
The first experiment I ever did was… … an attempt to change part of the functionality of the old-fashioned radio that my parents gave me when I was about 12 years old. The amplifier was the really cool thing; it had big capacitors and impressive tubes. After switching the radio off, the capacitors would remain charged, and one ‘experiment’ was to discharge them with a screwdriver: big sparks! I figured out a little bit about the functionality, so that I could also use the amplifier for other inputs. That worked, which made me really proud.
My constant source of inspiration is... ... two-fold. Since my research focuses on atomic-scale phenomena, I give myself the assignment to simply understand why a particular atom ‘wants’ to do ‘this’ rather than ‘that’. The atom may seem to do something intelligent but it isn’t smart really, so I should be able to predict what it’s going to do and why, right? Secondly, my conviction is that by figuring out enough about the behaviour of matter on the atomic and molecular scale, we can develop smart and green technologies that will help us change the way we produce and use resources and energy. Our society is in desperate need of such breakthroughs in order to become truly sustainable.
One book that I recommend to all young scientists is… … Ontsporing (Derailment), in which Diederik Stapel describes how he got entangled in one of the biggest cases of scientific fraud, and the books by Richard Feynman. His books, such as The Pleasure of Finding Things Out and Surely You’re Joking, Mr. Feynman!, provide a good lesson in the
JOOST FRENKEN is director of the Advanced Research Center for Nanolithography (ARCNL), which focuses on fundamental physics research in the context of technologies for nanolithography, primarily for the semiconductor industry.
deep, almost physical urge to understand things, in combination with a delicious, unconventional style. Read from both authors and learn what defines a good scientific attitude!
economics. Imagine the pure joy of recognising the essence of a complex phenomenon, of capturing that in elegant mathematics and of then even using this to make predictions and to design machines and materials with which we gain control over our world. It is all the more thrilling that this ‘physics approach’ can be applied just about ‘anywhere’.
I would like to share the following with the Science Community in Amsterdam… ... my enthusiasm!
If I headed the Ministry of Science the first thing I would change is… ... how the science budget is distributed. I would endow the universities with higher budgets, for which I would scrape away funds from our national science foundation, NWO. In fact, this would be the opposite of what the ministry actually did several years ago. In my view, the scientific community is collectively wasting part of its time and energy on writing and criticising research proposals. There’s a lot of good in this circus, but it has gone too far, which makes it inefficient.
If I had to switch roles with a famous person for one day, I would choose to be… ... an inspiring leader in industry, such as Elon Musk. He clearly shares with me a sense of grand and maybe naive inspiration. And he inspires me with his relentless and even stubborn efforts to develop new technologies that are commercially viable and improve our world.
I am most creative when… ... I take really long showers!
If I could choose my field of study and university once again I would… ... again choose physics. In my arrogance, I think that this field of science is great in itself, and it interfaces maximally with other fields, ranging from biology to cosmology and from chemistry to
PeelPioneers: a Bachelor’s project turned into business SYTZE VAN STEMPVOORT holds a Bachelor’s degree from the Betagamma programme at the UvA, majoring in Chemistry, and is currently enrolled in the Master’s programme Science, Business and Innovation (SBI) at the VU.
On Monday 10 October 2016 I found myself on a stage, handing over a small piece of orange hand soap made from orange peel to Princess Laurentien of the Netherlands. How did that come about? Well, it all started when I did my Bachelor’s project while studying Chemistry at the UvA. At that time I worked on the Orange Peel Exploitation Company (OPEC, wink) research project at the Green Chemistry Centre of Excellence at the University of York. The research team was developing a novel process for extracting valuable compounds from citrus peel, based on microwave extraction (thus reducing
the amount of solvents needed). The research aimed to get the extraction process working in sunny places where citrus tends to grow (e.g., Brazil, Florida & South Africa). Upon completion of my Bachelor’s project, I co-authored two academic publications on citrus peel valorisation [1, 2], but still had no clue what I wanted to do after I obtained my Bachelor’s degree. I decided to take a gap year, during which I worked as a consultant and … fell in love with orange peel (again). I was astonished to find out that the Netherlands – definitely not a sunny place – throws away 250 million kg of citrus peel every year! More than half of this comes from organisations (juicing industry, supermarkets and restaurants) and it is currently either incinerated or fermented. Neither option is favourable for further processing, because of the high water content, acidity and the presence of
limonene (a terpene that disturbs the fermentation process). At the same time, I knew that citrus peel is a source of essential oils (fragrance), fibre (mostly cellulose) and pectin (a natural gelling agent used in the food industry). So it came to be that, in 2016, I started my own company: PeelPioneers – the first biorefinery for citrus peel in a non-citrus-producing country. Currently our team numbers six pioneers, and in March and April 2017 we are starting a small-scale pilot plant. We will process ~200 kg of peel per day and focus on extracting the essential oils using a cold, mostly mechanical, extraction process to prevent evaporation of precious volatiles from the oil. From the ‘leftovers’ of our process we’ve made a bar of hand soap, and that is the soap that I presented to Princess Laurentien of the Netherlands on an evening dedicated to sustainability in Amsterdam. Ω
↓ Figure Filling the extraction vessel with orange peels
→ References 1. G. Paggiola, S. van Stempvoort, et al. 686-698 (2016). 2. J. Bustamante, S. van Stempvoort, et al. 598-605 (2016).
Measuring Amsterdam’s pulse

ILSE VOSKAMP is PhD student affiliated with the Amsterdam Institute for Advanced Metropolitan Solutions and Wageningen University & Research.

Urban Pulse A sound understanding of a city’s resource flows is essential for sustainable management of these resources. The concept of urban metabolism, a metaphor of a city as a living organism or ecosystem, is used to describe and analyse urban resource flows. One established method for quantitative assessment of urban metabolism is the material flow analysis (MFA) method described by Eurostat, the statistical office of the European Union. Recently, we proposed modifications to this method to improve its usefulness for urban policymakers and planners striving for sustainable urban resource management. The research was conducted as part of the Urban Pulse research project, run under the supervision of the Amsterdam Institute for Advanced Metropolitan Solutions (AMS).
The metabolism of Amsterdam We analysed the resource flows of the city of Amsterdam for the year 2012, using the original and the modified Eurostat MFA method. Our results showed that water flows (see Figure) and resource flows that pass through the city without being used there, fossil fuels in particular, dominate Amsterdam’s metabolism. These ‘throughput’ flows are typical for a port city. In addition, our analysis revealed that Amsterdam’s environmental pressure has decreased since 1998 in terms of per-capita waste generation and drinking water consumption. Comparison A comparison between the results
of the two analyses revealed three major benefits of the modified Eurostat MFA method. First, it presents a more complete image of the flows in the urban metabolism due to a focus on resource flows rather than material flows. Second, it presents findings in more detail, which allows for an in-depth understanding of the metabolism and facilitates comparison between cities. Third, differentiating between flows that are associated with the city’s resource consumption and (trade-related) resources that
only pass through the city yields a much-improved insight into the nature of a city’s imports, exports and stock. Ω
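The distinction the modified method draws between throughput and consumption can be sketched as a simple flow balance. All figures below are invented for illustration; they are not Amsterdam's actual numbers, and a real MFA tracks many more flow categories.

```python
# Toy material flow analysis (MFA) for a port city, distinguishing
# throughput (trade flows that only pass through) from flows that are
# actually consumed in the city. Figures invented, in kilotonnes/year.
flows = [
    # (name, import_kt, export_kt, passes_through_only)
    ("fossil fuels", 30000, 27000, True),
    ("drinking water", 900, 0, False),
    ("food", 1200, 200, False),
    ("building materials", 2500, 100, False),
]

throughput = sum(imp for _, imp, _, transit in flows if transit)
consumed_imports = sum(imp for _, imp, _, transit in flows if not transit)
exports = sum(exp for _, _, exp, transit in flows if not transit)
stock_change = consumed_imports - exports  # what accumulates in the city

print(f"Throughput (not used in the city): {throughput} kt")
print(f"Net addition to urban stock:       {stock_change} kt")
```

Separating the large transit flows first, as in this sketch, is what keeps a port city's trade from inflating the apparent resource consumption of its inhabitants.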
→ Reference I.M. Voskamp et al., Journal of Industrial Ecology (2016). DOI: 10.1111/jiec.12461 ↓ Figure Overview of the water flows (in kt) that are part of Amsterdam’s metabolism.
Palaeontologist at Naturalis Biodiversity Center (Leiden) and former PhD researcher at the Vrije Universiteit Amsterdam.
Biologist at ARTIS Amsterdam Royal Zoo.
“I graduated at the Faculty of Earth Sciences in 1998, with a curriculum focusing on vertebrate palaeontology [the subfield that seeks to discover the behaviour, reproduction and appearance of extinct animals with a skeleton]. The last months of my Master’s I spent in an internship at the Maastricht Natural History Museum, where I subsequently landed a position as a curator. The excellent collection of mosasaurs (extinct ‘sea monsters’ from the Age of Dinosaurs) caught my attention. Time travel is still a long way off in many sciences, but palaeontology makes it possible to go back in time, at least to some extent. My work in Maastricht resulted in a PhD research project at the Vrije Universiteit. I defended my thesis in 2006, and I have worked as a guest researcher at the VU ever since. Next to working in the isotope laboratory on fossils at the Maastricht Museum, I co-supervised student projects on a regular basis. In 2013, I had the opportunity to start as a palaeontologist at the national museum Naturalis in Leiden. Along with the research duties, I am responsible for the content of the new dinosaur gallery. The centrepiece of the new gallery is ‘Trix’, the Tyrannosaurus rex fossil that we excavated in 2013. I will devote a lot of my research effort to this
unique fossil in the foreseeable future. What I like most about my work is that my research on fossils in a museum setting also allows for lot of outreach activities directly associated with my research. I give public lectures, do tours and contribute to television and radio programmes. And it is very rewarding to receive emails from, e.g., 6-year olds who want to become a palaeontologist! That makes my work even more meaningful and enjoyable.” Ω
Insider’s advice Advice for a ‘similar’ career path? I find ‘similar’ (or even worse: ‘the same’) a boring concept, as it implies someone else has already done something similar. So, stating the obvious: never try to do the same; aim instead to find – or carve – your own niche, your own specialty.
“I graduated in Spider Ecology with a Master’s in education at the Vrije Universiteit Amsterdam. At the time, jobs were scarce, so I tried to raise funds for a project on the role of spiders in the rice ecosystem of northern Sulawesi. My first attempt was unsuccessful, but when I redirected my research plan towards the role of egrets, funds started flowing in. The resulting one-and-a-half years of research experience in Indonesia helped me find work as a biologist back in Holland, on many different short- and long-term projects. When I applied for a job at ARTIS, my all-round work experience was an asset. My first job at ARTIS, in 1989, was as head of the Education Department. Since then I have held various positions: coordinator of education, webmaster, and senior advisor for communication & marketing. At the moment, I work as a biologist in the Education Department and give lectures at HOVO Amsterdam, the Vrije Universiteit Amsterdam’s education institute for senior citizens. I feel privileged to start the day with a stroll through the beautiful zoo, full of mature trees, sweet-smelling flowers, singing birds, beautiful statues, historical monuments and lots of native and exotic animals. As a biologist, I enjoy observing and studying all creatures great and small: the suckling of an elephant calf, the mating call of a tiny frog in our Butterfly Pavilion, the flowering of a Chinese witch hazel in the middle of winter. Sharing these observations and unusual facts with my colleagues, our highly valued volunteers, the general public and companies makes my work very interesting. ARTIS is a very dynamic working environment in which I keep learning new things, e.g., discovering invisible life forms in Micropia, or at the Groote Museum, ARTIS’ monumental museum building.” Ω
Insider’s advice Although my job as an ARTIS biologist fits me like a glove, I never aimed for this kind of job. When I graduated, I hoped to continue in spider research for the rest of my life. The lack of funding, which I first considered an obstacle in my career path, opened up new opportunities in the end. My advice: don’t fixate on one ideal job position. It took me a while to trust my instincts, to develop my gut feeling and to go with the flow. For more information about job opportunities and internships at ARTIS, please check www.artis.nl.
Crossword Amsterdam Science issue #5 What word are we looking for vertically?
01. artificially structured materials used to control and manipulate light, sound and other physical phenomena
02. essential nutrients for the human body
03. measuring device used to quantify human movement patterns
04. systematic investigation to establish facts or principles or to collect information on a subject
05. species that lived on the island Mauritius
06. nerve cell
07. an institution for higher education and academic research
08. force of attraction that moves or tends to move bodies towards the centre of a celestial body, such as the earth or moon
09. smallest planet in the Solar System
10. acceleration of a chemical reaction induced by the presence of a material that is chemically unchanged at the end of the reaction
11. type of cancer treatment that uses one or more anti-cancer drugs
12. visual representation of information or data, such as a chart or a map
13. a prime number that differs by 2 from another prime number
14. optical instrument for making distant objects appear larger and brighter by use of a combination of lenses
The solution to the puzzle in issue #4 of Amsterdam Science magazine was: Lisa bought gloves for € 18.
Cartoon AI attempts to mimic human behavior
Congratulations to the winners below, who sent the correct answer and won an Amsterdam Science t-shirt: Danielle Versendaal, Ceriel Jacobs, Anniek Elias, Arnoud Visser, Hiên Lê, Marlinda Rijksen, Marie Beth van Egmond, Raymond van der Werff, Eveline Hollink, Lovro Trgovec-Greif.
About the magazine
RENÉE VAN AMERONGEN Assistant professor Molecular Cytology, UvA
ANNIKE BEKIUS PhD researcher, research institute MOVE, Faculty of Human Movement Science, VU
SARAH BRANDS Master’s student Astronomy & Astrophysics, UvA
JOP BRIËT Senior researcher, Algorithms and Complexity group, CWI
Amsterdam Science gives Master’s students, PhD and postdoc researchers as well as staff a platform for communicating their latest and most interesting findings to a broad audience. This is an opportunity to show each other and the rest of the world the enormous creativity, quality, diversity and enthusiasm that characterise the Amsterdam science community.
Editors in chief
HAMIDEH AFSARMANESH Professor of Federated Collaborative Networks, UvA
Amsterdam Science covers all research areas being pursued in Amsterdam: mathematics, chemistry, astronomy, physics, biological and biomedical sciences, health and neuroscience, ecology, earth and environmental sciences, forensic science, computer science, logic and cognitive sciences.
SABINE SPIJKER Professor of Molecular Neuroscience, VU
MARK GOLDEN Professor of Condensed Matter Physics, UvA
MICHEL HARING Professor of Plant Physiology, UvA
MARIA CONSTANTIN PhD researcher Plant Pathology, UvA
ELINE VAN DILLEN Advisor Science Communication, Faculty of Science, UvA
NITISH GOVINDARAJAN PhD researcher Computational Chemistry, Faculty of Science, UvA
MUSTAFA HAMADA PhD researcher Neurophysiology & Biophysics, Netherlands Institute for Neuroscience (NIN)
JANS HENKE Master’s student Physics, UvA
LAURA JANSSEN Advisor Science Communication, Faculty of Science, VU
CÉLINE KOSTER PhD researcher Clinical Genetics, AMC
FRANCESCO MUTTI Assistant professor Biocatalysis, UvA
JOSHUA OBERMAYER PhD researcher, Center for Neurogenomics and Cognitive Research (CNCR), VU
RENSKE ONSTEIN Postdoctoral researcher, Institute for Biodiversity and Ecosystem Dynamics, Faculty of Science, UvA
NOUSHINE SHAHIDZADEH Associate professor Soft Matter, Faculty of Science, UvA
SASKIA TIMMER Project manager Research, Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute)
TED VELDKAMP PhD researcher, Institute for Environmental Studies (IVM), VU
HELEEN VERLINDE Amsterdam Science Magazine manager
ESTHER VISSER PhD researcher, Center for Neurogenomics and Cognitive Research (CNCR), VU
Contact Are you eager to share the exciting research you have published with your colleagues? Are there developments in your field that we all should know about? And are you conducting your research at one of the Amsterdam universities or science institutes?
Have a look at www.amsterdamscience.org for the remit of the magazine and upload your submission for consideration for a future issue of Amsterdam Science.
Facebook: https://www.facebook.com/amsterdamscience/
Email: firstname.lastname@example.org
Website: www.amsterdamscience.org
Twitter: https://twitter.com/AmsSciMag
It's hard to have missed it. Erik Verlinde of the Institute of Physics at the University of Amsterdam has proposed a new theory of gravity which might be a solution to one of the longstanding problems in physics: on the scale of galaxies, Newton’s theory of gravity doesn’t explain how we see matter moving around. The conventional explanation involves matter that we cannot see – dark matter – providing an additional gravitational force that pulls on the matter that we do see. Verlinde claims dark matter does not exist and that, instead, gravity itself needs re-examination. The gravity of Newton and Einstein does not need a mere tweak; rather – as Verlinde's paper puts it – "our ‘macroscopic’ notions of spacetime and gravity are emergent from an underlying microscopic description in which they have no a priori meaning." Verlinde's microscopic bottom line is based on the thermodynamics of quantum entanglement, and how this is affected by the presence of matter.
→ References
Erik P. Verlinde, https://arxiv.org/abs/1611.02269
Margot M. Brouwer et al., https://arxiv.org/abs/1612.03034
Aurélien Hees, Benoit Famaey and Gianfranco Bertone, https://arxiv.org/pdf/1702.04358
Verlinde is bold enough to propose formulae that astronomers have immediately taken to test against observational data. His prediction for how much extra gravitational force (or, in the 'old' picture, apparent dark matter) galaxies have compared to their observed mass survives tests involving experimental data on weak gravitational lensing, in which the bending of light from distant stars around galaxies enables the determination of how gravity is distributed. Others – interestingly, including Verlinde's IoP colleague Gianfranco Bertone – argue that Verlinde's new approach only leads to marginally acceptable fits to some of the observed galaxy rotation data, and that his formula is even ruled out by observations from as close by as our own solar system. Verlinde emphasises in turn that his theory is inapplicable in this situation. To say that the jury is still out might be going too far, as Verlinde’s theory is still under development. The global media attention surrounding this topic means one thing is certain: we haven't heard the last of the end of gravity.
A platform for researchers to showcase the enormous creativity, quality, diversity and enthusiasm that characterise the scientific community of Amsterdam.
Published on Apr 20, 2017