EU Research Autumn 2018
21st Century Europe
The future of research on the continent
Smarter Medical Devices
Processing the latest software
Getting to the root of the problem: a focus on health research Disseminating the latest research under FP7 and Horizon 2020 Follow EU Research on www.twitter.com/EU_RESEARCH
Editor’s Note

Healthcare provision is going to change, and that’s because it has to. With a growing population living longer and healthcare institutions under strain, there are efforts across the globe to innovate in order to cope.
This innovation can be in technology that makes workflow more efficient and accurate for doctors, or technology that patients can use at home. Diagnosis is certainly something that machines can do: with AI and access to patient and medical information, a computer could potentially work out your ailment accurately and offer ‘an informed opinion’ about ideal care needs or options. Combine this with wearable technology that can monitor your health in real time or over a period, and you can understand how large amounts of personal health data can be used for intelligent analysis. Machine learning is now at a stage where it can predict medical events for people, such as whether they will be hospitalised and how long they will stay in hospital, according to a Google Brain team researcher.
As a seasoned editor and journalist, Richard Forsyth has been reporting on numerous aspects of European scientific research for over 10 years. He has written for many titles including ERCIM’s publication, CSP Today, Sustainable Development magazine, eStrategies magazine and remains a prolific contributor to the UK business press. He also works in Public Relations for businesses to help them communicate their services effectively to industry and consumers.
The major opportunities are in big data sets that span populations. When you plug AI into large data sets, such as a country’s health data, and ask it to analyse them in various ways, you get a feel for the real power such AI could have in finding patterns that might have been missed by people. Of course, the problem with this is that patient data is personal, so sharing it, comparing it and analysing it is awkward territory for healthcare professionals and for a health system that has respected patient privacy as a medical standard since the Hippocratic Oath. When you have digital networks comprising personal information there is always a legitimate fear of data breach and misuse, but change is inevitable, and there must be a serious weighing up of the medical and treatment advantages of what can now be achieved against the privacy concerns, or at least a way to ensure data is used ethically.
Hope you enjoy the issue.
Richard Forsyth Editor
Contents

40 Neuronal Mechanisms
4 Research News
EU Research takes a look at the latest news in scientific research, highlighting new discoveries, major breakthroughs and new areas of investigation
We spoke to Professor Kenneth Eaton about the work of the Added Value for Oral Care (ADVOCATE) project which is investigating how healthcare systems can be modernised in line with today’s demands.
We spoke to Professor Andy Cooper and Professor Graeme Day about the work of the RobOT project in developing tools to predict the properties of molecular crystals, which could open up new possibilities in the development of functional materials.
16 ApoptoMDS Researchers in the ApoptoMDS project are exploring a new hypothesis around the progression of bone marrow failure to leukemia, as Dr Miriam Erlacher explains.
20 Molecular Psychology Professor Christian Montag brings together different forms of data, including information on molecular mechanisms, brain structure and individual behaviour, to build a deeper picture of why each of us is the person that we are.
Professor Nick Ramsey tells us about his team’s work in developing a brain-computer interface designed to restore function and help paralysed people communicate.
Standardisation and collaboration in the electronic medical device industry is necessary so that innovations can be commercially viable. This is the focus of work in the InForMed project and projects within the Health.E Lighthouse initiative, such as ULIMPIA and POSITION.
Since the discovery of the X-ray in 1895, we have benefited from methods like Magnetic Resonance Imaging (MRI) and ultrasound. We’re now seeing a new wave of innovations that promise to dramatically advance medical scanning. By Richard Forsyth
32 Optobiology in Infection
Optogenetics will shed new light on intracellular pathogens, giving researchers new insights into pathogen-host interactions, as Dr Nishith Gupta explains.
Silke Krol, Ph.D tells us about the DiaChemo project’s work in developing a point-of-care device that will enable clinicians to monitor the concentration of drugs in blood during treatment.
36 PLASMOfab Researchers in the PLASMOfab project are leveraging plasmonics to co-develop extraordinary photonic components and electronics in a single manufacturing process, as Dr Dimitris Tsiokos explains.
We spoke to Professor Roland Wolf and Dr Colin Henderson about their work in developing a next-generation platform to help scientists monitor the progression of neurodegenerative disease and identify effective therapies.
We spoke to Dr Alexander Groh about his research into connections between the cortex and sub-cortical areas, which could shed new light on the relationship between the brain and behaviour.
42 Sound Perception
We spoke to Dr Peter Schneider about his research into the relationship between auditory skills and auditory dysfunctions.
45 Schroedinger Fellows Dr Barbara Zimmermann tells us about how the programme helps to support Austrian science and strengthen the country’s research base.
46 ENHANCE The ENHANCE project aims to foster stronger research links with international organisations and to help boost the scientific and academic profile of the University of Agronomic Sciences and Veterinary Medicine (USAMV) of Bucharest.
48 COQHOTT Researchers in the CoqHoTT project are revisiting the theoretical foundations of Coq, aiming to improve and extend the system for today’s mathematicians and computer scientists, as Dr Nicolas Tabareau explains.
We spoke to Dr Georgios Keramidas about the work of the LPGPU2 project in developing a tool to help developers optimise the software for GPUs, opening up a path towards improved power efficiency.
BIGSSS-departs (doctoral education in partnerships) is an innovative EU-funded programme designed to support early-stage researchers while also widening their perspective on the social sciences.
56 Arts in Society Researchers at the Leiden University Centre for the Arts in Society are exploring the interaction between the arts and society, getting to the roots of cultural production, as Professor Anthonya Visser explains.
Professor Myron Peck of the CERES project tells us about their work in investigating the physical changes that will occur as a result of climate change, and how CERES can help fisheries and aquaculture companies adapt and thrive.
We spoke to Dr Dror Noy about the project’s work in designing protein cofactor complexes with photosystem functionality, which could point the way towards new bioreactors for fuel production.
62 Solar Energy
Whilst many consumers choose solar energy for environmental reasons, it is practical efficiency and lower costs that can drive wider uptake, and that’s where the power of research and innovation will come into play. By Richard Forsyth
Researchers in the Chromtisol project are utilising titanium dioxide nanotubes to develop a new physical concept of a solar cell which could help improve solar-to-electricity conversion efficiency, as Dr. Jan M. Macak explains.
Dr Narasimha Rao tells us about the work of the DecentLivingEnergy project in developing a body of knowledge to help balance the goal of eradicating poverty with climate change mitigation.
70 WOODY PIRATS
Dr Judy Simon and her team aim to build a deeper understanding of the basic mechanisms behind plant interactions with regard to nitrogen acquisition and its internal allocation, research which could hold important implications for forest management.
72 Closing the Loop
A more effective method of recovering rubber from used tyres could help close the loop between recycling and production, as Ir. Hans van Hoek and Associate Professor Wilma Dierkes explain.
The Greenrail sleeper makes use of recycled materials and could also help turn the railways into a source of clean energy, as the company’s founder and CEO Giovanni De Lisi explains.
Alongside artistic and cultural concerns, sociological considerations should be taken into account in the design of buildings, argues Dr Silke Steets.
We spoke to Francesco Ferrero and Cindy Guerlain about the SUCCESS project’s work in investigating an alternative approach to managing the construction supply chain.
Professor Vincent Crawford explains how his project BESTDECISION is advancing our understanding of central questions about economic behaviour, the design of institutions, and the governance of relationships.
83 ETHICS OF POWER
We spoke to PD Dr Gotlind Ulshöfer about her work in developing an ethics of power for the digital age.
The ESSOG project aims to exploit this huge body of information, helping to build a more detailed picture of our galaxy, as Professor James Binney explains.
Mr Daniel González speaks about the work of the project in designing and developing a new Ethernet transceiver device, work which will enhance European competitiveness in what is a fast-moving area.
EDITORIAL Managing Editor Richard Forsyth firstname.lastname@example.org Deputy Editor Patrick Truss email@example.com Deputy Editor Richard Davey firstname.lastname@example.org Science Writer Holly Cave www.hollycave.co.uk Acquisitions Editor Elizabeth Sparks email@example.com PRODUCTION Production Manager Jenny O’Neill firstname.lastname@example.org Production Assistant Tim Smith email@example.com Art Director Daniel Hall firstname.lastname@example.org Design Manager David Patten email@example.com Illustrator Martin Carr firstname.lastname@example.org PUBLISHING Managing Director Edward Taberner email@example.com Scientific Director Dr Peter Taberner firstname.lastname@example.org Office Manager Janis Beazley email@example.com Finance Manager Adrian Hawthorne firstname.lastname@example.org Account Manager Jane Tareen email@example.com
EU Research Blazon Publishing and Media Ltd 131 Lydney Road, Bristol, BS10 5JR, United Kingdom T: +44 (0)207 193 9820 F: +44 (0)117 9244 022 E: firstname.lastname@example.org www.euresearcher.com © Blazon Publishing June 2010
Cert no. TT-COC-2200
The EU Research team take a look at current events in the scientific news
Could machine learning mean the end of understanding in science? Scientists use AI to accurately predict the behaviour of a chaotic system over a period of time once thought impossible.

Much to the chagrin of summer party planners, weather is a notoriously chaotic system. Small changes in precipitation, temperature, humidity, wind speed or direction can balloon into an entirely new set of conditions within a few days. That’s why weather forecasts become unreliable more than about seven days into the future – and why picnics need backup plans. But what if we could understand a chaotic system well enough to predict how it would behave far into the future?

In January this year, scientists did just that. They used machine learning to accurately predict the outcome of a chaotic system over a much longer duration than had been thought possible. And the machine did it just by observing the system’s dynamics, without any knowledge of the underlying equations.

Most scientists would probably agree that prediction and understanding are not the same thing. The reason lies in the origin myth of physics – and arguably, that of modern science as a whole. For more than a millennium, the story goes, people used methods handed down by the Greco-Roman mathematician Ptolemy to predict how the planets moved across the sky. Ptolemy didn’t know anything about the theory of gravity, or even that the Sun was at the centre of the solar system. His methods involved arcane computations using circles within circles within circles. While they predicted planetary motion rather well, there was no understanding of why these methods worked, or why planets ought to follow such complicated rules. Then came Copernicus, Galileo, Kepler and Newton.
Newton discovered the fundamental differential equations that govern the motion of every planet. The same differential equations could be used to describe every planet in the solar system. This was clearly good, because now we understood why planets move. Solving differential equations turned out to be a more efficient way to predict planetary motion than Ptolemy’s algorithm. Perhaps more importantly, though, our trust in this method allowed us to discover new, unseen planets based on a unifying principle – the Law of Universal Gravitation – that works on rockets and falling apples and moons and galaxies.

The implications of machine intelligence, for the process of doing science and for the philosophy of science, could be immense. For example, in the face of increasingly flawless predictions, albeit obtained by methods that no human can understand, can we continue to deny that machines have better knowledge? If prediction is in fact the primary goal of science, how should we modify the scientific method, the algorithm that for centuries has allowed us to identify errors and correct them? If we give up on understanding, is there a point to pursuing scientific knowledge as we know it?

I don’t have the answers. But unless we can articulate why science is about more than the ability to make good predictions, scientists might soon find that a “trained AI could do their job”.
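The idea of predicting a chaotic system purely from observations can be illustrated with a toy sketch (this is not the reservoir-computing method the researchers used, and the feature set and parameters below are illustrative choices, not drawn from the original study). Here a least-squares model learns the update rule of the chaotic Lorenz system from an observed trajectory alone; the governing equations are used only to generate the “observations” and are never shown to the model, which is then rolled forward to forecast the system’s behaviour.

```python
import numpy as np

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Euler step of the Lorenz system (used only to create 'observations')."""
    x, y, z = state
    return np.array([x + dt * s * (y - x),
                     y + dt * (x * (r - z) - y),
                     z + dt * (x * y - b * z)])

def features(state):
    """Nonlinear features the learner regresses on; it never sees the equations."""
    x, y, z = state
    return np.array([x, y, z, x * y, x * z, 1.0])

# Generate an observed trajectory.
traj = [np.array([1.0, 1.0, 1.0])]
for _ in range(3000):
    traj.append(lorenz_step(traj[-1]))
traj = np.array(traj)

# Fit a linear map from features of the current state to the next state.
X = np.array([features(s) for s in traj[:-1]])
Y = traj[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned model forward from a point on the trajectory and compare.
state = traj[2000]
pred = [state]
for _ in range(300):
    state = features(state) @ W
    pred.append(state)
pred = np.array(pred)
err = np.max(np.abs(pred - traj[2000:2301]))
```

Because the chosen features happen to span the true update rule, the fitted model tracks the system closely over this horizon; with imperfect features, the hallmark of chaos reappears, and tiny fitting errors grow exponentially until the forecast diverges.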
Neurodegenerative Disease Breakthrough: Scientists create largest ever map of the brain

Scientists have created a map of more than one billion brain cell connections in a monumental study which could completely reform how brain diseases are treated. The study, from Edinburgh University, is the first to illustrate the extremely complicated structure of the brain and show how brain cell connections are organised. There are two reasons why the study is a massive scientific milestone. Firstly, the human brain is, in terms of what has been discovered so far, the most complex thing in the known universe; experts know more about the ever-expanding universe than they do about what is going on inside our heads.
Secondly, and more importantly, the study could prove extremely important in how brain impairments such as autism and schizophrenia are tackled.

The researchers used molecular imaging and artificial intelligence to look at synapses in mouse brains. Parts of the brain tissue were re-engineered so that they emit light, allowing the scientists to see the synapses in colour. Distinct synapses were tagged by colour, enabling the scientists to easily identify complex brain patterns. The lights lit up depending on what the mouse was doing – for example eating or running – all of which was recorded. Experts believe each synapse is linked to recalling a specific memory, allowing the brain to quickly locate what it is looking for, such as running or eating.

It is known that there are more than 86 billion neurons in the human brain, and more synapses – the junctions between brain cells – than the 250 billion stars in the Milky Way galaxy.

The result is the most complex map of its kind to date, and scientists believe it could help boost understanding of how memory and brain problems develop. Lead researcher Professor Seth Grant, of Edinburgh University, said of the study, published in the journal Neuron: “There are more synapses in a human brain than there are stars in the galaxy.

“The brain is the most complex object we know of and understanding its connections at this level is a major step forward in unravelling its mysteries.

“In creating the first map of this kind, we were struck by the diversity of synapses and the exquisite patterns that they form.

“This map opens a wealth of new avenues of research that should transform our understanding of behaviour and brain disease.”
Are you spending time with the right people for your health and happiness? Research suggests the company we keep could be a key factor influencing our physical and mental wellbeing. While many of us focus primarily on diet and exercise to achieve better health, science suggests that our wellbeing is also influenced by the company we keep. Researchers have found that certain health behaviours appear to be contagious and that our social networks – in person and online – can influence obesity, anxiety and overall happiness. A recent report found that a person’s exercise routine was strongly influenced by his or her social network. Dan Buettner, a National Geographic fellow and author, has studied the health habits of people who live in so-called blue zones – regions of the world where people live far longer than the average. He noted that positive friendships are a common theme in the blue zones. “Friends can exert a measurable and ongoing influence on your health behaviours in a way that a diet never can,” Buettner said. In Okinawa, Japan, a place where the average life expectancy for women is around 90, the oldest in the world, people form a kind of social network called a moai – a group of five friends who offer social, logistic, emotional and even financial support for a lifetime. “It’s a very powerful idea,” Buettner said. “Traditionally, their parents put them into moais when they are born, and they take a lifelong journey together.” In a moai, the group benefits when things go well, such as by sharing a bountiful crop, and the group’s families support one another when a child gets sick or someone dies. They also appear to influence one another’s lifelong health behaviours.
Buettner is working with federal and state health officials, including former US Surgeon General Vivek Murthy, to create moais in two dozen cities across the US. He recently spent time in Fort Worth, Texas, where several residents have formed walking moais – groups of people who meet regularly to walk and socialise. “We’re finding that in some of these cities, you can just put people together who want to change health behaviours and organise them around walking or a plant-based potluck,” he says. “We nudge them into hanging out together for 10 weeks. We have created moais that are now several years old, and they are still exerting a healthy influence on members’ lives.” The key to building a successful moai is to start with people who have similar interests, passions and values. The Blue Zone team tries to group people based on geography and work and family schedules to start. Then they ask them a series of questions to find common interests. Is your perfect vacation a cruise or a backpacking trip? Do you like rock ‘n’ roll or classical music? Do you subscribe to The New York Times or The Wall Street Journal? “You stack the deck in favour of a long-term relationship,” says Buettner. “I argue that the most powerful thing you can do to add healthy years is to curate your immediate social network,” says Buettner, who advises people to focus on three to five real-world friends rather than distant Facebook friends. “In general you want friends with whom you can have a meaningful conversation,” he said. “You can call them on a bad day and they will care. Your group of friends are better than any drug or anti-aging supplement, and will do more for you than just about anything.”
Organs on demand become a reality

Grow-your-own organs could be here within five years, as scientists prove they work in pigs. Grow-your-own organs could be available for desperately ill patients within five years, after scientists successfully transplanted bioengineered lungs into pigs for the first time. The team at the University of Texas Medical Branch (UTMB) showed that lab-grown organs were quickly accepted by the animals, and within just two weeks had developed a network of blood vessels. Previous attempts have failed within several hours of transplantation because the organs did not establish the complicated web of vessels needed for proper oxygen and blood flow. But the new experiments showed the lungs were still functioning two months after they were implanted, and the animals had 100 per cent oxygen saturation, meaning all their red blood cells were carrying oxygen through the body.

The method could help solve Europe’s organ donation crisis. There are around 7,000 people on the donor waiting list, of which 350 need a lung transplant for conditions like cystic fibrosis and emphysema, but one quarter will die before a suitable organ is found. “Our ultimate goal is to eventually provide new options for the many people awaiting a transplant,” said Joan Nichols, Professor of Internal Medicine at UTMB. “Somewhere down the line we may be able to take stem cells from a person and produce an organ that is their organ, tissue matched to them, with no immune suppression needed, that would function the way their own lung originally did.” Joaquin Cortiella, Director of the Lab of Tissue Engineering and Organ Regeneration at UTMB, added: “I would say in five to 10 years you will get someone with a bioengineered lung.”

To grow the organs in the lab, scientists took the lung of a separate pig and stripped it of its blood and cells using a special mix of sugar and detergent, so that only the ‘skeleton’ remained. They then created a cocktail of nutrients and lung cells from the pig which was to receive the transplant, and placed it in a tank with the organ skeleton. The lungs were grown for 30 days and implanted into four pigs, which were kept alive for 10 hours, two weeks, one month and two months to see how the blood vessels were developing. All of the pigs that received a bioengineered lung stayed healthy. As early as two weeks post-transplant, the bioengineered lung had established the network of blood vessels needed for the lung to survive. And there was no sign of too much fluid in the lungs, known as pulmonary oedema, which can cause respiratory failure.

The next steps will be to keep the animals alive for longer to allow the bioengineered lungs to fully mature, but the researchers say that they should be able to start trials in terminally ill patients within the next five to 10 years. “It has taken a lot of heart and 15 years of research to get us this far. Our team has done something incredible with a ridiculously small budget and an amazingly dedicated group of people,” added Prof Nichols. The research was published in the journal Science Translational Medicine.
Atmospheric carbon last year reached levels not seen in 800,000 years

Scientists warn that the greenhouse effect could continue for decades even if greenhouse gas emissions were halted. The concentration of carbon dioxide (CO2) in Earth’s atmosphere reached 405 parts per million (ppm) last year, a level not seen in 800,000 years, according to a new report. It was also the hottest year on record that did not feature the global weather pattern known as El Niño, which is driven by warmer than usual ocean waters in the Pacific Ocean, concludes the State of the Climate in 2017, the 28th edition of an annual compilation published by the National Oceanic and Atmospheric Administration (NOAA). Overall, 2017 ranked as the second or third warmest year, depending on which measure is used, since researchers began keeping robust records in the mid-1800s.
Even if humanity “stopped the greenhouse gases at their current concentrations today, the atmosphere would still continue to warm for the next couple of decades to maybe a century,” said Greg Johnson, an oceanographer at NOAA’s Pacific Marine Environmental Laboratory in Seattle, Washington, during a press call about the report.

The hefty document includes data compiled by 524 scientists working in 65 countries. A few highlights:

• Atmospheric concentrations of CO2 – the primary planetary warming gas – rose last year by 2.2 ppm over 2016. Similar levels were last reached at least 800,000 years ago, according to data obtained from air bubbles trapped in ancient ice cores.

• Atmospheric concentrations of methane and nitrous oxide – both potent warming gases – were the highest on record. Levels of methane increased in 2017 by 6.9 parts per billion (ppb), to 1,849.7 ppb, compared with 2016. Nitrous oxide levels increased by 0.9 ppb, to 329.8 ppb.

• Global precipitation in 2017 was above the long-term average. Russia had its second wettest year since 1900. Parts of Venezuela, Nigeria and India also experienced heavier than usual rainfall and flooding.

• Last year also marked the end of a worldwide coral bleaching event that lasted three years. Coral bleaching occurs when seawater warms, causing corals to release the algae living within their tissues, turning the coral white and sometimes resulting in the death of the coral. It was the longest documented bleaching event.

• Warmer temperatures contributed to wildfire outbreaks around the world. The United States suffered an extreme wildfire season that burned 4 million hectares and caused more than $18 billion in damages. The Amazon region experienced some 272,000 wildfires.

• In Alaska, record high permafrost temperatures were reported at five of six permafrost observatories. When thawed, permafrost releases CO2 and methane into the atmosphere and can contribute to global warming.

• Arctic sea ice took a hit. The extent of sea ice hit a 38-year low, and was 8% below the mean extent reported for 1981 to 2010. Spring snow cover in the Arctic, however, was greater than the 1981-2010 average, and the Greenland Ice Sheet recovered from a record low mass reported in 2016. 2017 was also the second warmest year on record for the Arctic.

• Many countries reported setting high-temperature records, including Argentina, Uruguay, Spain, Bulgaria and Mexico.
Living on an island can shrink humans On the island of Flores in Indonesia, scientists discover natural selection has favoured pygmy elephants, giant rats—and people of short stature. Living on an island can have strange effects. On Cyprus, hippos dwindled to the size of sea lions. On Flores in Indonesia, extinct elephants weighed no more than a large hog, but rats grew as big as cats. All are examples of the so-called island effect, which holds that when food and predators are scarce, big animals shrink and little ones grow. But no one was sure whether the same rule explains the most famous example of dwarfing on Flores, the odd extinct hominin called the hobbit, which lived 60,000 to 100,000 years ago and stood about a meter tall. Now, genetic evidence from modern pygmies on Flores—who are unrelated to the hobbit—confirms that humans, too, are subject to so-called island dwarfing. An international team reports this week in Science that Flores pygmies differ from their closest relatives on New Guinea and in East Asia in carrying more gene variants that promote short stature. The genetic differences testify to recent evolution—the island rule at work. And they imply that the same force gave the hobbit its short stature, the authors say. The team found no trace of archaic DNA that could be from the hobbit. Instead, the pygmies were most closely related to other East Asians. The DNA suggested that their ancestors came to Flores in several waves: in the past 50,000 years or so, when modern humans first reached Melanesia; and in the past 5000 years, when settlers came from both East Asia and New Guinea.
The pygmies’ genomes also reflect an environmental shift. They carry an ancient version of a gene that encodes enzymes to break down fatty acids in meat and seafood. It suggests their ancestors underwent a “big shift in diet” after reaching Flores, perhaps eating pygmy elephants or marine foods, says population geneticist Rasmus Nielsen of UC Berkeley, who was not part of the study. The pygmies’ genomes are also rich in alleles that data from the UK Biobank have linked to short stature. Other East Asians have the same height-reducing alleles, but at much lower frequencies. This suggests natural selection favoured existing genes for shortness while the pygmies’ ancestors were on Flores. “We can’t say for sure that they got shorter on Flores, but what makes this convincing is they’re comparing the Flores population to other East Asian populations of similar ancestry,” says population geneticist Iain Mathieson of the University of Pennsylvania. The discovery fits with a recent study suggesting evolution also favoured short stature in people on the Andaman Islands, Green says. Such selection on islands boosts the theory that the hobbit, too, was once a taller species that dwindled in height over millennia on Flores. “If it can happen in hippos, it can happen in humans,” Tucci says. “Humans are not as special as we think. This shows we evolve like all other animals.”
Taking a big bite of healthcare data The cost of caring for oral diseases in the EU is around 80 billion Euros a year, yet these conditions can be prevented through relatively simple measures. We spoke to Professor Kenneth Eaton about the work of the Added Value for Oral Care (ADVOCATE) project which is investigating how healthcare systems can be modernised in line with today’s demands. “With ever more sophisticated and expensive therapies, arising from medical-technical innovations and an ageing population, the costs and complexity of care are expected to increase further in future, prompting researchers to look again at how healthcare is provided. The design of healthcare delivery systems has evolved over time and varies between countries. However, only a few countries have adopted a preventive approach to healthcare provision, and where they have, it’s been only partial. Nevertheless, the pressure of economics is now such that we need to strengthen this, to prevent the occurrence of ever greater numbers of diseases and conditions,” outlines Professor Kenneth Eaton, leader of work package 6 within the Horizon 2020 programme ADVOCATE project, which is exploring how routinely collected data can be used in modernising oral health system planning in line with contemporary demands. This should in future mean a greater focus on prevention rather than treatment. “It’s generally accepted in healthcare today that you have to explain to patients – and the public in general – how to prevent disease conditions from occurring. Then it’s largely in their hands,” says Professor Eaton. This article will now explain the challenges of preventing oral diseases and give details of the ADVOCATE project.
Oral diseases There are essentially three groups of oral diseases, namely tooth decay, gum diseases and diseases of the cheeks, tongue, palate and lips including oral cancers, all of which can be prevented through relatively simple measures. “Tooth decay – or dental caries – is the most common disease in the world and is entirely preventable. If you restrict your sugar intake to mealtimes, use fluoride toothpaste, and don’t have sweet, sugary drinks between meals, then you can prevent it,” stresses Professor Eaton. Gum diseases are one of the most common groups of
disease and are also preventable with meticulous cleaning at the point where the tooth and the gum meet and between the teeth. Professor Eaton says diseases like oral cancer can in some cases be prevented by adopting a healthy lifestyle (not smoking and not drinking excess alcohol) which, alongside dietary changes are central elements of efforts to improve oral health. The rationale behind placing greater emphasis on prevention seems to be clear, enhancing quality of life and reducing the overall burden of care. Yet in many countries across Europe oral healthcare systems are structured rather differently. Many systems reward treatment with payment on a fee per item of treatment basis. Prevention is less easy to monitor and is often inadequately remunerated explains Professor Eaton. “If you look at Denmark, where there is a
The ADVOCATE project
ADVOCATE is a collaboration between universities, public and private funders of healthcare, industry and patients from Denmark, Germany, Hungary, Ireland, the Netherlands and the United Kingdom, which seeks to develop an innovative evidence-informed oral health care model which is patient-oriented, delivers safe and efficient care, and is sustainable and resilient to crises. It has three main objectives, which are to:
• Design an innovative healthcare system which promotes chronic disease prevention.
• Establish a set of harmonised indicators which acknowledge success in the prevention of disease and avoidance of unnecessary treatment.
• Provide evidence-informed guidance to policy and decision makers for improved health systems planning.
very large public sector, about half of oral healthcare is provided within public sector clinics, and around half by private dentists. In the public clinics there is no incentive to carry out large amounts of treatment, because their staff are paid salaries. The result is that there's more time for advice and preventive treatments to be given to patients." Funding organisations, such as state and private insurance companies, have often been reluctant to pay for preventive work. While there will always be accidents and traumas, ideally only relatively minimal amounts of invasive dental treatment would be required. A large part of the ADVOCATE project's work centres around seeking ways to achieve this ambition, drawing on existing examples of good practice from across Europe.
The project is funded by the Horizon 2020 programme and is coordinated from University College Cork by Professor Helen Whelton. One part of the project’s work centres around analysing large volumes of data (big data) from oral healthcare insurers in European countries. “By looking at the generality of information across a region, you can spot trends and look at why some practitioners might be performing better than others,” explains Professor Eaton. This can then be used to inform the debate around the development of healthcare models, with researchers aiming to identify which approaches have proved effective. “Denmark and Scotland have done particularly well as a result of better prevention aimed at children and stand as positive examples of the benefits of
placing greater emphasis on prevention". "The project is using big data to see if we can help push for change towards a more preventive approach. This includes change on an individual level, so that members of the public are aware of what measures they can take to prevent oral disease. However, it is oral care professionals who give the public preventive advice, and so the project aims to increase the focus on prevention in practice by supporting oral care professionals in delivering effective disease prevention. This approach uses extrinsic and intrinsic motivation to achieve the desired behaviour. Extrinsic motivators will be identified by the project through analysing the impact of historical changes in system design on oral care outcomes, whilst taking account of the context and population characteristics. The comparison of outcomes of diverse oral care systems will also be helpful. Intrinsic motivation will use a newly developed dashboard which can illustrate variation in oral health, treatment and prevention using both high-level claims data and patient-reported data gathered through an innovative patient engagement app. The dashboard indicators are measures* considered valid, important and relevant for oral health and oral health care, and have been developed based on a literature review, an expert meeting, the Delphi method, and a World Café. A diverse stakeholder base was engaged in identifying and defining the most appropriate measures. As mentioned previously, groups consulted included general dental practitioners, patients, insurers, and policy makers from Denmark, Germany, Ireland, Hungary, the Netherlands and the United Kingdom. The approach is being tested by general dental practitioners in field studies in the six partner countries.
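The regional claims-data comparison described above can be sketched in a few lines. This is an illustrative toy only: the record format, the "preventive" versus "restorative" item categories and all numbers are invented here, and are not the project's actual data model or indicators.

```python
from collections import defaultdict

def prevention_share_by_practice(claims):
    """claims: iterable of (practice_id, item_type) pairs, where item_type is
    'preventive' (e.g. advice, fluoride varnish) or 'restorative' (e.g. a filling)."""
    totals = defaultdict(int)
    preventive = defaultdict(int)
    for practice_id, item_type in claims:
        totals[practice_id] += 1
        if item_type == "preventive":
            preventive[practice_id] += 1
    # Share of claimed items that were preventive, per practice
    return {p: preventive[p] / totals[p] for p in totals}

claims = [
    ("practice_A", "preventive"), ("practice_A", "restorative"),
    ("practice_A", "preventive"), ("practice_B", "restorative"),
    ("practice_B", "restorative"), ("practice_B", "preventive"),
]
shares = prevention_share_by_practice(claims)
# Practices with unusually low shares might merit a closer look at their case mix
```

In practice the ADVOCATE dashboard works with far richer indicators and patient-reported data, but the underlying aggregation step is of this general shape.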
ADVOCATE Added Value for Oral Care
Project Objectives
ADVOCATE is a collaboration between universities, public and private funders of healthcare, industry and patients from Denmark, Germany, Hungary, Ireland, the Netherlands and the United Kingdom (UK),
which seeks to develop an innovative evidence-informed oral health care model which is patient-oriented, delivers safe and efficient care, and is sustainable and resilient to crises. It has three main objectives, which are to:
• Design an innovative healthcare system which promotes chronic disease prevention.
• Establish a set of harmonised indicators which acknowledge success in the prevention of disease and avoidance of unnecessary treatment.
• Provide evidence-informed guidance to policy and decision makers for improved health systems planning.
Although they are based on oral health, the outputs from the ADVOCATE project should be directly applicable to other chronic diseases.
ADVOCATE (Added Value for Oral Health) “This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 635183”
University of Leeds, Medical University of Heidelberg, Academic Centre for Dentistry Amsterdam, University College Cork, University of Copenhagen, Semmelweis University (Budapest), Achmea Health Insurance (Netherlands), SpectrumK (Health Insurance Germany), DeCare Dental Insurance (Dublin), NHS England, Aridhia Informatics (Edinburgh).
Professor Helen Whelton Head of College of Medicine and Health & Chief Academic Officer HSE South, South West Hospital Group University College Cork Erinville, Western Road, Cork T: +00 353 (0) 21 490 1209 E: H.Whelton@ucc.ie W: www.advocateoralhealth.com
ADVOCATE Group Photo
Aridhia (one of the partners) has provided a secure electronic template for partners to work on anonymised data. The project's work in developing an oral healthcare dashboard and a patient app holds clear relevance in these terms. The dashboard brings together data from several European countries, enabling healthcare professionals, policy-makers and the general public to gain deeper insights into important questions around oral healthcare. "We need to make people aware of how they can reduce the risk of experiencing problems," stresses Professor Eaton. Ultimately, the primary responsibility for disease prevention rests with the individual. "That goes back to diet and individual habits, and we include not just food in that, but also drinks. So carbonated drinks and sugary drinks are pretty disastrous for the mouth if they're consumed continuously, rather than just in short bursts," continues Professor Eaton. This is known to be a major risk factor in terms of the incidence of oral diseases
and also obesity, so campaigners are keen to heighten awareness and encourage individuals to take greater responsibility for their own health. Over time public health campaigns have led to reductions in the number of people smoking; now Professor Eaton believes it is time to put greater emphasis on disease prevention. "We should establish more initiatives to improve disease prevention," he says. There have been changes in this direction, with the UK and Irish governments recently introducing a tax on sugary soft drinks. As mentioned at the beginning of this interview, the wider context of this research is growing concern over the cost of healthcare provision. As costs rise and societies age, Professor Eaton says that healthcare models are likely to evolve in future. "We're trying to put the emphasis much more on patient involvement and disease prevention. We aim to show groups of dentists how they can encourage this in their own practices."
The measures are published in: Baâdoudi F, et al., A Consensus-Based Set of Measures for Oral Health Care. Journal of Dental Research. 2017, Vol. 96(8): 881–887.
Professor Helen Whelton (left) Professor Kenneth Eaton (right)
Professor Helen Whelton is Head of the College of Medicine and Health at University College Cork (UCC) in Ireland. She has long experience in dentistry, including working as Dean of Dentistry at the University of Leeds, and as President of the International Association for Dental Research. Professor Kenneth Eaton leads ADVOCATE work package 6 - Exploitation and Dissemination. He has very wide experience in all areas of dentistry and has advised many European Health Ministries. He has been Adviser to the Council of European Chief Dental Officers since 1992.
Photo by Michael Browning on Unsplash
The Materials Innovation Factory at the University of Liverpool, where Professor Andrew Cooper’s group is based.
The properties of a material in the solid state are determined by how it assembles at the molecular level. We spoke to Professor Andy Cooper and Professor Graeme Day about the work of the RobOT project in developing tools to predict the properties of molecular crystals, which could open up new possibilities in the development of functional materials.
Crystal Engineering: A new level of 'designability'
A lot of attention in chemistry research over
the last 10 years or so has been devoted to the development of metal-organic frameworks, a type of framework material that uses metal atoms to link together molecules, usually resulting in a crystalline structure. While there is a good understanding of how molecules fit together in these frameworks, there are still some drawbacks to existing methods of assembling molecules, a topic central to the work of the RobOT project. “We had the idea of making more designable crystals,” says Professor Andy Cooper, the project’s Principal Investigator. This work is based on the relatively well-established area of molecular crystal engineering, to which Professor Cooper and his colleagues in the project then applied two further enabling elements. “The first is crystal structure prediction, to predict, from first principles, how these molecules assemble. Then we’re also using high-throughput robotic methods, to more rapidly explore the function space,” he outlines.
The wider goal in the project is to develop a tool capable of predicting how organic molecules will assemble, rather than forcing them to assemble in a particular way. This would enable researchers to identify which molecules might be well-suited to a specific application, and so move away from trial-and-error in the development of functional molecular crystals towards design. "We will take a molecule and predict how it packs. If it's promising we might choose to work on it – if not, then we move on to predict the properties of the next one. So it's kind of a selection process rather than engineering," says Professor Cooper. There are multiple ways in which a molecule might pack though, and molecular crystallisation is not an intuitive process, so sophisticated predictive computational methods are required. "We've adopted an approach distinct from the chemical, rules-based approach. The benefit is that if you get it right, then it's effectively generic, and can be used to predict
an enormous array of structures," explains Professor Cooper. This research involves both experimental characterisation and synthesis of molecules, along with computational prediction of their properties. On the experimental side, Professor Cooper and his colleagues have analysed large numbers of molecules, also looking to calculate key properties. "We focused on porosity, and on the way that large surface areas can absorb gases, for methane storage for example. But the principles of the approach could be applied to conductivity, light absorption, really any property that you can calculate," he says. The team has developed what are called energy-structure-function maps to search for specific properties, which represent a valuable resource for further analysis. "For example, if you think of a new application in something like spintronics, or a memory application, then you can look back to all of the molecules that you've considered previously and recalculate new properties," outlines Professor
Cooper. “To some extent, the power of this grows as you do more predictions, and you can build a database of predicted structures.” The work of Professor Graeme Day is crucial to this ability to predict the properties of a crystal structure. Based at the University of Southampton, Professor Day and his group hold deep expertise in the development of predictive computational methods. “We’ve been working on developing algorithms and software to predict, given a specific molecule, the likely ways in which it will pack into a 3-dimensional crystal structure,” he explains. Molecules can never fill space perfectly however, so they will also leave gaps when they pack together in a crystal. “We’re looking for the types of interactions between molecules that will in a way enhance that free space that you find in crystal structures. We’re using strong interactions between functional groups on molecules, testing ideas that we’ve come up with in collaboration with the synthetic chemists, and putting them through the computational algorithms to see if our expectations pan out,” says Professor Day. A molecule within a crystal structure is typically surrounded by copies of itself, so one of the major challenges in research is to understand the ways in which that molecule
interacts with itself, to essentially optimise the stability of a structure. This is a complex area of research, as for any given molecule, a computer can generate tens of thousands, or even millions, of possible structures. "We spend a lot of time calculating and assessing the relative stabilities of these structures, to give us an idea of which would actually form if we went ahead and made that particular molecule," outlines Professor Day. While a major priority in the project has been finding promising molecules in terms of porosity, Professor Day is also interested in applying the same types of approaches to predict structures that lead to other properties. "We are also investigating the possibility of applying the methods to look for molecules where the way that they pack into the crystal structure leads to good mobility of electrons, so that material could then be used in electronic devices," he says. The predictive methods developed within the project are very powerful, yet Professor Cooper acknowledges that they are not yet infallible, in terms of guaranteeing that a particular molecule will pack in a certain way and give rise to the desired properties. The chemistry is extremely subtle, and it's not yet possible to predict with certainty how a molecule will pack within a crystal structure, so Professor Cooper says the methods and tools will be used more to assess probabilities rather than provide guarantees. "We can assess the probability of a molecule packing in the right way, and we can certainly also get a good sense of which systems will have no chance of functioning in a particular way," he says. "For example, if you predict the properties of a certain molecule and there are no porous structures on the landscape, then the chance of the eventual material being porous is probably very low. So you can eliminate the certain fails – the things that really have no chance – and devote more attention in research to things that are more likely to succeed."
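The triage Professor Cooper describes, eliminating the "certain fails" whose predicted landscapes contain no competitive porous structures, can be sketched roughly as follows. The energy window, porosity threshold and example landscapes are invented for illustration; real crystal structure prediction produces far richer data than these bare pairs.

```python
def has_porous_low_energy_structure(landscape, energy_window=5.0, min_porosity=0.1):
    """landscape: list of (relative_energy, porosity) pairs for a molecule's
    predicted crystal structures. A molecule passes only if some structure that
    is competitive in energy (within energy_window of the global minimum) is
    also predicted to be porous."""
    e_min = min(energy for energy, _ in landscape)
    return any(
        porosity >= min_porosity
        for energy, porosity in landscape
        if energy - e_min <= energy_window
    )

# Invented example landscapes:
dense_only = [(0.0, 0.01), (2.1, 0.03), (9.8, 0.50)]  # porous form far too high in energy
porous_ok = [(0.0, 0.02), (1.3, 0.25)]                # competitive porous structure exists
candidates = {"mol_A": dense_only, "mol_B": porous_ok}
worth_pursuing = [m for m, land in candidates.items()
                  if has_porous_low_energy_structure(land)]
# mol_A is a "certain fail" and is eliminated; mol_B goes forward
```

The point is not that the filter guarantees success, but that it cheaply removes molecules with essentially no chance, exactly the probability-based reading Professor Cooper gives above.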
Materials discovery using energy-structure-function maps: In the energy-density plots each point corresponds to a computed crystal structure and each structure is colour coded to highlight a structural or functional property. These energy-structure-function maps show the predicted pore dimensionalities, and calculated deliverable methane capacities, for the predicted T2 crystal structures. Using these energy-structure-function maps, the project team were able to find new functional materials.
RobOT Robust Organic Tectonics
Project Objectives
The objective of the ERC RobOT project was to introduce a new level of ‘designability’ into the discovery of functional molecular crystals, by integrating computational prediction with synthesis, analysis, and application.
Funded by an ERC Advanced Grant (ERC-AG-PE5 – Materials and Synthesis) through grant agreement number 321156 (ERC-AG-PE5-ROBOT).
• University of Liverpool • University of Southampton
Crystals of one of the materials that was discovered using the new method, as seen by an electron microscope. This is a structure with a very high methane deliverable capacity, making it promising for natural-gas-powered vehicles. Credit: University of Southampton. Read more at: https://www.eurekalert.org/pub_releases/2017-03/uos-mm032217.php
Project Coordinator, Cooper Group, University of Liverpool Crown Street, Liverpool L69 7ZD, United Kingdom T: +44 151 794 3548 E: email@example.com W: https://www.liverpool.ac.uk/coopergroup/research/ Functional materials discovery using energy–structure–function maps, Nature, 2017, 543, 657–664
Professor Andy Cooper (left) Professor Graeme Day (right)
Property prediction
This is very valuable in terms of molecular design. It can sometimes take six months or more to make a new molecule, with no guarantee that it will have the desired properties, whereas predictive methods could enable researchers to work significantly more efficiently. "Instead of making six molecules and hoping that one of them works, we could computationally predict the properties of 100 molecules, or 1,000, and then focus on the most promising three or four," explains Professor Cooper. These calculations can be performed relatively quickly, giving researchers important insights into the most promising molecules with respect to certain properties. "The idea is to design the properties in silico, in a computer, from which you can have a good expectation that you will get the property that you want," outlines Professor Cooper. "In the past, trial-and-error methods have commonly been used, even in crystal engineering, which is based on control of crystallisation. We're trying to move away from trial-and-error towards design." The tool itself is not yet ready for the commercial marketplace, yet it is becoming more user-friendly over time, and Professor Cooper believes the project's research holds clear relevance to industry. For example, there is a lot of interest in using organic semiconductors in mobile devices. "You could assay 1,000 candidate molecules, and from that calculate which have promising charge transport and which behave as semiconductors," says Professor Cooper. Cost-efficiency is of course an important issue for the commercial sector, something of which Professor Cooper is well aware. "Computation can be seen as expensive in terms of time, but lab work is costly in both time and money. The cost of the computation is more than outweighed by the saving of working on the right molecules, rather than molecules that have no chance of assembling in the way that you want," he says. "You can also trial crazy things which you wouldn't want to make speculatively because the chance of success might be very low." The primary focus of this research at this stage is improving the design of the tool, rather than exploring commercial applications. While the RobOT project is coming towards the end of its funding term, researchers are keen to pursue further investigation in this area in the future. "The next step is to let the computer inform what new molecules we try out when we're looking for a certain property," outlines Professor Day. "We could also broaden out the scope and type of properties and types of materials where we can apply these tools and methods. We think that they could have a big impact across a broad range of application areas."
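The screening arithmetic Professor Cooper outlines, computing properties for 1,000 candidates and synthesising only the best few, amounts to a rank-and-shortlist step. A minimal sketch, with a random number as a stand-in for the real property calculation (which would be something like a predicted charge mobility for each candidate's likely packing):

```python
import random

def shortlist(candidates, predicted_score, top_n=4):
    """Rank candidates by a predicted figure of merit and keep the best few."""
    return sorted(candidates, key=predicted_score, reverse=True)[:top_n]

random.seed(0)
# Invented stand-in scores for 1,000 hypothetical molecules
scores = {f"mol_{i}": random.random() for i in range(1000)}
best = shortlist(scores, predicted_score=scores.get, top_n=4)
# Four molecules go to the lab instead of a thousand
```

However crude the stand-in, this is where the cost argument lives: the expensive step (synthesis) is applied only to the shortlisted candidates.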
Professor Andy Cooper Andy Cooper is a Professor of Chemistry at the University of Liverpool and Director of the Materials Innovation Factory. A unifying theme in his research is the close fusion of computational prediction and experiment to discover new materials with step-change properties. He was elected to the Royal Society in 2015. Professor Graeme Day Graeme Day is a Professor of Chemical Modelling at the University of Southampton. His research interests are the development of computational methods for predicting the structures and properties of molecular materials, and in applying these methods to a range of problems, including the design of functional materials.
Keeping "bad" cells alive to prevent leukemia
Changes in apoptotic signalling are a major factor in the development of bone marrow failure, which can leave people susceptible to developing leukemia. Researchers in the ApoptoMDS project are exploring a new hypothesis around the progression of bone marrow failure to leukemia, as Dr Miriam Erlacher explains.
A certain proportion of cells in our bodies die every day through a process called apoptosis, which is the focus of a lot of attention in research, with scientists investigating a number of questions around the relationship between apoptotic signalling and specific diseases. This is a major area of interest to Dr Miriam Erlacher, a paediatrician at the Freiburg University Medical Centre, who is investigating different blood-borne diseases, including myelodysplastic syndromes (MDS), juvenile myelomonocytic leukemia (JMML) and acute leukemia. "MDS and acute leukemia can arise de novo from the bone marrow. They can also occur as a secondary event, in patients with inherited bone marrow failure syndromes," she outlines. "Fanconi anaemia, a type of bone marrow failure syndrome, causes problems with blood cell formation. Patients with Fanconi anaemia face two types of problems. Firstly, patients with bone marrow failure do not have enough cells. Then it also leaves people more susceptible to developing certain types
of cancers and blood-borne diseases, such as MDS and leukemia. The relationship between these two types of problems is unclear." When Hanahan and Weinberg established their 'hallmarks of cancer' model, they included cell death resistance as an essential factor for a tumour to emerge. "Typically, when a cell gets into a pre-malignant stage, the cell realises that it should die, and then it undergoes apoptosis," outlines Dr Erlacher. Cells need to survive such stress signals for a malignancy to develop, so a cancer cell would be apoptosis-resistant. "The conventional model says that cancer can only be avoided if all pre-malignant cells die," says Dr Erlacher. The team in the ApoptoMDS project are now exploring a very different hypothesis, however: that apoptosis within tissue does not unambiguously prevent cancer formation, but rather can promote tumorigenesis. "It's actually better to have many pre-malignant cells, as killing too many pre-malignant cells puts a high proliferative and selection pressure on the
remaining cells, which leads to further malignant progression when coupled to genetic instability,” explains Dr Erlacher.
Unexpected findings in a mouse model
This hypothesis has its roots in an experimental mouse model developed earlier in Dr Erlacher's career, in collaboration with Dr Andreas Villunger at the Medical University of Innsbruck, who supervised her PhD. "We wanted to understand whether apoptosis of DNA damaged cells was sufficient to prevent leukemogenesis induced by repeated cycles of irradiation. We hypothesized that inhibiting apoptosis by genetic deletion of the proapoptotic gene PUMA would lead to more rapid leukemia development. But surprisingly, PUMA deficient mice did not develop any T cell lymphomas, while all other mouse strains rapidly developed lymphomas," she outlines. "We could show that this was due to a better survival of cells, and as a consequence, reduced proliferation and selection stress.
We concluded that it is better that damaged – and maybe pre-malignant – cells survive. This reduces the necessity of proliferation and the selection pressure."
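The logic of this finding can be illustrated with a deliberately crude back-of-envelope model (the linear mutation-per-division assumption and all numbers are invented here, not taken from the project): the more damaged cells are killed, the more compensatory divisions the survivors must undergo to repopulate the marrow, and each division is an opportunity for a further mutation.

```python
import math

def compensatory_divisions(pool_size, fraction_killed):
    """Population doublings survivors need to restore the original pool size."""
    survivors = pool_size * (1 - fraction_killed)
    return math.log2(pool_size / survivors)

def expected_new_mutations(pool_size, fraction_killed, mutations_per_division=0.01):
    """Toy estimate: mutation opportunities scale with compensatory divisions."""
    return compensatory_divisions(pool_size, fraction_killed) * mutations_per_division

low_apoptosis = expected_new_mutations(10_000, 0.10)   # few damaged cells killed
high_apoptosis = expected_new_mutations(10_000, 0.90)  # most damaged cells killed
# Heavy killing forces far more proliferation in the surviving clone, and hence
# more chances to acquire further mutations when coupled to genetic instability
```

Real haematopoiesis is vastly more complicated than a doubling count, but the qualitative direction matches the observation above: sparing damaged cells reduces proliferation and selection pressure.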
Bone marrow failure model
Since leaving her position in Innsbruck, Dr Erlacher has started working as a paediatric haematologist, a role in which she is investigating whether the hypothesis described above also holds true for some diseases in humans. While mice repeatedly subjected to irradiation develop T cell lymphoma, human patients treated with chemo- or radiotherapy are at risk of developing secondary MDS. "I realized that T cell lymphoma in mice and MDS induced by chemotherapy might have many things in common," says Dr Erlacher. Due to the nature of her job as a paediatric haematologist and oncologist, Dr Erlacher and her team decided not to focus on treatment-related MDS, but on other, similar diseases: MDS and leukemia occurring secondary to bone marrow failure. "We are now working to test whether the hypothesis holds true in such cases," she says. Researchers in the ApoptoMDS project are testing this hypothesis in a mouse model of bone marrow failure called dyskeratosis congenita, using genetic means to inhibit apoptosis in mice. Dyskeratosis congenita differs from Fanconi anaemia. "Patients with dyskeratosis congenita have cells with very short telomeres. The telomeres get shorter and
shorter, and once they are critically short, the cells either die, or they stop proliferating and undergo senescence,” explains Dr Erlacher. “Cells from patients with Fanconi anaemia accumulate DNA damage, and as a consequence die or stop proliferating.” This can eventually lead on to the development of leukemia, yet it is difficult to predict the rate of progression. The idea is that in an empty bone marrow, the surviving cells have to strongly proliferate to compensate for cell loss. “Under this situation, a malignant cell can arise and outgrowth can occur. So the very first step is to inhibit bone marrow failure,” says Dr Erlacher.
The goal in research at this stage is not to completely eradicate bone marrow failure, but rather to extend the period over which individuals don't experience any problems. "First of all, you would like to try and delay bone marrow failure, then as a consequence you expect that leukemia develops later. So it's a kind of delaying mechanism," says Dr Erlacher. Fundamental research into apoptotic mechanisms is central to this wider goal; Dr Erlacher and her colleagues are comparing healthy cells with cells from mice with bone marrow failure, aiming to gain deeper insights. "We compare the cells in vivo, in the bone marrow of mice. And we compare the cells ex vivo, in vitro," she continues. The wider aim here is not just to measure rates of apoptosis in these mice, but to also inhibit it in bone marrow cells. From this, researchers can then look to assess whether inhibiting apoptosis helps to delay the development of bone marrow failure, and then leukemia later on. "We already have some evidence that bone marrow failure occurs later on, so the mice are much less sick and they do not die so quickly. But we do not know yet whether this inhibits leukemogenesis," outlines Dr Erlacher. This is a point that Dr Erlacher plans to investigate, using samples collected from patients at the Freiburg University Medical Centre. "We have a lot of material, collected over the last twenty years or so. We can then compare the results from mouse models with human cells," she says. There are significant challenges around this work, in particular investigating the factors that might pre-dispose an individual patient to developing bone marrow failure and then MDS, which is an important part of the project's agenda. While mice can be bred
[Figure: pathway from bone marrow injury to leukemia. A first malignancy treated with repeated bone marrow injury by chemo- and/or radiotherapy, or a bone marrow failure syndrome (Fanconi anemia: DNA damage; dyskeratosis congenita: short telomeres), produces damaged stem cells; proliferation pressure on premalignant cells leads to dysplasia and cytopenia, a clonal advantage/differentiation block, mutations conferring apoptosis resistance, and ultimately AML cells. Modified from: How cell death shapes cancer, Labi and Erlacher, Cell Death Dis, 2015.]
ApoptoMDS Hematopoietic stem cell Apoptosis in bone marrow failure and MyeloDysplastic Syndromes: Friend or foe?
Inherited bone marrow failure syndromes (e.g. Fanconi anemia or dyskeratosis congenita) predispose to malignancies such as myelodysplastic syndromes and acute leukemia. It is postulated that increased susceptibility to apoptosis in hematopoietic stem and progenitor cells (HSPCs) contributes to hematopoietic failure. We hypothesize that HSPC apoptosis not only leads to bone marrow aplasia but also paves the way for leukemogenesis. It is the aim of ApoptoMDS to identify deregulated apoptosis pathways. The team is testing whether inhibiting apoptosis is sufficient to improve HSPC function, mitigate bone marrow failure and prevent leukemia. This will open doors for novel therapeutic approaches to extend the less severe symptomatic period for affected patients.
Funding scheme: ERC-StG-2014 - ERC Starting Grant. Total cost: EUR 1 372 525 EU contribution: EUR 1 372 525
Project Coordinator, Dr. Miriam Erlacher, MD PhD Division of Pediatric Hematology and Oncology Department of Pediatrics and Adolescent Medicine Freiburg University Medical Center Mathildenstr. 1 79106 Freiburg T: +49 (0)761/270 43010 E: firstname.lastname@example.org W: https://www.uniklinik-freiburg.de/index. php?id=15941&L=1 Miriam Erlacher, MD PhD
Miriam Erlacher, MD PhD, is an attending physician in the Division of Pediatric Hematology and Oncology, University Medical Center Freiburg. She is most interested in leukemia, bone marrow failure syndromes and other syndromes with predisposition to myeloid malignancies. Her ApoptoMDS project brings together her research interests in apoptosis signaling and her clinical interests.
with a particular defect, allowing researchers to analyse them even before they get sick, it is more difficult to look at the very early, initial stages of disease in human patients. “We usually get the first material when a patient actually gets sick. So it’s very difficult to analyse the first stages of disease,” explains Dr Erlacher. Researchers have been able to test children from families with a history of a particular genetic problem however, from which some new insights can be drawn. “Sometimes we can analyse patients before they get sick,” says Dr Erlacher.
Leukemogenesis
A second part of the project centres around studying the relationship between MDS and leukemia. Researchers aim to build a deeper understanding of how the disease develops, from which point scientists could then potentially look at intervening and developing new therapeutic approaches. "The old-fashioned way of understanding leukemogenesis is that inhibiting apoptosis in pre-malignant cells will lead to rapid development of leukemia, because we prevent cells from dying when they get malignant. We hope that by inhibiting apoptosis, we can help delay the development of bone marrow failure and, later on, also leukemia," says Dr Erlacher. This also relates to patients with what is called treatment-induced MDS. "This is very similar to secondary MDS in patients with bone marrow failure," continues Dr Erlacher. This is not a syndrome that patients have from birth, but rather it develops as a result of chemotherapy treatment. Many bone marrow cells die following chemotherapy, which can leave patients vulnerable. "When patients are treated for breast cancer, they undergo chemotherapy. In every cycle the bone marrow cells die, they undergo apoptosis. After every cycle the cells have to proliferate, and then they are again killed,"
explains Dr Erlacher. Preventing bone marrow cells from dying during treatment could help to delay or inhibit the development of treatment-induced MDS, believes Dr Erlacher. “We should look to inhibit apoptosis. This is not caused by an actual cell defect, but rather apoptosis is caused by treatment, by the chemotherapeutics,” she says.
Inhibiting cell death

The wider objective in this research is to demonstrate that inhibiting cell death in these syndromes can delay the development of myelosuppression and also prevent further progression on to leukemia. This could potentially open up new therapeutic approaches to prevent MDS, yet there are still some hurdles to overcome in these terms. “The difficulty with translating this finding to clinics is that we would have to find a drug or other product that inhibits apoptosis. However, currently companies tend to prioritise finding drugs to kill cells,” explains Dr Erlacher. “When we have such drugs, we would aim to target them at the bone marrow. We don’t want to keep all cells alive, we would want just to keep the bone marrow cells alive while the chemotherapy has to remain effective in treating the tumour cells.” This could be an effective way of protecting the bone marrow cells, either from intrinsic defects in bone marrow failure syndromes or from the effects of chemotherapy, preventing the development of secondary MDS. A deep understanding of the fundamental mechanisms behind the progression of a disease is essential to the development of these types of drugs, and this is the priority in research for Dr Erlacher at this stage. “The first step is to understand the diseases, and then you can look to see what can be translated,” she outlines. “I’m interested in translation, but I think we need robust basic research before we can consider translation.”
Probing personality differences

We all meet a wide variety of personality types on a day-to-day basis, yet the underlying basis behind these differences is not clear. Professor Christian Montag brings together different forms of data, including information on molecular mechanisms, brain structure and individual behaviour, to build a deeper picture of why each of us is the person that we are.

A wide range of personality traits can be observed in any classroom, workplace or social setting, from the introvert to the extravert, the conscientious to the blasé. Based at Ulm University in Germany, Professor Christian Montag and his research group aim to probe the underlying basis behind these personality differences, using data from both Germany and China. “We try to pinpoint the areas on the genome which are of relevance for explaining individual differences in personality traits, such as being extraverted, open, or conscientious,” he explains. These particular traits may not be displayed consistently however, and an individual’s behaviour may depend to a degree on the situation in which they find themselves. “The case made by the psychologist Walter Mischel is that we also have to consider context in order to grasp stability across situations. An extravert is not an extravert every day of their lives,” points out Professor Montag.
Personality differences

This strand of Professor Montag’s work follows in the long tradition of investigating twins to gain deeper insights into the influence of genetics and the environment on individual differences in personality and intelligence. While this is an important part of the group’s research, Professor Montag is keen to stress that their work also addresses several other areas of interest, including the way people use the Internet. “We are trying to understand individual differences in social media behaviour and Internet addiction. We use a number of different methods in this, from molecular genetics, to MRI, to tracking an individual’s behaviour in everyday life on their smartphone,” he outlines. “We’ve conducted several studies, showing for example that extraverts spend more time on WhatsApp. We are now looking at bringing together data on real-world behaviour from smartphones with neuroscientific data.” The hope is that this real-world data on individual behaviour will give researchers additional insights beyond what can be gleaned from self-reporting. While information about how people perceive themselves is important to psychologists, it is not always entirely reliable, and more precise data can help researchers to build a more complete picture. “The gap between what an individual thinks they are doing on the phone and what they are actually doing is really interesting,” says Professor Montag. By looking at the gap between these variables, researchers can investigate the roots of a range of problem behaviours, for example gaming disorder, which has recently been recognised as a specific disorder by the World Health Organisation in ICD-11. “This has not happened for other specific areas of Internet addiction however. There are many different patient groups, and they are attracted to many different online channels,” says Professor Montag.

Image of the human brain with the nucleus accumbens highlighted. Thanks to Sebastian Markett for providing the image.
Screen results of a gel electrophoresis, showing genotype results of different persons.

This is an issue that affects a large proportion of the population, to varying degrees. While most people would not consider themselves to be ‘addicted’ to their smartphone, their usage patterns may still be having detrimental effects on their everyday life and well-being. “We have done studies which show that individuals with a tendency towards smartphone addiction are less productive, because they are constantly distracted by incoming messages and aren’t really concentrating on what they should be doing,” explains Professor Montag. It’s not just the extreme forms of smartphone usage that are of interest to Professor Montag, but also what we might think of as healthy or normal levels of usage. “I’m a strong advocate of dimensional models, ranging from healthy behaviour, over problematic behaviour, through to an individual who is addicted to his/her smartphone,” he says. An individual who is addicted to their smartphone or the Internet may also be more likely to experience depression, another topic that Professor Montag touches on in his research. Both genetic and environmental factors are involved here, says Professor Montag. “Together, this significantly heightens the risk of an individual becoming depressed. So it shows how strongly genetics and the environment interact to produce a certain psychological phenotype,” he says. The group are using a number of different methods to investigate individual differences, including digital phenotyping and analysis of smartphone usage data. “We have data indicating that a depressed state co-varies with certain smartphone variables,” he says. “We can hypothesise that when an individual is starting to get better they will start to contact their social network again, so their call behaviour will change, and they will have a more active lifestyle.” There may also be changes in the language a person uses in text messages as their mood improves, with text mining revealing a greater frequency of positive words as they recover. This data can be combined with other information to build a deeper understanding of an individual’s mood. “We aim to understand, on a longitudinal basis, how a person feels and behaves in everyday life,” explains Professor Montag. This is of course an important question for employers, who typically want to assess a prospective employee’s personality before deciding whether to hire them or not. “Employers usually want to see whether you are a conscientious person for example, as meta-analysis shows that it is a good predictor of job performance,” outlines Professor Montag. “On the other hand, high scores for neuroticism are associated with a higher probability of experiencing depression. So personality matters, in terms of predicting
certain variables, although this brings along ethical challenges in many ways.”
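The text-mining idea mentioned above can be illustrated with a minimal sketch: count how often ‘positive’ words appear in a person’s messages over time. This is not the group’s actual analysis pipeline; the word list and messages below are invented for illustration, and real digital-phenotyping studies rely on validated sentiment lexicons and far richer models.

```python
from datetime import date

# Hypothetical positive-word lexicon; real studies use validated
# sentiment dictionaries rather than a toy list like this one.
POSITIVE_WORDS = {"good", "great", "happy", "fun", "love", "nice"}

def positive_word_rate(messages):
    """Fraction of words in a batch of text messages that are 'positive'."""
    words = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    if not words:
        return 0.0
    return sum(w in POSITIVE_WORDS for w in words) / len(words)

# Toy longitudinal data: messages grouped by week.
weeks = {
    date(2018, 9, 3):  ["feeling tired again", "can't make it today"],
    date(2018, 9, 10): ["that was fun", "good to see you"],
    date(2018, 9, 17): ["great news", "happy to help", "love this"],
}

# A rising rate of positive words over successive weeks would be one
# signal, among many others, that a person's mood is improving.
for week, msgs in sorted(weeks.items()):
    print(week, round(positive_word_rate(msgs), 2))
```

In practice this single variable would be combined with call behaviour, movement patterns and other smartphone data on a longitudinal basis, as described above.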
Core personality

An individual’s core personality typically remains fairly stable over time, yet there may be some changes over the course of the average lifespan. For example, while an individual may have a tendency towards either introversion or extraversion throughout their lives, as they gain life experience and professional knowledge they usually become more assertive. “As you get older, you have more of a success story behind you and this might result in higher self-esteem,” explains Professor Montag. This is an area of deep interest to Professor Montag, and he plans to continue his research into the molecular roots of individual personality differences in future. “I will continue to dig deeper on the molecular level, which is clearly very basic, fundamental research,” he says. “On the other hand, I’m also very interested in practical issues, and trying to identify effective interventions for certain mental disorders.” A good example is encouraging people to wear a wrist watch or use an alarm clock in the bedroom instead of relying on their phone. While this may seem a relatively simple measure, Professor Montag says it can help prevent excessive smartphone use. “People who don’t wear a wrist watch usually get out their phones to get the time. Often they then notice they’ve received a message, fiddle around with their phone, then put it away 20 minutes later, still without finding out the time,” he outlines. By removing smartphones from the bedroom, people can boost both their sleep quality and quantity, while Professor Montag is also looking at other potential applications, including using data gathered in research to inform psychological counselling. “I’m pretty convinced that trait and state data from the Internet of Things can be meaningful as a guide to psychological counselling,” he says.
We try to pinpoint the areas on the genome which are of relevance for explaining individual differences in personality traits, such as being extraverted, open, or conscientious.
Molecular Psychology
Investigating the molecular (genetic) basis of individual differences in human behavior

Project Objectives
The aim of the Heisenberg Program is to enable outstanding scientists, who fulfill all the requirements for a long-term professorship, to prepare for a scientific leadership role and to work on further research topics during this time. In pursuing this goal, it is not always necessary to choose and implement project-based procedures. For this reason, unlike with other funding instruments, no summary of project descriptions and project results is required in the submission of applications or later in the final reports. Such information is therefore not provided in GEPRIS.
The position of Christian Montag is funded by a Heisenberg grant awarded to him by the German Research Foundation (MO 2363/3-2).
• Please visit link below for full details of the Molecular Psychology team. https://www.uni-ulm.de/en/in/psy-mp/team/our-team/
Project Coordinator
Professor Christian Montag
Heisenberg Professor
Institute of Psychology and Education
Center for Biomedical Research
Ulm University
Helmholtzstr. 8/1
89081 Ulm, Germany
T: +49 731 50 26550
E: email@example.com
W: https://www.uni-ulm.de/in/psy-mp/
W: https://www.researchgate.net/profile/Christian_Montag
Twitter: @ChrisMontag77

Dr Christian Montag
Dr Christian Montag is Heisenberg Professor for Molecular Psychology at Ulm University, Ulm, Germany. He is interested in the molecular genetics of personality and emotions. He combines molecular genetics with brain imaging techniques such as structural/functional MRI to better understand individual differences in human nature, including personality and Internet addiction. In addition, he conducts research in the fields of neuroeconomics and addiction, including new approaches from Psychoinformatics.
Reading brain signals for decoding speech

With elderly people set to form an ever-greater proportion of the population in future, more of us will live with the consequences of diseases that may paralyse some muscles and lead to function loss. Professor Nick Ramsey tells us about his team’s work in developing a brain-computer interface designed to restore function and help paralysed people communicate.

A stroke in the brain stem is one of the main causes of total paralysis, damaging the connections between the brain and the muscles, and so impairing an individual’s ability to communicate. A second major cause is amyotrophic lateral sclerosis (ALS), a neurological disease that affects motor neurons, nerve cells which control voluntary muscle movement. “The signals are intact in the brain, and it generates the impulses, but basically the wires to the body are no longer working,” explains Nick Ramsey, a Neuroscience Professor at the Brain Center of the University Medical Center of Utrecht. As the Principal Investigator of the iCONNECT project, Professor Ramsey is working to develop a brain-computer interface (BCI) that helps paralysed people communicate, building on earlier research into the brain. “We have been working with epileptic patients, who have electrodes implanted for their diagnosis. This gives us the opportunity to pursue basic research into how the brain works,” he outlines.
Brain signals

This research led to important insights into how to interpret the brain’s signals, from which the idea of working with implants to decode inner speech was developed. The foundation of this work is a detailed understanding of how signals are transmitted between the brain and muscles. “There are many muscles in your body, and they are all stimulated by a particular part of the brain, the primary motor cortex. That’s where the neurons reside, that get the pulses to the muscles,” says Professor Ramsey. Different parts of the body are organised in an orderly fashion, and their movements can be related to signals from specific parts of the brain. “The cortical homunculus starts in the middle at the top of the head, and it goes to either side of the body – where the left part of your brain is connected to the right side of your body, and the other way round,” continues Professor Ramsey. “If we look at one side of the sensorimotor cortex, we can delineate which part of the brain maps onto which part of the body.”
Implantation of the Utrecht Neural Prosthesis in a Locked in patient. (www.neuroprosthesis.eu)
Researchers in the iConnect project have produced evidence that the movement of different muscles in the face leads to different patterns on the cortex. So for example if an individual purses their lips or clenches their jaws, then researchers monitoring their brain activity would see a pattern, where very small patches of
cortex become active. “That pattern allows us to identify what kind of movement you’re making; we can even identify different spoken letters such as ‘p’ or ‘ah’. We’ve also proved that if you cannot make a movement – but try to – then you still get the same patterns on the cortex. This supports the idea that in cases of paralysis the brain is intact and the pulses are still generated, but they don’t actually arrive at the muscles,” says Professor Ramsey. This is central to the project’s work in developing an intracranial BCI, designed for use in the home. “We decided to first try and accomplish something relatively simple, but which really helps patients who are locked-in, who are unable to communicate,” outlines Professor Ramsey. The long-term, ambitious goal in this research is to interpret brain signals so accurately that it becomes possible to develop implants that translate attempted speech to a speech computer in real time, and implants that make muscle movements possible again. “We aim to offer a system to people with locked-in syndrome that will help them to communicate again,” explains Professor Ramsey. A core part of this work centres around developing an implant that will record brain signals, interpret them, and send the interpreted signals to another small computer. The second computer will then instruct the muscles to make particular movements, so that motor functionality can be restored in paralysed people. The system itself is relatively basic at this stage, enabling users to generate a click when they scroll through a drop-down menu and to communicate in that way. Researchers have achieved a high level of reliability, which Professor Ramsey says is very important in terms of moving the technology into the homes of people who need it. “Our success in achieving this constituted a major breakthrough for BCI, since for the first time a locked-in person could use the implant at home any time it is needed, without requiring expert help.” (www.neuroprosthesis.eu)

On the left the Utrecht Neural Prosthesis is shown. On the top right a 7 Tesla functional MRI scan of the 5 fingers of the right hand (thumb/orange to little finger/red). On the bottom right the electrode grids for the next generation BCI for decoding speech and gestures.

The iCONNECT team

iCONNECT
Intracranial COnnection with Neural Networks for Enabling Communication in Total paralysis

Project Objectives
This provides the foundation for further research into brain signals, opening up the possibility of adding functionality in future. The existing implant has four amplifiers, so it can record and transmit data from four electrodes on the brain; Professor Ramsey is now planning a study which he hopes will lead to deeper insights into brain signals. “As we get more electrodes, they get smaller and we can put them closer together, so we can look more deeply into the features of brain signals. It’s a constant process of improving the quality of the decoding, by building a deeper understanding of brain signals,” he outlines. A key goal in the near future will be to achieve what is called point-and-click functionality. “Currently users can generate a click, but they have to wait until the icon or letter of interest lights up, so it’s quite slow. If we can manage point-and-click, where a user can move the mouse by brain signals and generate a click, then that will be a major upgrade,” explains Professor Ramsey.

We’ve also proved that if you cannot make the movement – but try to – then you still get the same patterns on the cortex. This supports the idea that in cases of paralysis the brain is intact and the pulses are still generated, but they don’t actually arrive at the muscles.

Aging population

The backdrop to this research is Europe’s changing demographic profile, with the elderly set to account for a greater proportion of the population over the coming years. The risk of suffering a brain haemorrhage or stroke increases dramatically with age, and with life expectancy increasing, Professor Ramsey believes there will be significant demand for this technology in future. “If you suffer a stroke at the age of 70, you may still live for another 20 years, so we want to develop technology to help people live in a dignified way,” he says. There is also the possibility of helping people with less severe disabilities, widening the market for this technology, which is an important consideration for potential commercial partners looking to develop it further. “We must get better at decoding the brain’s signals if we are to also help people with less severe disabilities, which will expand the function-restoring capabilities of this technology. The more sophisticated the technology, the more people you can help,” says Professor Ramsey. This is a new area of research, as we are only just discovering how a person can be taught to use their brain signals to control a computer, and it also raises ethical questions. A patient at risk of locked-in syndrome may well want to make their own decisions about their future, based on their own quality of life, something of which Professor Ramsey is well aware. “We are aware of the need for an ethical framework around this kind of work,” he says. Professor Ramsey publishes widely in medical journals, aiming to raise awareness of the potential of BCI systems. “We publish as much as we can in medical journals, so that we can make neurologists aware that these patients actually report a good quality of life,” he continues.
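The ‘brain click’ described earlier can be illustrated with a minimal sketch: register a click when a control signal derived from brain activity stays above a threshold for long enough. This is not the project’s actual decoder; the signal values, threshold and dwell time are invented, and a real BCI derives its control signal from band power in the recorded cortical activity.

```python
# Minimal sketch of threshold-based 'brain click' detection: the kind of
# deliberately simple, reliable decoding described above. All numbers
# here are illustrative, not from the iCONNECT system.

THRESHOLD = 0.6   # control-signal level counted as an attempted movement
DWELL = 3         # consecutive samples required, to avoid spurious clicks

def detect_clicks(control_signal):
    """Return sample indices where a 'click' is registered."""
    clicks, run = [], 0
    for i, value in enumerate(control_signal):
        run = run + 1 if value > THRESHOLD else 0
        if run == DWELL:          # fire once per sustained activation
            clicks.append(i)
    return clicks

# Toy control signal: noise, one sustained activation, then noise again.
signal = [0.1, 0.2, 0.7, 0.1, 0.65, 0.8, 0.9, 0.7, 0.2, 0.1]
print(detect_clicks(signal))  # the sustained run starting at index 4 fires at index 6
```

Requiring a sustained activation rather than a single above-threshold sample is one simple way to trade a little speed for the reliability that matters in a home-use system.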
Many people suffer from partial or full loss of control over their body due to stroke, disease or trauma, and this will increase as elderly people account for a greater proportion of the population. With both duration and quality of life beyond 60 increasing in the western world, more and more people will suffer from the consequences of function loss, and will stand to benefit from the development of restorative technology. iCONNECT aims to give severely paralyzed people the means to communicate by merely imagining themselves talking or making hand gestures. Imagining specific movements generates spatiotemporal patterns of neuronal activity in the brain, as Professor Ramsey explains.
Funded by an ERC Advanced Grant (Systems and communication engineering).
Project Coordinator, Professor Nick F. Ramsey Brain Center Rudolf Magnus University Medical Center Utrecht Room G03 1.24 Huispostnummer G03 1.24 PO Box 85500 3508 GA UTRECHT T: +31 (0)88 755 6862 E: firstname.lastname@example.org W: http://www.nick-ramsey.eu
Professor Nick Ramsey
Nick Ramsey is full professor in cognitive neuroscience at UMC Utrecht’s department of neurology and neurosurgery, a position he has held since 2007. His primary goal is to acquire and translate neuro-scientific insights to patients with neurological and psychiatric disorders, with a focus on braincomputer interfacing.
Health.E Lighthouse illuminates a path to market for smart health innovation

Standardisation and collaboration in the electronic medical device industry are necessary for innovations to become commercially viable. This is the focus of work in the InForMed project and of projects within the Health.E Lighthouse initiative, such as ULIMPIA and POSITION.

Medical technology is changing in nature. It’s becoming smaller, smarter and even disposable. Future devices should be less invasive and easy to use, whether they are used at home or in a hospital. With a rapidly growing population that is living longer and healthcare institutions stretched to capacity, it is widely understood that healthcare innovation and smart medical device technology will come into its own as an answer to healthcare provision. Much of the smart technology we are expecting to see is miniaturised, so it’s versatile and mobile. Microelectronic and micromechanical devices of the future will be attached to the body or placed inside the body to take measurements and make an analysis, as opposed to the traditional large, fixed-in-position machines we see installed in hospitals. Microfabricated devices could revolutionise medical care with equipment like ultrasound, bioelectronic medicines, continuous monitoring, Organ-on-Chip and eHealth. Embracing the development of these new technologies would mean Europe securing its stake and market share in the medical equipment sector. However, doing so requires consideration not only of the technology that is possible but also of how such technology will be manufactured.

The numbers game

There is a very noticeable ‘elephant in the room’ that anyone involved in miniaturised or smart health technology innovation will understand. The problem is that the huge amount of technological innovation that is possible is not appearing in our homes and hospitals.
Too often, innovation remains a concept on paper and doesn’t make it past the first challenges toward a useable, beneficial technology. Whilst there is no shortage of ideas and research for this new pedigree of medical technology, there is a realisation that manufacturing it can fast
become impractical and overly expensive. This is one of the biggest challenges identified by Ronald Dekker at Philips Research. “At the moment, every university and every research institute is investing a lot of effort into research into advanced technological medical devices because it’s assumed everyone will see the necessity of that. However, if you comprehend the sheer amount of money that is going into smart healthcare and then you see what is reaching the market, in reality it is very, very little,” Dekker explains. There are significant challenges that need to be tackled for this vision of our healthcare future to take shape. Whilst smart healthcare technology is feasible, for it to become a reality depends on combining standard semiconductor manufacturing with materials like polymers, uncommon metals and proteins. These devices will use new packaging techniques involving advanced moulding, micro-fluidics and heterogeneous integration. Therefore, making these devices requires specialist knowledge and adherence to strict regulations. Manufacturing such devices can also be prohibitively expensive when starting from scratch, which is off-putting for start-ups, innovators and entrepreneurs. “Where microfabrication is involved, developing the basic underlying technology becomes very expensive, very quickly, and you can only afford to do it if you have enough turnover to justify those developments,” said Dekker. “In consumer markets there are those high volumes needed – for example, making microphones for mobile phones or accelerometer sensors. In markets with high consumer demand, those high volumes make it justifiable, and if you have a good business plan it is not so difficult to get the money to do that innovation. But if you compare that with the medical domain, the volumes for applications are relatively small. That’s why there is this challenge in the sector.” A significant problem is justifying relatively small volumes against a disproportionately expensive and technical production line. A new approach is needed.

To accelerate innovation in the medical domain, we should move toward open technology platforms, to share the platforms of technology among the many different end users. People can then use those open platforms to build innovative applications. If we can work together towards technology platforms that can be shared, innovators can put all their energy into thinking about the application.

“Moore for Medical”
Sharing the platforms, saving the costs

If a stumbling point for progress is the ‘how to’ of manufacturing and the large expense of creating a specialist pilot line, what’s needed is a pre-existing pilot line open to third parties, with the express purpose of helping to develop these kinds of innovations and to manufacture technologies that can be shared across different applications. The Health.E Lighthouse initiative, established by the ECSEL Joint Undertaking, ensures there is a broad scope of innovations that can benefit from standardisation of underlying technologies. It also helps with the extra complications beyond the technology itself, like legislation and IP management.
The Health.E Lighthouse carries forward insight gained during the InForMed project, where the initial focus for a demonstrator product was to develop a new kind of smart catheter that could measure the depth of ablation, for better treatment of heart arrhythmias. Whilst working on this, the team involved had a ‘light bulb’ moment about how the nature of innovation in the medical device sector needed to change. Dekker said: “We developed smart catheters but we thought about it and decided the way to go forward was to open these technologies and offer them to other companies, so they can use them for their products. This would generate volumes, making it feasible to do sustainable, continuous innovation. It was not an idea in our original plan and this kind of offer is something very new among medical device manufacturers; it’s not customary to do this in the medical domain. It can be hard to persuade people that this really is an open technology, but this is a way to bring innovation to the market.” As one of the biggest issues with development is generating enough volume to justify the cost of creating the pilot line, a good way to tackle this is to create open platforms and share the technology – meaning the pilot line will remain in use across various innovative projects that can all build on the same underlying technology. This shifts the uniqueness of a device from the technology inside it to the application and the design of the device around it. This is another way for innovation to speed up, making the processes leading to a product launch more efficient.
There were several demonstrator products the InForMed project focused on. For example, advanced devices for electrophysiology that make advanced drug safety testing available at earlier stages in a drug’s development, deep brain stimulation via minimally invasive neurosurgical therapy, a nano-electronic platform for detecting bacterial infections and smart body patches, amongst others. An example of the success and far-reaching potential of this approach is the project that is developing body patches that conform to and monitor the body. Whilst we are used to seeing technology that senses things on the surface of the body, like a pulse, with the arrival of affordable ultrasound devices we can create devices that can look inside the body.
NovioScan is the first company to launch an ultrasound smart patch product designed to help the young and elderly fight incontinence.
Minimally invasive interventions aided by smart catheters have drastically reduced hospitalisation, with improved outcomes for patients.
Novioscan’s SENS-U ultrasound body-patch supports people who do not sense a full bladder. (website: novioscan.nl)
Companies working with the InForMed pilot line have benefited through the development of their products, which in turn will reduce the strain on healthcare provision. For example, take the partner that created a smart dressing with integrated sensors and electronics to monitor the acidity, humidity and temperature of chronic wounds to detect infection. This dressing will reduce the need for regular inspections by hospital workers. Each of these innovations has long-term implications for transformation in healthcare provision and the results are tangible. “To accelerate innovation in the medical domain, we should move toward open technology platforms, to share the platforms of technology among the many different end users. People can then use those open platforms to build innovative applications. Rather than having ten different companies all developing their own patch technology with ultrasound, what we are trying to do with this is to develop a platform with a programmable device, where you can put one in or two in or three in, depending on what you want to do. If we can work together towards technology platforms that can be shared, innovators can put all their energy into thinking about the application,” Dekker explains. “For instance, you might have one company investigating bladder control and they have one ultrasound device that looks at the bladder. Another company, however, might want to carry out baby monitoring and so they need to image the baby in the best position to pinpoint the heart rate. In this case you might conclude you need to have multiple transducers in such a patch – but the technology platform is the same in both devices. The difference is in the design and also in the algorithms and software. This is another example where you are trying to move away from every company making its own solution, where you can have open technology platforms that are offered to multiple end users, and where the IP is shifting from this specific technology implementation to software algorithms and design.” The InForMed project and the Lighthouse initiative that are now underway essentially remove the barriers to manufacturing and offer a service to help innovators take the first steps to turn their inventions into viable businesses. The wider impact of this approach will be to accelerate innovation in a sector that has traditionally suffered from being unable to manufacture the devices that have been conceptualised.

Organ-on-Chip devices like this one developed in InForMed and produced by BI/OND will result in better and safer medicines (www.gobiond.com).
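The platform idea Dekker describes – one shared technology, configured differently per application – can be sketched in a few lines. This is purely illustrative: the class, parameters and application names below are invented, not taken from the projects themselves.

```python
from dataclasses import dataclass

# Illustrative sketch of the 'shared platform, different applications'
# idea: the underlying patch technology is identical, and only the
# configuration and application-level software differ. All names and
# numbers here are hypothetical.

@dataclass
class UltrasoundPatch:
    """One shared hardware platform, programmable per application."""
    transducers: int          # how many ultrasound transducers are fitted
    application: str

    def describe(self):
        return f"{self.application}: shared platform, {self.transducers} transducer(s)"

# Two hypothetical end users building products on the same platform.
bladder_monitor = UltrasoundPatch(transducers=1, application="bladder monitoring")
baby_monitor = UltrasoundPatch(transducers=3, application="fetal heart-rate monitoring")

for patch in (bladder_monitor, baby_monitor):
    print(patch.describe())
```

The point of the sketch is that the differentiating IP sits in the configuration and software around a common platform, rather than in each company reinventing the underlying device technology.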
A different approach to innovation
As InForMed draws to its conclusion, a series of European projects, grouped in the Health.E Lighthouse initiative, are now beginning. The foremost is the POSITION (ECSEL) project. Whereas in InForMed researchers realised that the infrastructure and manufacturing networks needed coordination to make medical devices, the POSITION project goes further, developing TRL8 platform technologies for the next generation of smart catheters and implants. It is much more the ‘open platform’ that was envisaged. The same applies to the ULIMPIA (PENTA) project, where an open technology platform will be developed for ultra-sound body patches. A supporting Lighthouse project is ORCHID (H2020), where a European roadmap for Organ-on-Chip is being defined, and in the future other projects will be added to the Lighthouse initiatives. Key to progress in the sector, the projects provide a way to accelerate innovation, facilitating methods that work. They pull designs off the drawing board and push them into production.
In the InForMed project (ECSEL) an ablation catheter has been developed that monitors the depth of the lesion during an ablation to treat heart arrhythmias.
The Flex-to-Rigid (F2R) miniature assembly platform is designed to bring complex electronic sensing functionality to the tip of smart catheters. (http://informed-project.eu/downloads/F2R.pdf)
Smart bandages and plasters monitor the condition of chronic wounds, eliminating unnecessary painful manual inspection.
Lighting the way for all industries
Open platforms have clear advantages for developing technology faster and getting it to market, and the Health.E Lighthouse initiative provides the means to advance and refine the methodologies. The Lighthouse concept was introduced by the ECSEL Joint Undertaking to signpost specific subjects of common European interest that call for coordinated activities. At the moment the Lighthouse initiatives include Health.E, Mobility.E and Industry4.E. Whilst Health.E stimulates the development of open technology platforms for medical devices and systems, Mobility.E is focusing on the deployment of zero-emission/zero-accident mobility systems for intelligent vehicles, and the Industry4.E initiative is working on platforms for ‘digitalisation’ of industry. There is no better way to justify the value of research into new technologies than bringing innovation to market. This can only be accomplished with improved methods such as those proposed by the Health.E Lighthouse initiative, which champions a coordinated approach to produce new technologies more consistently.
CMUT MEMS ultra-sound transducers like these, on the tip of a 2.5 mm diameter catheter, are key in the realisation of small and affordable ultra-sound products. (www.innovationservices.philips.com/cmut)
Health.E lighthouse Objectives
Health.E will stimulate the development of open technology platforms and standards for medical devices and systems, thereby moving away from the inflexible and costly point solutions that presently dominate electronic medical device manufacturing. Open technology platforms, supported by roadmaps, will generate the production volumes needed for sustained technology development, resulting in new and better solutions in the healthcare domain. In this way Health.E will accelerate innovation along the whole medical instrument supply chain, enabling “Moore for Medical”.
Health.E lighthouse projects
• InForMed: 39 partners in 10 countries, total budget M€ 58
• POSITION: 46 partners in 12 countries, total budget M€ 41
• ULIMPIA: 18 partners in 6 countries, total budget M€ 17
Ronald Dekker
Principal Scientist
Philips Research, System in Package Devices
High Tech Campus 4
5656 AE Eindhoven
The Netherlands
T: +31 40 2744255
E: email@example.com
InForMed project website: W: http://informed-project.eu/
POSITION project website: W: www.position-2.eu
ULIMPIA project website: W: www.ulimpia-project.eu
ORCHID project website: W: https://h2020-orchid.eu/
Ronald Dekker received his MSc in Electrical Engineering from the Technical University of Eindhoven and his PhD from the Technical University of Delft. He joined Philips Research in 1988, where he worked on the development of RF technologies for mobile communication. Since 2000 his focus has shifted to the integration of complex electronic sensor functionality on the tip of the smallest minimally invasive instruments, such as catheters and guide-wires. In 2007 he was appointed part-time professor at the Technical University of Delft with a focus on Organ-on-Chip and bioelectronic medicines. He has published in leading journals and conference proceedings and holds in excess of 50 patents.
New Developments In Medical Imaging Technology
Avoiding invasive surgery or guesswork to diagnose a problem inside the body has long been a priority for healthcare professionals. Since the discovery of the X-ray in 1895, we have benefited from methods such as Magnetic Resonance Imaging (MRI) and ultrasound. We’re now seeing a new wave of innovations that promise to dramatically advance medical scanning. By Richard Forsyth
Medical imaging is an area of healthcare provision that is being transformed by new advances. It’s an important step in patient treatment, as it facilitates fast diagnosis and helps with evidence-based decision-making and personalised treatment. It also minimises complications in surgery and gives healthcare providers a better understanding of diseases and conditions. The technologies that physicians utilise for scanning our skeletons and internal organs are about to shift up a gear in capability, as we are in an exciting era of development that is opening up new possibilities in terms of what we can see with imaging. Before we look at the new kinds of scanning technology, we should take time to recognise interesting improvements to existing technology, particularly in the way the imaging is used and what can be achieved when it is coupled with other types of advanced technology.
AI and imaging
The hottest topic in medical imaging has to be its splicing with Artificial
Intelligence (AI) applications. State-of-the-art machine learning software can evaluate, organise and recommend courses of action with the expertise of an experienced clinical assistant. Take Agfa’s integration of IBM Watson: it can review X-ray images and the imaging order to determine serious health issues, highlight prior examinations, differentiate between the less and more relevant parts of a patient’s medical history and establish which drugs are being taken. It can identify the right kind of information to point a doctor to the relevant courses of action and solutions. This kind of assistance makes workflow much more efficient and helps assure accuracy. Whilst there is a lot of data available for many patients, AI can greatly speed up the work of overworked clinicians by identifying and presenting the most relevant information from images and associated data at the right time. Mixing big data, health analytics and imaging is a powerful alchemy for healthcare, and will show how eHealth really can connect, in an instant, dots that people would take far longer to join.
New approaches
The first MRI scan, way back in 1977, took five hours. Today a scan takes anywhere between 15 and 90 minutes, depending on the parts being scanned and the number of images that need to be taken. Whilst MRI is totally painless, for a patient with a potentially serious medical problem an MRI scan can be a daunting and challenging experience. Many find it an oppressive process: it can be noisy, and you need to keep perfectly still as you are inserted into a large cylindrical drum. These fears can have a financial impact too, if, for example, examinations have to be scrapped or repeated. But MRI technology is changing, becoming faster and also less intimidating for patients. A company that is refining and improving both 3D imaging and the patient experience of MRI scans is Philips. One of the company’s latest developments is an MRI scanner that can carry out examinations 50% faster than the norm. It uses newly designed user interfaces, new patient-sensing technologies and Philips’ SmartExam AI-driven analytics for automatic planning, scanning and processing of exams. But it’s not just the machine itself that is developing; it’s the human-machine interface. As some patients find MRI scans frightening, they have created an ambient audio-visual experience to calm patients and guide them through the scan. This includes relaxing imagery – of waves on a beach, for example – and a bar that shows progress during the exam. It’s a relatively simple addition, but it has had great impact. A study at Herlev Gentofte University Hospital in Denmark shows that Philips’ Ambient Experience in-bore Connect solution helped reduce the number of rescans by up to 70% – a significant improvement, based purely on making the experience of the scan itself more manageable for the patient.
Photograph courtesy of Philips.
Another new approach to medical imaging is seen in a method pioneered at the University of Cambridge in the UK that uses an algorithm and 3D images to monitor the joints of patients with arthritis, the results of which were published in Scientific Reports. The technique, called Joint Space Mapping (JSM), was developed by Dr Tom Turmezei and his colleagues using images from a standard computerised tomography (CT) scan, which is not normally used to monitor joints. The detailed 3D images identify changes in the spaces between the bones of the joints under study. It was found to be twice as effective in detecting small structural changes compared to X-rays.
“Using this technique, we’ll hopefully be able to identify osteoarthritis earlier, and look at potential treatment before it becomes debilitating,” Dr Turmezei said. “We’ve shown that this technique could be a valuable tool for the analysis of arthritis, in both clinical and research settings. When combined with 3D statistical analysis, it could also be used to speed up the development of new treatments.”
Print out body parts
Accurate imaging of internal structures from MRI and CT scans is also leading to new opportunities in healthcare when combined with 3D printing technology. It’s now possible to tailor-make model replacement parts for the body, such as bone, muscle and cartilage. The breakthrough was published in Nature Biotechnology, where living tissues were used to repair the bodies of animals. The Integrated Tissue and Organ Printing System (Itop) mixes a biodegradable plastic with a gel that contains cells. When the structure was implanted in animals, the plastic broke down and was replaced with a natural structure made up of proteins produced by the cells, allowing blood vessels and nerves to grow into the implants. This presents a potential new chapter of bodily reconstruction that relies on precise 3D imaging in conjunction with 3D printing.
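The core idea behind joint space mapping, measuring the gap between opposing bone surfaces in a 3D scan, can be illustrated with a toy calculation. This is not the published JSM pipeline (which involves segmentation and 3D statistical shape analysis): the surfaces, point counts and the “narrowed” region below are all invented for illustration.

```python
# A much-simplified illustration of the idea behind joint space mapping:
# measure, point by point, the distance between two opposing bone surfaces
# extracted from a 3D scan. The "bone surfaces" here are toy data, not
# real segmentations.
import numpy as np

def joint_space_widths(surface_a: np.ndarray, surface_b: np.ndarray) -> np.ndarray:
    """For each point on surface A, distance to the nearest point on B (mm)."""
    # Pairwise difference vectors between the two point clouds,
    # shape (len(a), len(b), 3), then their Euclidean lengths.
    diffs = surface_a[:, None, :] - surface_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)

# Toy surfaces: two roughly parallel sheets 2 mm apart, with one region
# narrowed to 1.2 mm as a stand-in for an arthritis-like change.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(500, 2))
top = np.column_stack([xy, np.full(500, 2.0)])
top[xy[:, 0] > 7, 2] = 1.2          # locally narrowed joint space
bottom = np.column_stack([xy, np.zeros(500)])

widths = joint_space_widths(top, bottom)
print(f"median width {np.median(widths):.2f} mm, min {widths.min():.2f} mm")
```

Mapping a width at every surface point, rather than reporting a single average gap, is what lets a technique of this kind pick up small, localised structural changes, the sort of change the Cambridge team reported detecting twice as effectively as with X-rays.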
New devices
A great opportunity for versatile imaging without the need for a large fixed MRI scanner lies in ultrasound. Already there are systems in development that will give significantly more flexibility in using ultrasound for diagnostic purposes. Science Daily recently reported a prototype all-optical ultrasound imager that demonstrates video-rate, real-time 2D imaging of biological tissue. This is believed to be a first step towards making all-optical ultrasound practical for clinical use. A very exciting step for the industry can be witnessed in quite extraordinary new technology that marks a milestone in imaging of the body. A company called MARS Bioimaging Ltd.
in New Zealand has successfully generated the first 3D colour X-ray images of the human body with a medical scanner. The device has been in development for a decade and relies on CERN’s Medipix technology, tracking individual sub-atomic particles to produce high-resolution images. A version of the technology will be used to study cancer and vascular diseases. A licensing agreement between CERN and MARS will mean that this becomes a commercial device. Phil Butler, who was partly responsible for developing the scanner, has said: “Its small pixels and accurate energy resolution mean that this new imaging tool is able to get images that no other imaging tool can achieve.”
Micro-scale imaging
Imaging can go further than simply taking a peek under the skin to find broken bones. A team of researchers at Purdue University in the US has developed what is being referred to as a ‘super-resolution nanoscope’, which shows a 3D view of the molecules in the brain at a resolution ten times higher than anything prior to this imaging tool. It is seen as a way to study the plaques that form in the brain, to pinpoint, for example, the origins of Alzheimer’s disease. As Gary Landreth, Professor of Anatomy and Cell Biology at the Indiana University School of Medicine’s Stark Neurosciences Research Institute, said: “It gives insight into the biological causes of the disease, so that we can stop the formation of these damaging structures in the brain.” Another breakthrough in the US, this time at the University of Illinois, is a scanner that can image living tissue in real time in molecular detail, without needing dyes or chemicals for the scan to be effective. This scanner gives cancer researchers a means to track the progression of tumours and is a useful tool for tissue pathology. It uses timed pulses of light to image simultaneously at multiple wavelengths. The method, covered in the journal Nature Communications, has been called simultaneous label-free autofluorescence multiharmonic microscopy.
Dr Stephen Boppart, the professor of bioengineering pioneering the technique, stated: “The way we have been removing, processing and staining tissue for diagnosing diseases has been practiced the same way for over a century. With advances in microscopy techniques such as ours, we hope to change the way we detect, visualise and monitor diseases that will lead to better diagnosis, treatments and outcomes.” The imaging technique is expected to complement or even replace standard histopathology processing, which is time-intensive and is carried out on removed, dead tissue. All around the world, imaging techniques are advancing to assist physicians in seeing with their own eyes the nature of physical ailments that are hidden in the darkness of our bodies. For diagnosis, imaging will continue to be the most invaluable tool available. The amount of information that we can glean from such scans is nothing short of astounding. The cumbersome, time- and labour-heavy nature of imaging will improve, and when paired with software that can help with diagnosis, we’ll be able to find, track and predict internal processes effortlessly, in order to manage healthcare issues in a streamlined and highly effective way.
Scans courtesy of MARS imaging.
Host-pathogen interplay in the limelight
Parasite vs Host
Optogenetics is established as a valuable tool in neurobiology; now researchers at Humboldt University in Berlin are striving to apply it in infection biology. This work will shed new light on intracellular pathogens, giving researchers new insights into pathogen-host interactions, as Dr Nishith Gupta explains.
The development of
optogenetics has opened up new possibilities in biological research, enabling scientists to use light to selectively turn specific cellular pathways on or off, and then assess the physiological response. However, while optogenetics has been widely applied in neurosciences, this is not yet the case in infection biology, something which Dr Nishith Gupta and his colleagues at Humboldt University in Berlin aim to change. “Having pioneered the applications of optogenetics, our goal now is to consolidate them for investigating the biology of intracellular pathogens and pathogen-host interactions,” he says. This research involves the application of light to activate or deactivate chosen cellular pathways. “We do this by deploying light-regulated proteins in transgenic cells,” explains Dr Gupta. “We engineer photoresponsive parasites or host cells, and switch the system by illuminating them with LEDs of specific colours, and then study the resulting phenomena.” With a similar approach involving compatible gene-encoded biosensors, Dr Gupta’s group can also monitor the dynamic oscillations of the subcellular signalling mediators and metabolites.
Images show activation of cyclic GMP and cyclic AMP signalling in Toxoplasma gondii by light exposure, which in turn causes the parasites to exit host cells (top panel) or differentiate into the dormant stage (bottom panel).
Illuminating parasites
Dr Gupta’s team is applying this approach to investigate three parasites in particular – Toxoplasma, Plasmodium and Eimeria – all belonging to the protozoan phylum Apicomplexa. These unicellular parasites are obligate intracellular pathogens, and hence cannot survive without a host cell. Dr Gupta is looking at parasite-host interactions in a molecular context. “We have the host cell, and we have a parasite living inside that host cell. It is a complex intertwined system, where the challenge is to regulate
both entities independently of each other. We’re investigating how the parasite co-opts, subverts, or modulates, the host cell, and also how it uses its own metabolic designs and signalling pathways to survive,” he outlines. Some parasites, such as Toxoplasma, can cause either acute or chronic infection. “In acute infection, a parasite multiplies and kills the tissue,” says Dr Gupta. “In chronic infection, the parasite hides within infected host cells, and the immune response cannot see it, so the parasite can live for a long time waiting for the right moment - that is, a decline in host immunity, when it can switch back to the acute stage.” The goal for the parasite is to replicate, or to undergo a dormant stage, so that it can survive for long periods before transmitting further. When a host organism is infected, the parasite has to adapt to a changing environment to ensure its continued existence. “It’s constantly responding to environmental cues, and manipulating host-cell machinery,” explains Dr Gupta. The strategic survival of the parasite is a topic of great interest to Dr Gupta, and lies at the heart of his group’s research. “We would like to understand the molecular mechanisms for the reproduction,
adaptation, persistence and transmission of the parasite,” he continues. “Once we know them better, we can then look to target the underlying pathways to prevent or cure infections.” A central topic of the research at this point is the acute cycle of the parasite, which typically involves invading a host cell, replicating inside it, and then exiting it by lysis. Calcium-, cyclic GMP-, cyclic AMP- and lipid-mediated signalling play major roles in controlling the acute cycle. “Using optogenetic tools, we can activate or repress cyclic AMP or cyclic GMP signalling by light. Likewise, we can monitor calcium, cyclic AMP, cyclic GMP or lipids using gene-encoded biosensors,” says Dr Gupta. Such light-sensitive proteins can be engineered in a parasite (or host cell), from which more can be learnt about pathogen-host interactions. “We activate signalling by light, and then look at which proteins downstream are (de)phosphorylated, and/or which genes are modulated. Once we’ve identified those mediators, we can then knock them out and see what happens to the parasite. The optogenetic approach becomes even more enlightening when combined with reverse genetics and pharmacological modulators,” adds Dr Gupta.
Bridging with cancer biology
The group is also involved in other strands of research, including exploring parallels between parasites and cancer cells. While parasitology and cancer biology are typically thought of as entirely separate areas of research, Dr Gupta says a convergence between them can be drawn. “When we look at which metabolic and immune pathways contribute to the growth of cancer cells, there are a lot of conceptual similarities between parasites and cancer cells. We would like to decipher these equivalences in molecular terms,” he states. There are two ways to discern this, the first of which involves thinking of cancer cells as a kind of new parasite species. “They also eat up your body, inside-out; they make use of certain metabolic pathways to fuel their growth; and they have developed ingenious strategies to dodge the immune system,” explains Dr Gupta. “Alternatively, we can think of parasites as being the natural victim of cancer.” A parasite’s ultimate aim is to survive and reproduce, similarly to cancer cells. The point where the comparison seems to fall down is in inter-host transmission, yet Dr Gupta says that recent research clearly shows that some forms of cancer have in fact evolved to transmit between hosts. “A series of very interesting high-profile original articles have been published over the last decade, and 3-4 cancer types that can transmit from ‘infected’ to healthy individuals have been found in nature,” he describes. The scenario is intriguing and frightening at the same time. The parallels between cancer and parasite biology will be an important part of Dr Gupta’s agenda in future, work which has important implications, from the laboratory bench to the hospital bedside.
The research group of Dr Nishith Gupta continues to play a prominent role in expanding our understanding of intracellular parasitism and beyond. He believes there is a pressing need to introduce innovative technologies into parasitology and bridge it with other disciplines.
Going translational
While Dr Gupta continues to pursue further basic research in the aforementioned areas, his team is also seeking translational applications, which range from diagnostics to treatment and prevention. In the course of their research, Dr Gupta and his colleagues sometimes find that a particular enzyme or metabolite is used by a parasite but absent in humans, which could open up new possibilities in diagnosis. “We could potentially use those enzymes or metabolites as biomarkers to detect parasitic infections,” he says. Another translational possibility arises in cases where a parasite cannot survive without a particular gene product essential for reproduction, which would provide an excellent drug target. “We can collaborate with chemists to synthesise certain chemicals that can selectively inhibit that essential protein and thereby kill the parasite,” continues Dr Gupta. “Another aspect is prevention and prophylaxis. We can construct parasite mutants that are heavily attenuated in their growth and virulence but potent enough in eliciting the immune response against challenging infections, so we can generate metabolically-attenuated vaccines.” Future clinical developments will be based on this kind of fundamental research.
Optobiology in Infection
Exposing pathogen-host intimacy by light
Project Objectives
Dr Gupta’s group studies the survival strategies of a eukaryotic cell inhabiting another eukaryotic cell. Specifically, he has been investigating the metabolic interactions between intracellular parasites (namely Toxoplasma, Eimeria, Plasmodium) and mammalian host cells. The main objectives are to reveal the metabolic determinants and signalling mediators that underlie successful reproduction, adaptation and pathogenesis of these pathogens. His group has also pioneered the application of optogenetics in entwined models, particularly to study the cyclic nucleotides and calcium signalling in intracellular pathogens.
• German Research Foundation (DFG)
• Helmholtz Foundation, Germany
• National Institute of Health, USA
• Novartis Pharmaceuticals, Switzerland
• European Society of Clinical Microbiology and Infectious Diseases (ESCMID)
• European Molecular Biology Organization (EMBO)
• Boehringer Ingelheim Foundation, Germany
• German Academic Exchange Service (DAAD)
• Peter Hegemann, Humboldt University, Germany
• Sergio Grinstein, University of Toronto, Canada
• Bang Shen, Huazhong Agricultural University, Wuhan, China
• Takeharu Nagai, Osaka University, Japan
Nishith Gupta, PhD, DSc
Institute of Biology, Faculty of Life Sciences
Humboldt University, Berlin, Germany
T: +49 30 20936404
E: Gupta.Nishith@hu-berlin.de
W: https://orcid.org/0000-0003-1452-5126
Selected References
(1) Arroyo-Olarte RD, Thurow L, Kozjak-Pavlovic V, Gupta N (2018) Illuminating pathogen-host intimacy through optogenetics (Pearl article). PLoS Pathogens, 14(7): e1007046
(2) Kuchipudi A, Arroyo-Olarte RD, Hoffmann F, Brinkmann V, Gupta N (2016) Optogenetic monitoring of the parasite calcium identifies a phosphatidylthreonine-regulated ion homeostasis in Toxoplasma gondii. Microbial Cell, 3(5): 215-23
(3) Hartmann A, Arroyo-Olarte RD, Imkeller K, Hegemann P, Lucius R, Gupta N (2013) Optogenetic modulation of an adenylate cyclase in Toxoplasma gondii demonstrates a requirement of parasite cAMP for host-cell invasion and stage differentiation. Journal of Biological Chemistry, 288(19): 13705-17
Dr Nishith Gupta
Dr Gupta is currently working as a Heisenberg Research Group Leader in Berlin (Germany). He obtained his MSc in Biotechnology from Banaras Hindu University (India), and subsequently completed his PhD in Biochemistry at the University of Leipzig (Germany). He then undertook postdoctoral training at the National Jewish Medical Research Centre in Denver (USA), followed by Habilitation in Biochemistry at the Humboldt University of Berlin.
Revolutionising chemotherapy monitoring
The dose of chemotherapeutic drugs is calculated on the basis of the patient’s body surface area, without considering their metabolism, making it difficult to assess the concentration of the drugs in blood. Silke Krol, PhD, tells us about the DiaChemo project’s work in developing a point-of-care device that will enable clinicians to monitor the concentration of drugs in blood during treatment.
The drug dose for a patient undergoing chemotherapy is typically calculated on the basis of their body surface area, without taking their metabolism into account. This approach has significant drawbacks which can limit the effectiveness of treatment, as Silke Krol, PhD, explains. “A patient who is tall and thin is likely to have a completely different metabolism to someone who is small and fat, yet they may get the same drug dose. This means that one person is likely to receive a toxic dose while the other is perhaps underdosed, and so the tumour will not be treated effectively,” she points out. This is an issue Krol and her colleagues in the DiaChemo project are addressing by developing a point-of-care device to monitor the concentration of chemotherapy drugs in blood during treatment. “Toxic concentrations are quickly visible in the patient. The higher risk is undertreating the patient, where the concentration of the drug in blood never reaches the level required to treat the tumour effectively,” she explains.
DiaChemo project
This is central to the motivation behind the DiaChemo project, an EU-funded initiative which brings together partners from across Europe. The idea behind the project is to develop a hand-held device which is able to monitor and measure, within a maximum
period of 30 minutes, the concentration of different chemotherapeutics in blood. “We are trying to measure the actual concentration of a drug in blood during treatment, in order to ensure that the concentration is always in the therapeutic window. So, this means higher than the lowest necessary concentration, and lower than the toxic concentration,” outlines Dr Krol. The device is based on a type of platform technology, a modular system. “The idea is to have a very simplified system, in which we will be able to measure concentrations of doxorubicin and irinotecan for example, two drugs commonly used in chemotherapy,” continues Dr Krol. “These drugs have specific properties, like fluorescence or light absorption, which allow us to have a very simplified, cheap, easy and fast system for the read-outs.” Another module of the device includes a type of extraction system, in which the drug molecules or their metabolites are selectively bound to nanoparticles, one for the drug itself, and others for the different metabolites. By measuring the concentration of the bound molecules on the different nanoparticles, researchers can then determine the ratio between the drugs and the metabolites. “This gives us an idea about the metabolism of the patient, and also the drug concentration,” outlines Dr Krol. The modular structure of the device means it can be used to measure the concentrations of different drugs in the same sample, an important point given that chemotherapy treatment often involves combinations of several drugs. “We are developing separate approaches for each drug, with nanoparticles which are selective for different drugs. By using a microfluidic read-out system, we are able to detect different drugs in the same blood sample in parallel,” explains Dr Krol. A system developed to measure the concentration of an antibody will not be able
to detect the concentration of irinotecan for example. However, with different modules in the device, Dr Krol says it will be possible to measure the concentration of irinotecan in the presence of the antibodies. “In one channel, we can have the detection system and read-out system for irinotecan and so determine the concentration, and in the next one we can determine the concentration of an antibody,” she says. Attention in the project has been focused on four different drugs commonly used in chemotherapy, yet the device could be applied to measure other drugs. “We aim to develop a kind of platform technology, so that we can measure the concentrations of different drugs in a very easy and timely manner,” says Dr Krol. “It’s extremely important to have a clear idea about the free drug concentration in the blood.” This could allow clinicians to adjust the therapy to the needs of a particular patient, which will help improve outcomes and improve efficiency in healthcare systems. At the moment, somewhere between 30-60 percent of the drugs administered to patients don’t have the desired effects; Dr Krol says the DiaChemo device can help medical doctors to adapt the therapy to the individual patient. “With this device you will be able to identify, at a very early stage, those patients who cannot be treated with a specific drug, maybe because their metabolism is so fast that the toxic concentration is never reached. In that case you can switch to another drug, which means reduced side-effects for the patient, and lower costs for the health system,” she outlines. “This system will allow
us to both optimise the outcome of therapy, reducing toxic effects on patients, and also to ensure that drugs are used more effectively. With this system you can see very quickly if a patient is ‘treatable’ with a particular type of drug.”
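The decision logic Dr Krol describes – flagging patients whose free drug concentration never reaches an effective level, or exceeds a toxic one – amounts to comparing a point-of-care reading against a therapeutic window. A minimal sketch of that comparison (the function name and all numeric thresholds are hypothetical, for illustration only, not clinical values):

```python
def classify_free_drug_level(concentration_ng_ml, window):
    """Classify a measured free drug concentration against a therapeutic window.

    window is a (lower, upper) tuple in ng/mL; both bounds are illustrative.
    """
    lower, upper = window
    if concentration_ng_ml < lower:
        return "subtherapeutic"   # effective level may never be reached
    if concentration_ng_ml > upper:
        return "toxic"            # risk of severe side-effects
    return "within window"

# Hypothetical window, purely for illustration.
IRINOTECAN_WINDOW = (10.0, 100.0)

for reading in (4.2, 55.0, 180.0):
    print(reading, classify_free_drug_level(reading, IRINOTECAN_WINDOW))
```

In a multi-channel device, the same check would simply run once per channel, each with its own drug-specific window.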
Drug monitoring The potential applications of this device are not limited just to measuring concentrations of chemotherapeutic drugs, as this topic is of interest across many areas of medicine. While attention in the project has mainly been focused on developing a device to measure chemotherapeutic drugs, Dr Krol says their research also holds wider relevance. “We are aiming to develop a modular system and platform technologies. With some relatively minor modifications, such as developing different types of nanoparticles or adding other modules for more sophisticated measurements, we can adjust the whole system to measure other drugs, not just chemotherapeutic drugs,” she outlines. Two prototypes will be developed and validated by the clinical partners of the project, after which researchers will look towards the application of the device. The clinical partners represent the two end-users of the device – a children’s cancer hospital and a hospital treating adults – and Dr Krol says there is a lot of interest in the project’s work. “There is high demand for this type of device. I work in a hospital, and when I speak to medical doctors, the questions are always about the progress of development and when the device can be utilized by clinicians,” she stresses.

The DiaChemo team.
DiaChemo Point-of-care microfluidic device for quantification of chemotherapeutic drugs in small body fluid samples by highly selective nanoparticle extraction and liquid crystal detection Project Objectives
We propose to develop a point-of-care device for quantification of chemotherapeutic drugs in small body fluid samples by highly selective nanoparticle extraction and liquid crystal detection, incorporated in a microfluidic lab-on-a-chip device (optofluidics-based), allowing real-time drug monitoring. This will improve therapeutic outcomes and reduce healthcare costs.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 633635.
• Please see project website for full partner information.
Fondazione I.R.C.C.S. Istituto Neurologico Carlo Besta, via Amadeo 42, 20133 Milan, and I.R.C.C.S. Istituto Oncologico Giovanni Paolo II, Viale Orazio Flacco, 65, 70124 Bari, Italy T: +39 02 574 89 838 E: firstname.lastname@example.org W: http://www.diachemo.eu Dr Silke Krol, Ph.D
Dr Silke Krol is a Group Leader at the Istituto Tumori “Giovanni Paolo II” in Italy. She has helped establish a new nanomedicine centre for translational cancer research, while she also works in developing new point-of-care devices for the early detection of lung cancer.
Integrated photonic biosensors for superior blood diagnostics at the point of care.
The future of CMOS-based PIC technology with plasmonics Researchers in the PLASMOfab project are leveraging plasmonics to co-develop extraordinary photonic components and electronics in a single manufacturing process. This could not only open up new functionalities and opportunities in the medical, communications and consumer sectors, but also help to reduce the cost of manufacturing photonic components, as Dr Dimitris Tsiokos explains. The demand for photonic components for use across a wide variety of devices continues to rise, yet mass manufacturing photonic integrated circuits (PICs) remains a major challenge. While over the last three decades the development of innovative technologies has supported the growth of the electronics industry, there is not yet an effective integration platform that can support mass manufacturing of multifunctional photonic components, a topic central to the work of the PLASMOfab project. “We need to achieve something similar in photonics to what’s happened with electronics. At the same time, we want to effectively merge photonics and electronics, so that we can look to reap the benefits of both technologies,” outlines Dr Dimitris Tsiokos from the Aristotle University of Thessaloniki, the coordinating partner of the project. The motivation behind the project’s work is to develop technologies that will enhance the performance of photonic components and systems. “We’re developing new integration technology that will exploit plasmonics at the crossroads of photonics and electronics in a common manufacturing line. This will, in parallel, reduce the cost of manufacturing multi-functional, high-performance devices,” explains Dr Tsiokos. Integrated approach This more integrated approach is built on optical and electrical nanostructures that can be used to manufacture both electronic and photonic components in a single process.
Plasmonic modulator integrated with electronics in a 100 Gbps optical data transmitter.
This is designed to be compatible with CMOS, the basic process used to manufacture most commercial electronics. “The majority of the chips in laptops, mobile phones and other commercially available electronic appliances, use the CMOS process. We want to develop a technology that will eventually be adopted by CMOS manufacturing lines,” says Dr Tsiokos. Combining plasmonics, photonics and electronics will extend the frontiers of integration technology, also widening the functional portfolio of these circuits, says Dr Tsiokos. “On the one hand we open up new functionalities and we boost performance by using plasmonics, in combination with the more commonly used photonics and electronics circuits. At the same time, we also use CMOS-compatible, mass manufacturing equipment, to reduce the cost,” he outlines. “We want to increase the functionality and at the same time reduce the cost. We’re trying to demonstrate this by using plasmonics as a bridging technology, that can effectively complement photonics and electronics.” The underlying principle here is that
plasmonic waveguides can confine light into very small dimensions (nano-scale), even smaller than photonic waveguides can, giving them unique light-matter interaction capabilities and chip-scale functionalities. This means the dimensions of optical components can be further reduced, an important issue when increased chip integration density and energy efficiency are targeted. “In some cases, plasmonic waveguides may even perform a dual function and simultaneously carry both optical and electrical signals, giving rise to exciting new capabilities,” says Dr Tsiokos. “We aim to show that PLASMOfab technology can be used to increase the rate at which information is generated and transferred in cables, computer boards or even within processor chips, at low power and low cost.” This represents quite a radical approach to developing photonics and electronics devices, yet at the same time Dr Tsiokos is keen to stress that it does not require significant investment. Millions have been invested over the last few decades in new electronics manufacturing technology; Dr Tsiokos says the project aims to build on these foundations. “We want to make sure that we can use already existing facilities, to advance photonics in parallel with electronics,” he explains. Researchers are working to demonstrate how this integration technology can improve two main market applications. “One is medical diagnostics, and the other is optical communications for data centers, both growing markets,” continues Dr Tsiokos. “We chose very basic components to demonstrate
how performance in these applications can be enhanced using already established fabrication technology. For the data communication application, we have an electro-optic transmitter, which is a fundamental component used in optical communications – for example in data centres – to inter-connect two computers or servers. The faster these computers can communicate, the more highbandwidth applications can be served, like big data for example.” A second application researchers are exploring is how this new technology can be used to fabricate a bio-sensing chip, to be used in point-of-care applications
and for on-the-spot molecular diagnostics. Researchers will look at how to combine different technologies like photonic circuits, plasmonic sensors, microfluidics and on-chip chemistry to demonstrate a fully functional biosensor for the instant detection of specific biomarkers currently used in diagnostics. “These biomarkers are used to identify certain diseases and assess an individual’s medical status. So, we are looking at using our biosensor to detect and quantify very basic inflammation biomarkers for example,” outlines Dr Tsiokos. Once this has been demonstrated, it may in principle be possible to monitor multiple biomarkers, leading to real-time decoding of complicated biological and medical conditions. The project recently entered the final year of its funding term, and Dr Tsiokos says great progress has been made. “We have already completed the basic technology development, meaning investigations into the materials, as well as the design, fabrication, and development of the photonic integration technology,” he continues. “Now we’re focusing very intensively on assembling and demonstrating our prototypes – the transmitter and the biosensor. By the end of the project, we aim to demonstrate a fully-integrated transmitter, operating at a speed greater than 100 gigabits per second per optical channel, as well as a lab-on-chip diagnostic for reliable and instant detection of critical inflammation biomarkers, even at ultra-low densities.”

We’re developing new photonic integration technology that will exploit plasmonics, photonics and electronics in a common manufacturing line. This will unleash unprecedented functional benefits in health and ICT applications while reducing the cost of manufacturing.
This would have a significant impact on the performance of photonic components and systems, opening up new technological possibilities across different everyday applications. However, while fully aware of the wider picture, Dr Tsiokos says the project is focused on exploitable research, and on laying the foundations for further development in future. “We want to pave the way and to identify the right direction. On the one hand we want to open up new functionalities by using plasmonics in combination with the more commonly used photonics and electronics circuits,” he says. “At the same time, we also want to use CMOS-compatible, mass manufacturing equipment, to reduce the cost and maximize technology exploitation. So we want to increase the functionality and at the same time reduce the cost.”

Plasmonics in CMOS foundries.
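To put the 100 gigabit-per-second-per-channel target into perspective, a back-of-the-envelope calculation helps (the channel count and dataset size below are arbitrary examples for illustration, not project specifications):

```python
LINE_RATE_GBPS = 100        # per-channel rate targeted by the project
channels = 4                # hypothetical number of optical channels
dataset_bits = 1e12 * 8     # a 1 TB dataset, expressed in bits

# Aggregate throughput and the time to move the dataset between two servers.
aggregate_gbps = LINE_RATE_GBPS * channels
seconds = dataset_bits / (aggregate_gbps * 1e9)
print(f"{aggregate_gbps} Gbps aggregate, {seconds:.0f} s to move 1 TB")
```

At four such channels, a terabyte moves between servers in tens of seconds rather than minutes – the kind of headroom big-data applications in data centres depend on.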
PLASMOfab A generic CMOS-compatible platform for co-integrated plasmonics/photonics/ electronics PICs towards volume manufacturing of low energy, small size and high performance photonic devices
PLASMOfab aims to develop CMOS-compatible plasmonics as the means to effectively consolidate photonic and electronic integration. Wafer-scale integration will be used to demonstrate volume manufacturing of low-cost, powerful PICs. The new integration technology leverages plasmonics to unlock a series of innovations with unmatched benefits in biosensing and electro-optic transmitters.
The PLASMOfab project is funded under the Photonics Public Private Partnership and the European Commission Horizon 2020 framework with Grant Number 688166.
• Aristotle University of Thessaloniki (GR) • Universite de Bourgogne (F) • Swiss Federal Institute of Technology in Zurich (CH) • AMO GmbH (DE) • ams AG (A) • Micram GmbH (DE) • Saarland University (DE) • Mellanox Technologies (IL) • PhoeniX BV (N) • AIT Austrian Institute of Technology GmbH (A)
Project coordination team Prof. Nikos Pleros Dr Dimitris Tsiokos
Dimitris Tsiokos, PhD Senior Research Fellow PhosNET Research Group Aristotle University of Thessaloniki Center for Interdisciplinary Research and Innovation Balkan Center - Building A 10th Km Thessalonikis-Thermis Av, 57001 GREECE T: +30 2310 990590 E: email@example.com W: http://www.plasmofab.eu Dr Dimitris Tsiokos, PhD is a Principal Researcher at the Aristotle University of Thessaloniki in Greece. He previously held various research positions in Greece and a visiting researcher position at the University of Wisconsin, USA. His research interests focus on photonic integrated components and systems for optical sensors and optical interconnects.
A new age in drug development Improving treatment of neurodegenerative diseases is among the biggest challenges facing modern medicine, yet it is difficult to assess the effectiveness of therapeutic interventions. We spoke to Professor Roland Wolf and Dr Colin Henderson about their work in developing a next-generation platform to help scientists monitor the progression of neurodegenerative disease and identify effective therapies. The development of therapies to combat neurodegenerative diseases is widely recognised as a research priority, with conditions like Alzheimer’s and Parkinson’s set to place an ever-heavier burden on healthcare systems in future. Current treatments are limited in their impact however, while it is difficult to assess the effectiveness of therapeutic interventions, issues central to the work of the New Age project. “One goal of our project is to develop biomarkers that can reflect the progression of a disease at a much earlier timepoint, and give an earlier and more definitive read-out on the efficacy of any therapeutic intervention,” says Professor Roland Wolf, the project’s Principal Investigator. A core aim in the project is to evaluate how informative different stress pathways are as early biomarkers of degenerative disease. “The life and death of cells is dependent on a variety of different pathways. When cells are subject to toxic injury, leading to death, one of these pathways is invariably activated,” explains Professor Wolf. “One major mechanism of the deleterious effects which lead to cell death is through the induction of oxidative stress. That causes damage to the components of cells, leading eventually to cell death.” This is not the initial cause of neurodegenerative disease, yet oxidative stress or DNA damage are thought to play important roles in its progression. 
Cells are continuously subjected to a certain level of oxidative stress, but normally deleterious effects are prevented through anti-oxidant pathways; in cases of neurodegenerative disease, it is thought that these pathways are overwhelmed by pro-oxidant effects, including free radicals, causing cell death. “The anti-oxidant pathways cannot cope with the level of damage, the level of free radicals that have been generated as a consequence of the toxic effects,” explains Dr Henderson. A reliable method of monitoring levels of oxidative stress could help researchers assess the effectiveness of neurodegenerative disease therapies. “Oxidative stress or DNA damage are integral to the progression of these diseases rather than the initiation, but they go hand-in-hand,” continues
Thanks to Rumen Kostov & Francisco Inesta for the preparation of this image.
Professor Wolf. “The main disease models that we are studying are Hutchinson-Gilford Progeria syndrome, Alzheimer’s disease, and Parkinson’s. There’s a significant body of evidence suggesting that oxidative stress and/or DNA damage is an important driver of these diseases.”
One goal of our project is to develop biomarkers that can reflect the progression of a disease at a much earlier timepoint, and give an earlier and more definitive read-out on the efficacy of any therapeutic intervention.

Disease models These models are built on earlier research in which genes associated with these three specific diseases were identified. It was previously shown that aberrations, alterations, or mutations in certain pathways result in these diseases; Professor Wolf, Dr Henderson and their colleagues are working with mouse models that reflect these known susceptibilities. “We’ve crossed those disease models with the reporter models of DNA damage, to evaluate whether those pathways are involved in the pathogenesis of the disease and at what time point in the etiology of the disease they become activated,” explains Professor Wolf. The models were developed on the basis that certain genes are known to be regulated by either oxidative stress or DNA damage. “A protein called heme oxygenase 1 is constitutively expressed only at low levels in cells – but is highly inducible by oxidative
stress,” says Professor Wolf. “Another gene of interest is p21, a marker which responds to DNA damage in cells – another mechanism of cell death. This gene is again only expressed at low levels in many tissues, but it’s highly inducible by DNA damage.” Researchers have introduced a reporter enzyme into either heme oxygenase 1 or p21, from which more can be learned about levels of oxidative stress. When the gene is activated, for example using a compound like paracetamol which is known to cause oxidative stress, a reporter enzyme is produced that allows scientists to monitor
NEW-AGE A next-generation platform for catalysing pre-clinical development of drugs against Alzheimer’s and other degenerative diseases of old-age Project Objectives
The goal of our project is to develop biomarkers that can reflect the progression of a disease at a much earlier timepoint, and give an earlier and more definitive read-out on the efficacy of any therapeutic intervention, which can subsequently be tested in clinical trials.
The work described in this project has been funded by the European Research Council: Advanced Investigator Award ERC-294533 (REDOX) Proof of Concept ERC-2016-737534 (New Age)
• Dr. Francisco Inesta-Vaquera, University of Dundee, United Kingdom • Prof. Carlos Lopez-Otin and Dr. Clea Barcena. University of Oviedo, Spain • Prof. Bettina Platt. University of Aberdeen, United Kingdom • Prof. Dario Alessi and Dr. Miratul Muqit, Medical Research Council, Protein Phosphorylation and Ubiquitylation Unit, University of Dundee, United Kingdom.
activity levels. “The enzyme activity is reflected in either a fluorescence signal, or by an enzyme-induced colour change to generate blue cells in a tissue section,” outlines Professor Wolf. This provides a visual indicator, so that the level of oxidative stress can be monitored and compared with that in a healthy mouse. “We can look at the signal in a particular target tissue in a control mouse, and compare it to the level of the signal in a mouse carrying the degenerative disease,” explains Professor Wolf. “In control animals you get a low signal, but as the disease progresses, the signal becomes much more intense and affects many more cells in a tissue-specific manner. This demonstrates that the oxidative stress reporter has been activated, so you can conclude that there has been a level of oxidative stress in that cell.”
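The comparison Professor Wolf describes – reporter signal in diseased tissue versus the same tissue in a control animal – is, in quantitative terms, a fold-induction measurement. A minimal sketch with made-up intensity values (the numbers and the activation cut-off are hypothetical, purely illustrative):

```python
def fold_induction(disease_signal, control_signal):
    """Ratio of mean reporter intensity in disease vs control tissue."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(disease_signal) / mean(control_signal)

# Arbitrary fluorescence intensities per imaged field (a.u.)
control = [1.0, 1.2, 0.9, 1.1]
late_disease = [4.8, 5.5, 6.1, 5.2]

ratio = fold_induction(late_disease, control)
activated = ratio > 2.0   # illustrative cut-off for "reporter activated"
print(f"fold induction: {ratio:.1f}, oxidative stress flagged: {activated}")
```

Tracking this ratio across the project’s chosen time-points is what would reveal a signal becoming “much more intense” as the disease progresses.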
Disease progression This could enable clinicians to effectively monitor the progression of a disease even before the symptoms become apparent, and potentially assess the effectiveness of therapies in inhibiting oxidative stress or DNA damage. It is not clear whether this would definitively prevent the progression of disease; Dr Henderson says the model will allow researchers to evaluate this by monitoring status over several time-points. “We have selected a number of time-points, ranging from the mid-to-late stage. The late time-points have been chosen to demonstrate that the models do give you a read-out of the disease. The early time-points were to show
that we have an early biomarker, which reflects the progression of the disease and allows intervention studies to be carried out,” he says. This latter point holds important implications for drug companies in the development and testing of new therapies, something which Professor Wolf is keen to explore further in future. “We have gained some very promising results in the project, and the commercial exploitation of the model is certainly something we would aspire to,” he continues. A number of interventions have already been proposed in the prevention of different degenerative diseases, and the project’s models could be used to assess their efficacy and bring them closer to practical application. While the data that has been gathered in the project is restricted to Hutchinson-Gilford progeria syndrome, Alzheimer’s and Parkinson’s disease, Professor Wolf says the model systems hold broader relevance beyond degenerative disease. “For example in toxicology, in understanding the impact of man-made or environmental chemicals on the pathogenesis of a disease,” he outlines. The project’s work is more exploratory at this stage however, with Professor Wolf and his colleagues laying the foundations for future research. “Our primary goal in this work is to take the model systems to a position where we can get them into the public domain and get peer-reviewed publications out, which will give us a platform for further development,” he says. “We have had signs of interest from some commercial entities in using the models, which we are currently pursuing.”
Project Coordinator, Professor Roland Wolf University of Dundee Nethergate DD1 4HN DUNDEE United Kingdom T: +44 1382 383134 E: firstname.lastname@example.org W: https://cordis.europa.eu/project/rcn/207916_en.html Prof Roland Wolf and Dr Colin Henderson
Professor Wolf and Dr Colin Henderson’s research has focussed on the pathways which have evolved to protect cells from the deleterious effects of chemicals and other environmental agents. These pathways are of central importance in the pathogenesis of human disease, in disease prevention and in the development and use of drugs. Their research has involved the characterisation of the enzyme systems involved, genetic variability in their expression and the pathways which become activated in response to chemical and toxic insult.
New light on cortical connections The cortex connects to numerous subcortical areas via cortico-subcortical synapses (green and magenta clouds). Some of these are thought to function as sensori-motor interfaces. Image provided by Dr. Anton Sumser.
Much has been learned over recent years about how sensory signals are processed in cortical networks, yet the transformation of those signals into behaviour is still not well understood. We spoke to Dr Alexander Groh about his research into connections between the cortex and subcortical areas, which could shed new light on the relationship between the brain and behaviour. The development of sophisticated brain imaging techniques has allowed researchers to investigate how sensory signals are processed in cortical networks in greater depth than previously possible. The next question is what happens to these signals after they have been processed in the cortex, a topic that Dr Alexander Groh and his colleagues at Heidelberg University are working to address. “We’ve been focusing on a specific cortical interaction with sub-cortical areas, the connection between the cortex and the thalamus. Now we are in the process of extending this work to other subcortical areas,” he explains. This work involves trying to model brain functions, particularly with respect to communication between the cortex and the rest of the brain. “We try to do that by relating neuronal activity to sensory, motor and cognitive processes. We’re interested in complex functions – for example, understanding how an organism can identify what is important in a sensory scene,” outlines Dr Groh. A range of different techniques are being applied to transgenic mice in this work, including electrophysiology, optogenetics and cell-type specific approaches. “Functions and dysfunctions of the brain rely on neuronal interactions, organized across several temporal and spatial scales, ranging from synaptic interactions to local and long-range interactions between networks. We face two challenges to understand these processes. First, we need to record from different parts of the brain while maintaining the temporal and spatial resolution to understand how single neurons integrate
signals from multiple upstream neurons and in turn feed into their downstream partner neurons,” explains Dr Groh. “In addition, we need to be able to probe the function of specific, embedded pathways in order to understand their role in cognitive processes.” By using optogenetics, Dr Groh and his colleagues can activate specific circuits in the brain. “Optogenetics uses the expression of light sensitive membrane channels. As a result, you can control the activity in specific brain pathways of interest,” he says. These pathways can be either activated or inactivated while the brain is processing sensory information. “As a functional read-out we mainly record fast electrical signals from neurons, and lately we’ve also been using deep-brain functional imaging techniques, both of which serve as a proxy for how neurons talk to each other,”
explains Dr Groh. “On the anatomical level, we use microscopy to see how neuronal circuits are physically wired to each other. In fact, this work started on the anatomical level, when Anton Sumser and I looked at the connections that are formed between a cortical area and its sub-cortical target structures.”
Cerebral cortex The mammalian cerebral cortex itself is comprised of six layers, each with a specific set of cell types with certain morphological, electro-physiological and connectivity characteristics. The deep cortical layers, layers 5 and 6, are the output layers of the cortex, which connect the cortex to other sub-cortical areas. “Layer 6 connects the cortex to the thalamus, while there is also a very interesting output pathway in layer 5.
Cortical whisker maps in the thalamus (Image provided by Anton Sumser).
Emilio Isaias-Camacho and Dr. Jesus Martin-Cortecero, both in my team, focus on the role of these two pathways in behaviour,” outlines Dr Groh. The cortex consists of many networks, which play a central role in sensory and motor functions. “We think of networks in terms of connected neurons. One helpful distinction is to differentiate between local networks and long-range networks,” continues Dr Groh. “For example, these layer 5 output pathways are a good example of long-range interactions. Some of these connections in humans go all the way from the cerebral cortex down to the spinal cord, where they control movements.” Researchers aim to build a deeper picture of these connections and the interplay between the cortex and sub-cortical target networks. An anatomical map has been developed in the project, looking at how the sensory cortex is connected to subcortical motor circuits, from which several potential targets with a motor function have been identified. These findings have been made on the anatomical level; now Dr Groh aims to investigate these pathways on the functional level. “Previous experiments, including from our own lab, showed that when the sensory cortex is stimulated, it actually evokes a motor movement in the mouse’s whiskers. This really shows that the sensory cortex has a motor function,” he outlines. “The next important step is to understand this sensori-motor function on the cognitive level. We don’t believe that these circuits that involve the cortex control simple reflexes.”
We’ve been focusing on a specific cortical interaction with sub-cortical areas, the connection between the cortex and the thalamus. Now we want to extend this work to other sub-cortical areas.

Cognitive functions This strand of research centres around investigating how the brain detects or selects those signals which are important or salient at a particular point in time, which Dr Groh and his colleagues plan to model in mice. The mice will be presented with sensory stimuli, with differing levels of saliency. “While the mouse is doing a task, we want to see whether cortico-subcortical networks are detecting salient events,” explains Dr Groh. These cognitive processes are thought to be quite complex, so Dr Groh says that neuronal ensembles, rather than single neurons, are thought to be involved. “Information is captured in the activity of groups of neurons that have spatial and temporal relationships, that are activated in a certain sequence for example,” he continues. “Together with Melina Castelanelli, a new member in my group, we are testing the hypothesis that these sub-cortical areas are part of the salience network. Therefore, we joined the research consortium ‘SFB 1134’ in Heidelberg (http://sfb1134.uni-heidelberg.de/), in which around 20 labs focus on this one overarching question of the role of functional neuronal ensembles in brain function.” Professor Groh’s team is part of another research consortium (SFB 1158, https://www.sfb1158.de/), which is focused on investigating the neuronal mechanisms of pain processing. The team is looking at how these cortico-thalamic interactions control how pain is transferred through the thalamo-cortical system. “This work is ongoing,” he says. Dr Groh and his colleagues are specifically interested in pathological pain, that is, when pain becomes a burden to the individual affected. “It’s widely thought that this is controlled through a central mechanism. The pain experience happens in the brain, and not in the periphery, where maybe the original incident happened,” he explains. “It almost feeds on itself and then changes the networks in the central brain - we are trying to understand this. The thalamus is a key structure here, as pain signals have to pass through the thalamus before they reach the cortex and become an unpleasant experience.” These signals become conscious (i.e. as feelings or experiences) in a process that is not understood. Further research in this area could yield more detailed information about the emergence of pain and how it is experienced, and even enable scientists to look at
suppressing pain signals in the thalamus. “This is something that Sailaja Antharvedi-Goda, a postdoc in my team is pursuing at the moment, with the potential of identifying targets for therapeutic strategies. For example, using cortical stimulation to suppress pain,” outlines Dr Groh. This research also holds relevance to our understanding of certain conditions in which sensory inputs are processed differently. “For example, there are possible connections to attention deficit disorders and conditions in which subjective experience is pathologically altered, for example in schizophrenia or depression. We’re not specifically addressing this at the moment, as we don’t have a framework, a model of attention deficit disorder in mice,” continues Dr Groh. “The aim at the moment is to understand the basics, the function of it, before then looking to develop hypotheses of how these circuits behave in brain diseases.”
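Deciding whether a neuron (or ensemble) has responded to a salient event typically reduces to asking whether its evoked firing rate deviates significantly from baseline. A simple z-score sketch of that test (all spike rates and the threshold are invented for illustration; real analyses operate on recorded ensembles, not single made-up trials):

```python
from statistics import mean, stdev

def salience_z_score(baseline_rates, evoked_rate):
    """Standard score of an evoked firing rate against baseline trials."""
    return (evoked_rate - mean(baseline_rates)) / stdev(baseline_rates)

# Hypothetical firing rates (spikes/s) across quiet baseline trials
baseline = [4.9, 5.3, 5.1, 4.7, 5.0]

z = salience_z_score(baseline, evoked_rate=9.5)
print(f"z = {z:.1f}, salient response detected: {z > 3.0}")
```

The same statistic, applied across many neurons simultaneously, is one simple way to ask whether a cortico-subcortical ensemble as a whole is signalling a salient event.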
Neuronal mechanisms Neuronal mechanisms of cortico-subcortical communication in the mammalian brain Project Objectives
The cerebral cortex is viewed as the cognitive headquarter of the brain, accommodating specific cortical circuits for decision making, conscious perception and coordination of behavior. But how does the cortex communicate with the rest of the brain to fulfill these functions? The proposed project investigates the structure and functional mechanisms underlying the interplay between the cortex and subcortical target networks by leveraging a combination of in vivo deep-brain electrophysiology, optogenetics, and cell-type-specific approaches in the mouse model system.
Funding: Deutsche Forschungsgemeinschaft (grants: SFB 1158-B10) and GR 3757_3-1
Project Coordinator, Professor Alexander Groh University Heidelberg Im Neuenheimer Feld 364 D-69120 Heidelberg, Germany T: +49 6221 54868 E: email@example.com W: https://www.researchgate.net/profile/ Alexander_Groh
• Staying focused: Cortico-thalamic pathway filters relevant sensory cues from perceptual input, by Stuart Mason Dambrot, 13 May 2014
• Thalamic Relay or Cortico-Thalamic Processing? Old Question, New Answers, Ahissar, Cereb. Cortex, 2015, 845-8
• Corticothalamic Spike Transfer via the L5B-POm Pathway in vivo, Mease et al., 2016
• Organization and somatotopy of corticothalamic projections from L5B in mouse barrel cortex, Sumser et al., PNAS, 2017
Professor Alexander Groh
Alexander Groh is Professor of Neurophysiology at Heidelberg University’s Institute for Physiology and Pathophysiology. Previously, he was Heisenberg Research Group Leader at the Institute for Anatomy and Cell Biology in Heidelberg, and he has held other research positions at institutions in Germany and the USA.
It takes many hours of dedicated daily practice to learn to play a musical instrument, yet certain pre-dispositions may also play an important role in musical ability, including the way we perceive sound. We spoke to Dr Peter Schneider about his research into the relationship between auditory skills and auditory dysfunctions.

Sagittal and transversal view of the myelinated brain structure of a professional guitarist.
Sound perception between outstanding musical abilities and auditory dysfunction

Many hours of
daily practice and intrinsic motivation are required to learn to play a musical instrument, however musicians still rely to a certain degree on their innate aptitude or predispositions, in particular their ability to perceive sound. This is an area of deep interest to Dr Peter Schneider, Director of the Brain and Music research group at Heidelberg University Medical School in Germany. “We run several different projects looking at the relationship between auditory skills and auditory dysfunctions,” he outlines. This work includes investigating physiological and neurological differences between musicians and non-musicians, using for instance magnetic resonance imaging (MRI) to look at the shape, size, asymmetry and cortical thickness of Heschl’s gyrus, an important part of the auditory cortex. “Heschl’s gyrus is closely involved with sound perception and its morphology also influences musical performance,” explains Dr Schneider. “We’ve published papers describing the relationship between the overall size of Heschl’s gyrus and
an individual’s perception and overall musicality, as well as papers describing the relationship between left and right Heschl’s gyrus.”
Heschl’s gyrus is closely involved with sound perception, and its morphology also influences musical performance.

Pitch perception
This latter point is closely correlated with an individual’s pitch perception, the way in which people perceive sound. Researchers have found that people with a larger Heschl’s gyrus on their right-hand side are spectral pitch listeners, while people with a larger Heschl’s gyrus on their left are fundamental pitch listeners. “Spectral listeners break up sound into its different components, whereas fundamental pitch listeners perceive sound as a whole,” says Dr Schneider. Previously, it was believed that the majority of people are fundamental pitch listeners, yet the findings of a paper published by Dr Schneider over a decade ago seem to run contrary. “This study showed that there are also many people who are able to break sounds down into their different components,” he outlines. “Amongst musicians, we see both fundamental and spectral pitch listeners. It seems to depend to a degree on the individual listening profile, what instruments they play. For example, musicality related to rhythm perception or drum playing is very different to musicality related more to melody perception or singing.”

Researchers in Dr Schneider’s group studied musicians in both the Royal Liverpool Philharmonic Orchestra and the Orchestra of the Mannheimer National Theatre, with the
aim of gaining a fuller picture of the neural basis of sound perception. A key part of this work centres on using different psychoacoustic and neurological methods to get new insights into the musical brain, and the relative importance of predispositions and learning in terms of determining musical ability. “There are several different measurements. One approach we are using is designed to measure different hearing or sound perception abilities. In this case, we mainly used tests I developed on spectral and fundamental pitch perception. We see a wide variability of pitch perception,” explains Dr Schneider. “In the Liverpool Philharmonic orchestra, we observed a large majority of spectral listeners, whereas in the Mannheim Orchestra, we observed mainly fundamental listeners.” There are many different possible reasons why individuals have different pitch perception modes, and they may not necessarily be related to their musical aptitude.

Alongside looking at data on professional musicians, Dr Schneider and his group are also investigating amateur musicians and non-musicians, partly based on asking people about their own musical experience. “We have a long questionnaire, with questions about an individual’s musical background, what they were taught and about the musicality of their parents. There are also questions about the different musical instruments they played at different times during childhood,” he explains.

Childhood is a critical period in personal development, and musical training at a young age helps children to develop their skills, leading to a lifetime’s enjoyment, a topic that is central to Dr Schneider’s research. “We are investigating whether musical ability is genetically determined, or if it can be taught,” he says.
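The spectral-versus-fundamental distinction that such tests probe can be illustrated with a classic “missing fundamental” stimulus. The sketch below is a rough illustration with made-up parameters, not a reproduction of Dr Schneider’s actual test battery: it builds a complex tone from the 4th, 5th and 6th harmonics of 200 Hz, so a spectral listener would resolve the 800/1000/1200 Hz partials, while a fundamental listener would report the absent 200 Hz pitch implied by the waveform’s periodicity.

```python
import numpy as np

def harmonic_complex(f0, harmonics, sr=48000, dur=0.5):
    """Sum of pure tones at the given integer multiples of f0."""
    t = np.arange(int(sr * dur)) / sr
    return sum(np.sin(2 * np.pi * f0 * h * t) for h in harmonics)

sr = 48000
# Harmonics 4, 5 and 6 of a 200 Hz fundamental; the 200 Hz
# component itself is physically absent from the signal.
sig = harmonic_complex(200, [4, 5, 6], sr=sr)

# The spectrum contains only the three partials...
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / sr)
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(np.round(peaks))  # partials at 800, 1000 and 1200 Hz; nothing at 200 Hz

# ...yet the waveform repeats every 1/200 s (240 samples at 48 kHz),
# the periodicity cue a "fundamental" (holistic) listener locks onto.
period = sr // 200
assert np.allclose(sig[: len(sig) - period], sig[period:])
```

Psychoacoustic tests of this kind typically play sequences of such ambiguous tones and ask which way the perceived pitch moves; the answers place a listener along the spectral–fundamental axis.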
Longitudinal study
The group is conducting a large longitudinal study named ‘Audio- and neuroplasticity of musical learning (AMseL)’, using data gathered on children and adolescents over a period of nine years to probe deeper into the roots of musical ability and look at the importance of training. In this part of the group’s research, Dr Schneider and his colleagues are investigating the developmental factors visible in imaging data, and also in the psycho-acoustic data. “We have been testing our findings for these pitch perception modes. No real changes have been found, so this pitch perception seems to already be present at the beginning of musical training, at the age of around eight years old. Therefore, these pitch perception modes are preserved over time,” he says. The brain is of course still developing during childhood, yet evidence suggests the size
Musicians who play higher- or lower-pitched instruments are seated apart.
Top view of the left and right auditory cortex (left panel), embedded within the Sylvian fissure (right panel).
Projection of the brain structure of a pianist on a sphere (gyri in yellow, sulci in blue), demonstrating that the primary auditory cortex (marked with a circle) has a superior, central position.
and shape of Heschl’s gyrus does not change dramatically over time. “We have been collecting longitudinal data of children with four measurement timepoints from primary school age (~7-8 years old) to adolescence (16-17). In this study we found no differences in the shape or size of Heschl’s gyrus during childhood,” continues Dr Schneider.
This research complements the group’s work with professional musicians and non-musicians, enabling Dr Schneider and his colleagues to draw wider comparisons. While the pitch perception mode is not really thought to change over time, other changes can be observed. “With electroencephalography techniques we can see changes in the activation of the auditory cortex, for example,” says Dr Schneider. Researchers are also looking at plasticity and maturation effects. “We can look at how network activity evolves in an individual, looking at the developmental phases in network activation,” continues Dr Schneider. “With functional MRI data, we can investigate
Five examples of individual gyration and location of primary (red) and secondary (green) auditory cortex of musically talented children.
Sound perception between outstanding musical abilities and auditory dysfunction
Prof. P. Schneider has developed a unique framework to explore the neural basis of auditory processing. His combined transdisciplinary expertise as a brain researcher, physicist and musician has enabled him to develop a battery of new auditory tests to reliably measure elementary and complex hearing abilities. This new approach may succeed in explaining neural foundations of both outstanding auditory skills and auditory dysfunction, with considerable potential for pedagogic, therapeutic, diagnostic and clinical applications.
Total funding volume (BMBF and DFG): €2.78 million.
• Prof. Narly Golestani, University of Geneva (Switzerland)
• Prof. Annemarie Seither-Preisler, University of Graz (Austria)
• Prof. Valdis Bernhofs, Music Academy of Riga (Latvia)
• Prof. Maria Blatow, University of Zurich (Switzerland)
Project Coordinator,
Dr. rer. nat. Peter Schneider
Department of Neuroradiology
Department of Neurology, Section of Biomagnetism
Heidelberg Medical School
Im Neuenheimer Feld 400
69120 Heidelberg, Germany
T: +49-6221-5639180
E: Peter.Schneider@med.uni-heidelberg.de
W: www.musicandbrain.de

Prof. P. Schneider
activation in the auditory cortex and explore functional connectivity and network plasticity of the musical brain. Additionally, we can look at activity in relation to other psycho-acoustic signals. We have other behavioral tests that look at the frequency sensitivity for example, and more specific skills such as absolute and relative pitch, from which we can learn more about important aspects of auditory and neural plasticity.”

While the primary focus within the group is on investigating the neural basis of sound perception and auditory skills, this research also touches on other areas, for example how people learn languages. Evidence suggests that individuals with an aptitude for music also tend to be good at picking up languages, another topic of interest to Dr Schneider. “Currently, we are also observing the relationship between language aptitude and music aptitude,” he says. It’s important to distinguish between different types of musicality in this sense. “Certain aspects of musicality are related more to language aptitude,” he says. “We see relationships between language aptitude and musical aptitude for these processes that occur in the right hemisphere. Singing activates networks related to the right hemisphere, too.”
Musical brain
There is not a clearly defined musical brain; nevertheless, researchers can observe a general pattern of musicality in the brain, and Dr Schneider and his colleagues aim to build further on what has been achieved so far. One important outcome from the group’s research will be to distinguish between pre-disposition factors and training factors in terms of determining musical ability. “We aim to have a clear idea about what factors pre-dispose an individual towards musicality, and what factors affect how they learn music,” explains Dr Schneider. This could be particularly useful for musical education and the development of specific hearing therapies. “The idea is to have a more compact module of hearing tests, a battery of neuro-imaging procedures. We can then use these and other similar methods to evaluate which sorts of teaching, training and therapy are effective,” continues Dr Schneider. “We work together with ear training teachers in conservatories and also with clinical audiologists to observe learning strategies and relate it to the student’s or patient’s hearing mode.”

The age at which an individual starts learning a musical instrument is an important consideration in this respect. While some children may have a pre-disposition that means they can start playing the violin at quite a young age, for others it might be better to spend more time on general musical activities. “They can then start learning a specific instrument later on,” says Dr Schneider. Further data is required in order to reach rigorous conclusions, which will remain a priority for Dr Schneider’s group in future, including gathering more data on adults and older people. “Our aim is to build a more generous, larger data pool, so that we can look more closely at individual neuro-auditory profiles,” he says.
Psychoacoustic testing with children in my lab at the University Hospital Heidelberg.
A COFUND to prevent the Austrian brain drain

The Erwin Schroedinger programme gives researchers the opportunity to work abroad, develop their skills, and build relationships with international partners, which can lead on to an academic career. We spoke to Dr Barbara Zimmermann about how the programme helps to support Austrian science and strengthen the country’s research base.

The Austrian Science Fund (FWF)
has run the Erwin Schroedinger fellowships since 1985, offering researchers across all academic disciplines the opportunity to work abroad, gain experience and develop their skills. This is part of the FWF’s work in supporting basic research and strengthening Austria’s scientific base. “We want to strengthen Austria’s international performance and capabilities in scientific research. We aim to develop Austria’s human resources for scientific research, in both qualitative and quantitative terms,” outlines Dr Barbara Zimmermann, head of strategy at the FWF’s career development department. While the Schroedinger fellowships have proved successful in these terms, with many fellows going on to pursue rewarding careers in academia, the authorities are also keen for researchers to eventually bring their knowledge and expertise back to Austria. “In an earlier evaluation we saw that the programme is very effective in terms of career development; the most problematic phase is in encouraging fellows to return,” says Dr Zimmermann.

Return phase
This issue is now being addressed, with the FWF looking to improve the Erwin Schroedinger programme further by including a return phase, to encourage fellows to come back to Austria following their time abroad. While this is important to the wider goal of strengthening Austrian research, those Schroedinger fellows who decide to stay on at institutions outside Austria also have a significant role to play, helping to build research relationships and networks with international partners. “The Schroedinger programme has helped to internationalise Austrian research,” stresses Dr Zimmermann. The Schroedinger fellows who stay abroad can act almost as research bridgeheads, helping their compatriots integrate into international networks, which Dr Zimmermann says is essential to a career in academia. “Nowadays you can only do good science if you are involved in international cooperations. It’s not possible to build a real research career purely on the national level,” she stresses.

“Without basic research, you can’t engage in applied research. If you work with a bottom-up approach, you enable scientists to ask important questions and identify the major challenges facing society.”

The programme itself is open to postdoctoral researchers from all disciplines, giving them the opportunity to work abroad at a leading institution, then return to Austria to continue their studies. All proposals are subjected to peer review by scientists from outside Austria, and the only assessment criterion is the quality of the research. “We base our decisions solely on the quality of the proposals. It doesn’t matter to us whether it’s a history, physics, medicine or archaeology proposal - the only point is that it must be of excellent quality,” says Dr Zimmermann. The research themes are identified through a bottom-up approach, which Dr Zimmermann believes is central to addressing major social and economic challenges. “Without basic research, you can’t engage in applied research,” she points out. “If you work with a bottom-up approach, you enable scientists to ask important questions and identify the major challenges facing society.” A high degree of knowledge is required for this kind of work, underlining the wider importance of the Schroedinger programme in helping to equip researchers with the skills and experience they need for an academic career. More than half of ex-Schroedinger fellows now hold a chair or professorship, and Dr Zimmermann says the FWF plans to continue the programme and build further on its success. “We saw after the last evaluation that the programme is really having a great impact on career development and Austrian science, so we plan to continue to run it in future,” she says.
Erwin Schroedinger Fellowships

Dr Barbara Zimmermann
Head of Department Strategy – Career Development
FWF Austrian Science Fund
Sensengasse 1, 1090 Vienna, Austria
T: +43 1 505 67 40 8501
E: firstname.lastname@example.org
W: www.fwf.ac.at
W: scilog.fwf.ac.at
@fwf_at @fwfopenaccess
Dr Barbara Zimmermann and her team at the Career Development Department administer the Erwin Schrödinger programme; together with Susanne Woytacek, she has managed the four COFUND grants since 2009.
Enhancing research in Romania

The Twinning projects financially supported by the European Commission put collaboration at the very centre of activity. The ENHANCE project aims to foster stronger research links with international organisations and to help boost the scientific and academic profile of the University of Agronomic Sciences and Veterinary Medicine (USAMV) of Bucharest.

Collaboration is central to progress in any endeavour, especially in research, with scientists sharing expertise and knowledge to create a better environment for the development of society as a whole. Close cooperation with various partners, especially at an international level, can also help enrich the skill base at participating institutions, a central goal of the ENHANCE project, an initiative funded under the Horizon 2020 programme. “The ENHANCE project is designed to increase the capabilities and the visibility of our university in the field of agricultural economics research,” explains Professor Gina Fintineru, Vice-Rector of the USAMV for Scientific Research and the coordinator of the ENHANCE project. “The Twinning programme is an extraordinary opportunity for universities that have significant development potential, to enable existing skills to be strengthened within the framework of responsive partnerships,” she continues.
Research competencies
The main priority in the project is to enhance the competencies of the research groups at USAMV and help them develop and build their knowledge of cutting-edge methodologies, which can then be applied to the benefit of Romanian agriculture, an important part of the national economy. The project’s partners are providing training and sharing knowledge across a number of different areas, including econometrics, economic modelling and qualitative methods. “Through this project, we have had the exceptional opportunity to contribute to the development of the research capacities of our partner university on a medium-term basis. Through teaching and staff exchanges, IAMO has also gained in terms of its teaching skills and research portfolio,” outlines Prof. Thomas Herzfeld, head of the Agricultural Policy Department at IAMO, one of the advanced partners of the project. This work is already bearing fruit in terms of raising the profile of the partners involved, especially of the USAMV, and helping staff develop their skills. The project
Enhance project’s Kick-off meeting, Bucharest, January 2016.
activities are targeted equally at higher level academic staff and academics at an earlier stage of their careers, thus, representing an important opportunity for young researchers to participate in the exchange of knowledge, ideas and methodologies with the project partners. “We encourage our PhD and Master’s students to participate in various
activities inside the project,” continues Prof. Fintineru. Agricultural economists need deep knowledge of both microeconomic and macroeconomic tools to address the complex challenges of today’s agriculture, ranging from a thorough understanding of farm families to the challenges of agricultural policies. The project partners approached these tools by applying them on the one hand to the main research topics in which the partners had experience and competencies, while also considering those topics in which USAMV’s staff members were interested, often by integrating them into common ongoing projects. A matching process of these variables, set at the beginning of the project, has generated interesting studies, such as an environmental impact assessment using the Life Cycle Assessment method, while also considering questions around the sharing economy, land consolidation, the burnout rate of farmers, Common Agricultural Policy (CAP) evaluation and consumption patterns. “The staff exchange programme within our twinning project enabled a lot of thrilling comparative studies between Switzerland and Romania, documented in valuable publications,” emphasised Dr. Stefan Mann, head of the Socioeconomics research group at Agroscope.

“We will apply this knowledge in our teaching, so our students will be beneficiaries of the training and actions that have been developed during the project. We aim that these competencies created through research will contribute to the enhancement of education, not only in feeding international rankings.”
The project includes addressing topics central to the future of agriculture and rural areas, such as the impact of the CAP on the attractiveness and vitality of rural areas, the impact of payments on remote areas and CAP simplification. Other topics addressed include generational renewal and in-depth research into land consolidation patterns and precision agriculture, using high-resolution satellite and UAV (Unmanned Aerial Vehicle) imagery to map land use and crop damage. “The exchange of staff members enabled us to establish medium- to long-term cooperation in specific research topics which will continue after the end of this project. This project gave us the opportunity to build up long-term relations with new partner institutes,” added Prof. Herzfeld.

The project partners share their expertise in these areas through staff exchanges, training sessions and summer schools. Two summer schools have been held over the course of the project so far, with a third planned for September 2018 which will focus on institutional economics and agricultural development, and Prof. Fintineru says these summer schools have proved to be very popular. “Most of the participants in the first edition came from our own university, but we have been pleased to see how attractive the curriculum has been to external students. For example, last year, the participants who attended the five days of courses represented 15 nations, comprising Chile, Croatia, Germany, India, Iran, Italy, Lithuania, Nepal, the Netherlands, Poland, Romania, Russia, Switzerland, Turkey and Vietnam.” For this year’s edition, 38 applications have been received from 13 countries, and unlike in previous years, there has been a consolidation of the number of applications received from Central and Eastern Europe. This can only be a positive sign in terms of strengthening research links and building relationships with international institutions, which is a central part of the project’s overall agenda.
“These actions gave us the opportunity to increase our international visibility by acting as a regional hub for prestigious scientific events,” continues Prof. Fintineru.
Reputation and profile
The wider goal in the project is to raise the profile of USAMV and strengthen research links, so that the institution can increasingly attract high-level funding and play a more prominent role in international collaborations. There have been positive signs in this direction, indicated by an increase in the number of research papers published by researchers at USAMV, more participation
in international conferences, and a stronger involvement of faculty staff in competitive project applications. “We aim to raise the visibility of the institution, both amongst our partners in the European project, but also to consolidate our role inside Romania, as a driver of local and regional development. We have very good connections in this project with stakeholders in Romania, such as representatives of farmers, the Pro Agro Federation, for example,” says Prof. Fintineru. These close links can help ensure that research is relevant to the issues facing the agricultural sector. “We have also used the ENHANCE project to intensify our links with the National Ministry of Agriculture,” stresses Prof. Fintineru.

As Head of the Institute of Agriculture and Forestry Economics at BOKU, Prof. Jochen Kantelhardt says his institution has also benefitted from the project. “The BOKU scientists who are involved in ENHANCE also form a pivotal point for all scientific endeavours that extend across the Danube region to Romania. Existing initiatives and platforms can be used and further developed. Being both students and teachers broadens the perspectives of all participants in a professional exchange process,” he explains. “Especially for a university, the cooperation in ENHANCE offers the best conditions for an intensive reflection on our own routines and processes, and thus enables improvements in the participating institutes, which far exceed the expectations before project start.”

This work is very much in line with recent changes in the Romanian academic system, which have put a greater emphasis on research. While the USAMV is home to great expertise in agricultural education, Prof. Fintineru says it is important for researchers to build networks with international partners, which will help enhance the research capacity of the University. “It found fertile ground at USAMV, as it is obvious that research plays an important role in creating prosperity.
The basis of outstanding research results is researchers with outstanding training,” she stresses. The benefits of this will trickle down to students, who will learn about cutting-edge methodologies and emerging areas of interest, equipping them with the skills and knowledge they will need after graduating. “We will apply this knowledge in our teaching, so our students will be beneficiaries of the training and actions that have been developed during the project. We aim that these competencies created through research will contribute to the enhancement of education, not only in feeding international rankings,” says Prof. Fintineru.
ENHANCE – Building an Excellency Network for Heightening Agricultural ecoNomic researCh and Education in Romania

Project Objectives
The aim of the ENHANCE project is to fully realise and further develop the existing scientific potential of the agricultural economists of the USAMV, particularly with respect to quantitative methods such as modelling, simulation and econometrics, as well as mixed-methods research such as institutional economics.
CSA (Coordination & Support Action) project, funded by the European Commission under the Horizon 2020 Framework Programme.
Funding: European Union Horizon 2020 research and innovation programme, grant agreement No 691681; €1,097,020.
Duration: 01/01/2016 – 31/12/2018 (36 months)
• WBF – Federal Research Centre for Agriculture (Switzerland) / Contact: Stefan Mann.
• IAMO – Leibniz Institute of Agricultural Development in Transition Economies (Germany) / Contact: Thomas Herzfeld.
• BOKU – University of Natural Resources and Life Sciences (Austria) / Contact: Peter Walder.
Project Coordinator,
Gina Fîntîneru
University of Agronomic Sciences and Veterinary Medicine of Bucharest
59 Mărăşti Boulevard, District 1
011464 Bucharest, Romania
T: +40756136321
E: email@example.com
W: http://www.usamv.ro

Gina Fîntîneru
Gina Fîntîneru has more than 25 years of experience in teaching, research and business consulting. She has managed over 12 international collaborative projects in roles as research director and project leader, and has taken part in several EU, WB and nationally funded projects. She has published over 40 articles in scientific journals, conference proceedings and books/book chapters.
A new proof assistant to stop software bugs from biting
Implementation of Homotopy Type Theory as a compilation phase into Type Theory.
Proof assistants like Coq are an important tool in mathematics research and software development, yet there are weaknesses in the current version of the system. Researchers in the CoqHoTT project are revisiting the theoretical foundations of Coq, aiming to improve and extend the system for today’s mathematicians and computer scientists, as Dr Nicolas Tabareau explains.

The mathematical community commonly uses proof assistants to formally prove theorems, while they are also an important tool for software companies, who use them to prove that a particular program meets its specification. One of the major proof assistants currently in use is Coq, a proof management system with its roots in research dating back to the early ’80s, and it has since grown in prominence. “Coq is quite a popular proof assistant, yet it still lacks some facilities and features which would make it convenient and easy to use,” says Dr Nicolas Tabareau, a researcher at Inria in France. This is an issue central to the work of CoqHoTT, an ERC-backed project which is revisiting the theoretical foundations of Coq using ideas from Homotopy Type Theory (HoTT). “The goal is to improve the proof assistant and to include more properties in the logic, so that it can offer more reasoning principles to the user,” outlines Dr Tabareau, the project’s Principal Investigator. A major priority in the project is making Coq more usable for mathematicians in particular, which will help to simplify the development of new proofs and improve
efficiency. While the system itself has been around for over thirty years now, Dr Tabareau says that most mathematicians are still reluctant to use Coq. “It should be a help for them in developing and proving their theorems, but at the moment it’s still a bit more of a burden,” he explains. The Coq proof assistant has two main weaknesses in particular, says Dr Tabareau.
“One is that it is too rigid for mathematical reasoning with respect to equalities and how objects are defined,” he outlines. “The second main weakness is the fact that you are in a pure functional setting, it’s a pure language. So it’s quite restricted, and it’s very different from mainstream programming languages.”

By extending and improving the system, researchers will open up the possibility of using other paradigms in Coq. This means it will be possible to directly prove, within Coq, programs that have been written in mainstream languages like C or Rust, for example. “It will be an extension of Coq, but more from the programming language point of view,” says Dr Tabareau. Coq is not just a programming language, but also a proof assistant, so Dr Tabareau says that extensions need to be dealt with in a logical way, taking into account the impact on the system. “That’s the major challenge that we face, in terms of extending the system,” he continues. “It’s not like a traditional programming language, where if you want a new feature, you just implement it. The challenge is to extend the power of the language, while also filtering out the possibility of any fake proofs being introduced.”

“Part of the project is about trying to manipulate this assistant, using ideas from HoTT, in order to provide a universal equality that we hope will be more useful for mathematicians.”
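The “universal equality” at stake can be sketched in type-theory terms. The fragment below is written in Lean, used here simply as compact, checkable notation (the project itself targets Coq, and the names are ours): two types with identical structure are easily proved isomorphic, but plain type theory gives no way to conclude that they are equal. The univalence principle from HoTT supplies exactly that missing step, so without it the step can only be postulated as an axiom.

```lean
-- `Bool` and `Option Unit` each have exactly two inhabitants,
-- and the two maps below exhibit the isomorphism.
def toOpt : Bool → Option Unit
  | true  => some ()
  | false => none

def fromOpt : Option Unit → Bool
  | some _ => true
  | none   => false

-- Both round trips hold by simple case analysis...
theorem to_from (b : Bool) : fromOpt (toOpt b) = b := by
  cases b <;> rfl

theorem from_to (o : Option Unit) : toOpt (fromOpt o) = o := by
  cases o <;> rfl

-- ...yet `Bool = Option Unit` is neither provable nor refutable in
-- plain type theory. A consequence of univalence is that any such
-- isomorphism yields an equality of types; here it must be postulated:
axiom iso_to_eq {A B : Type}
    (f : A → B) (g : B → A)
    (gf : ∀ a, g (f a) = a) (fg : ∀ b, f (g b) = b) : A = B

example : Bool = Option Unit := iso_to_eq toOpt fromOpt to_from from_to
```

Making a principle of this kind hold inside Coq with computational content, rather than as an opaque axiom, is precisely the sort of extension that must be designed so that no “fake proofs” can slip in.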
Mathematical proof
This work holds important implications for industry, potentially providing a smoother path to proving that software is correct and minimising vulnerabilities, while Dr Tabareau says a lot of emphasis in the project is being placed on helping mathematicians develop proofs more efficiently. There is an internal notion of equality within the Coq system, which Dr Tabareau says is an important part of building and developing a mathematical proof. “Mathematicians reason about this equality,” he explains. This notion of equality within Coq is currently not ideal for mathematicians’ purposes however, as it’s too rigid. “In mathematics, we are used to a notion of equality more semantic than that currently in Coq,” outlines Dr Tabareau. “Objects and structures are not considered to be equal in the system, even though they are isomorphic.”

The univalence principle, which allows researchers to derive equality principles used in mathematics, is a key concept in this respect. However, the univalence principle is not currently satisfied within the Coq proof assistant, an issue that Dr Tabareau and his colleagues are working to address in the project. “Part of the project is about trying to manipulate this assistant, using ideas from HoTT, in order to provide a universal equality that we hope will be more useful for mathematicians,” he explains. Disruption will be kept to a minimum during this process, so that mathematicians
and programmers can continue to use the system. “Coq is currently being used in very large and complex projects. If we modify and specialise it to too great a degree, it may negatively affect the performance of the system,” says Dr Tabareau. “So there is a trade-off between providing a more powerful system, and ensuring performance levels are at an acceptable level for users.” A researcher may want to implement a piece of code in Coq for example, prove it, then extract it into a more traditional language. There has been some work on
trying to build an automatic bridge between functional programming languages such as Haskell and OCaml, which are quite commonly used by major companies. “There has been some work by academics on automatically importing code written in these functional languages into Coq, and then we can prove them in Coq,” outlines Dr Tabareau. This work is expected to have a significant impact in both the mathematical and computer science fields, raising the profile of Coq as a proof assistant and encouraging its wider use. “This is designed for experts who are defending their proofs,
Various possible extensions of the Coq proof assistant as distinct compilation phases.
CoqHoTT Coq for Homotopy Type Theory Project Objectives
The goal of the CoqHoTT project is to provide a new generation of proof assistants based on the fascinating connection between homotopy theory and type theory. It may establish Coq, a proof assistant developed at Inria, as a leading system for both computer scientists and mathematicians, becoming an essential tool for program certification and the formalization of mathematics.
The CoqHoTT project is funded by an ERC Starting Grant
• Inria Rennes Bretagne Atlantique (IRBA) • Laboratoire des sciences du numérique de Nantes (LS2N) • IMT Atlantique
Project Coordinator, Dr Nicolas Tabareau Département Informatique École des Mines de Nantes 4, rue Alfred Kastler F - 44307 Nantes cedex 3 France T: +33 (0)2 51 85 82 37 E: firstname.lastname@example.org W: http://coqhott.gforge.inria.fr/
Dr Nicolas Tabareau
Project Coordinator Nicolas Tabareau is Chargé de Recherche (junior researcher) at Inria. He conducts research on programming languages and proof assistants in order to provide better tools for proof formalization to both computer scientists and mathematicians.
Extensions of CoqHoTT using model transformations of mathematical logic.
or for software developers,” continues Dr Tabareau. “One of the goals is to help a traditional software developer to prove the correctness of their system, without needing to rely on academic experts.” This could help to improve software security and reduce the impact of software bugs, which currently cost companies millions of euros a year. It is however difficult to guarantee a certain level of security, as it is necessary to make some basic assumptions in the development of a protocol. “It’s very hard to anticipate all the possible uses of a protocol,” acknowledges Dr Tabareau. There are still vulnerabilities in software which can be exploited. “Currently it is possible for people to steal money, for example from blockchain technology, by finding a flaw in the mechanism. Making code more reliable is the key to building trust in the system,” continues Dr Tabareau. “We are collaborating on these issues with colleagues who are more well-versed in this area, who are more familiar with this type of application.”

The goal is to improve the proof assistant and to include more properties in the logic, so that it can offer more reasoning principles to the user.

The project’s work is more exploratory in nature at this stage however, with researchers exploring fundamental questions and looking to modify and improve the theoretical foundations of Coq. With the project over half-way through its funding term, Dr Tabareau hopes their work will eventually lead to the release of a new version of the Coq system. “This version should contain important improvements, like making it easier to reason on equality, and with facilities to make the logic more powerful,” he outlines. This work is progressing well, and researchers in the project are confident that a new, improved version of the system will be available in two to three years’ time, beyond which Dr Tabareau is looking towards further development. “I am planning to work on a second version of the project, a CoqHoTT 2, which will be less closely connected to HoTT,” he continues.
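The equality problem the project tackles can be made concrete. The sketch below, in Lean-style syntax, is purely illustrative — it is not the CoqHoTT development, and the names `Equiv'` and `univalence` are invented for the example. It shows the univalence principle that the article mentions: an equivalence (isomorphism) between two types yields an equality between them, something plain intensional type theory cannot prove on its own.

```lean
-- A bare-bones notion of type equivalence (isomorphism):
-- maps in both directions that undo each other.
structure Equiv' (A B : Type) where
  toFun     : A → B
  invFun    : B → A
  left_inv  : ∀ a, invFun (toFun a) = a
  right_inv : ∀ b, toFun (invFun b) = b

-- The univalence principle, stated here as an axiom: an equivalence
-- of types yields an equality of types. This is not provable in
-- vanilla Coq or Lean; building it into Coq's foundations is part of
-- what gives mathematicians the "more semantic" equality described above.
axiom univalence {A B : Type} : Equiv' A B → A = B
```

With such a principle available, any theorem proved about one structure can be transported for free to any isomorphic structure — the behaviour mathematicians expect from equality.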
Main building of the Laboratoire des sciences du numérique de Nantes (LS2N).
Higher performance for low-power system-on-chips Graphics processing units (GPUs) are an important part of many modern technologies, including smartphones and automotive systems. We spoke to Dr Georgios Keramidas about the work of the LPGPU2 project in developing a tool to help developers optimise the software for GPUs, opening up a path towards improved power efficiency. A type of
electronic circuit, low-power graphics processing units (GPUs) are essential to the performance of many modern technologies, including smartphones, wearable technology and certain automotive systems. With GPUs used ever-more widely across a range of different applications, demand is growing for improved performance and power efficiency, an issue central to the work of the recently-concluded LPGPU2 project. “The target of the project was to build a development environment to optimise code, the software for embedded GPUs,” says Dr Georgios Keramidas, the project’s technical coordinator. This work holds clear relevance to the commercial sector, so the project consortium included both academic and industrial partners. “The industrial partners provided some commercial software use cases they wanted to build, which were
LPGPU2 power measurement testbed.
developed as part of the project,” continues Dr Keramidas. “The project delivered an open-source tool suite that can be downloaded and used to optimise the software in GPUs. The target was embedded GPUs.” This research was driven to a large extent by the needs of the commercial sector, with companies looking to improve the
performance of low-power devices like smartphones or certain processing devices used in cars for example, and to ensure they can meet evolving market needs without consuming more power. While modern smartphones can of course perform a wide variety of tasks, the market never stands still and more complex applications continue to emerge; one area of interest in the sector is augmented reality applications, which Dr Keramidas says raises new challenges. “If you are going to run augmented reality applications on your smartphone, then you have two main problems. The first is that it’s going to be very slow. The second problem – the more important one – is that if you try to run virtual reality or augmented reality applications on your smartphone, then the battery will drain in a couple of hours,” he outlines.

The LPGPU2 Team.
Example Visualization Features of the LPGPU2 tool.
LPGPU2 Tool
The LPGPU2 tool itself is designed to help extend battery life through more efficient GPU software, providing programmers with a means to analyse performance and power consumption. A monitoring framework collects information from the smartphone (or other mobile device), giving programmers a basis on which they can then look to analyse performance. “This includes information from the hardware, from the GPU, from the operating system, from the API, and from the application itself. We have a standardised way of collecting this kind of information,” says Dr Keramidas. This information is then rapidly transferred to a host machine for processing. “We developed various different algorithms in order to process this information. There is a lot of data here, we are talking about big data sets,” stresses Dr Keramidas. “We developed sophisticated algorithms to process this information, and to identify certain parts of the application and visualise their performance. The next step was to provide feedback to the programmers, to enable them to then optimise those parts of the application.” There is a lot of data to gather, so Dr Keramidas and his colleagues in the project developed sampling techniques in order to collect it efficiently. Post-processing tools were developed to analyse the data, along with highly accurate power measurement tools and power models, mainly developed by TU Berlin, from which developers can gain important insights into the power consumption of different components of the GPU. “These
developers are developing applications for embedded devices, for battery-operated devices. For example, this could be people working on smartphone applications,” outlines Dr Keramidas. The project’s research holds relevance for industrial sectors beyond smartphones however, so Dr Keramidas says the consortium also included partners from several other areas, including Codeplay and Spin Digital, a company based in Berlin. “Spin Digital looked at applications for ultrahigh resolution video decoding, rendering, and media players. Then Codeplay provided applications related to machine learning,” he says.
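The collect-then-analyse workflow described above — sample counters on the device, ship the data to a host, flag the power-hungry parts of the application — can be sketched in a few lines. This is an illustrative toy only, not the LPGPU2 tool suite: the counter fields, the `fake_device` source and the power budget are all invented for the example, standing in for the real hardware, OS and API feeds the project taps.

```python
# Toy sketch of sampling-based performance/power profiling (hypothetical
# names; not the actual LPGPU2 framework).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    timestamp_ms: int
    fps: float          # performance counter (frames per second)
    power_mw: float     # hypothetical power reading, in milliwatts

def collect(read_counters: Callable[[int], Sample],
            duration_ms: int, period_ms: int) -> list[Sample]:
    """Sample counters at a fixed period; periodic sampling keeps the
    data volume manageable, as the article's 'sampling techniques' do."""
    return [read_counters(t) for t in range(0, duration_ms, period_ms)]

def hotspots(samples: list[Sample], power_budget_mw: float) -> list[Sample]:
    """Post-processing step: flag samples exceeding a power budget,
    pointing developers at the parts of the app worth optimising."""
    return [s for s in samples if s.power_mw > power_budget_mw]

# Usage with a fake counter source standing in for a real device:
def fake_device(t: int) -> Sample:
    heavy = 200 <= t < 400          # a simulated expensive render phase
    return Sample(t, 30.0 if heavy else 60.0, 900.0 if heavy else 300.0)

samples = collect(fake_device, duration_ms=1000, period_ms=100)
flagged = hotspots(samples, power_budget_mw=500.0)
print([s.timestamp_ms for s in flagged])   # → [200, 300]
```

The design mirrors the split the article describes: cheap collection on the device, with the heavier analysis and visualisation done after transfer to a host machine.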
The other two technology companies involved in the project were Samsung and Think Silicon, a company specialising in the provision of high-performance, low-power graphics IP semiconductor modules, at which Dr Keramidas is the Chief Scientific Officer. As a major player in the technology industry, Samsung are keen to maintain their position at the forefront of development. “Samsung are using the tool to optimise virtual reality applications and augmented reality applications – running for example on Samsung S9 or S8 smartphones – as well as certain computationally intensive applications,” says Dr Keramidas. For their part, Think Silicon are providing applications related to image processing and computational photography. “We are developing the LPGPU2 tool, and we will try to optimise these different applications using this tool. All those applications are commercial applications,” continues Dr Keramidas.

Rigorous framework
This work has led to a rigorous framework that allows programmers to optimise applications in these commercial use cases. With minor modifications, the tool could also be used to optimise code for parallel architectures and parallel processors, for example in a desktop machine, yet Dr Keramidas says this is not a research priority. “In this project we targeted the low-power domain,” he stresses. The tool can be used
to optimise software for low-power GPUs, even those built by different vendors, which Dr Keramidas says is an important point for the project’s industrial partners. “If code is developed for the GPU that is running on your smartphone, and a developer takes this code and then supports it on another GPU, then this code is going to be slow,” he explains. The LPGPU2 tool however is vendor-agnostic, which means that it can understand what is running on the other GPU in this type of situation and handle the required adaptations, widening its potential applications. Another important part of the project’s overall agenda related to their work in standardisation, part of the wider goal of enabling cross-platform technology. “We play a role in standardisation efforts. So, we are involved in various standardisation activities, to standardise the way that we collect the data from the device,” outlines Dr Keramidas. In particular, the project played an active role in the Khronos Group, an industrial consortium dedicated to creating open standards for various different applications, including graphics and general-purpose applications. “We are promoting these standardisation activities in the Khronos Group. The Khronos Group develops open standards, like OpenGL,
OpenCL, and SYCL for example,” says Dr Keramidas. The wider goal in this research is to develop an open-source tool, available for developers to download and use to optimise the software running on a GPU. This could help to extend the lifetime of a battery and to improve efficiency, which of course is an important issue in the smartphone sector, where consumers increasingly want to use advanced graphics applications. “For example, a smartphone battery would now last for longer, according to the current power consumption model, as our tool can help to reduce the overall power consumption of the graphics applications,” outlines Dr Keramidas. Important advances have been achieved in these terms, and with the project nearing the end of its term, the focus now is on improving the tool.

Open Access Repository: https://github.com/codeplaysoftware/LPGPU2-CodeXL

Acknowledgement: “The LPGPU2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688759.”
LPGPU2 Low-Power Parallel Computing on GPUs 2 Project Objectives
The objective of the LPGPU2 project is to offer a toolset which delivers significantly longer battery life in mobile devices, while delivering high performance and graphics quality. The LPGPU2 tool will help programmers develop power-efficient code for GPUs by identifying bottlenecks relating to performance (for example in terms of frames-per-second) and power (for example in terms of energy per instruction). The tool supports various state-of-the-art APIs (e.g., OpenGL, Vulkan, and OpenCL) and has been validated in various commercial applications (including VR, AR, multi-API video player, and Neural Network applications) offered by the project partners.
LPGPU2 has received funding from the European Union’s Horizon 2020 programme.
• Technical University of Berlin (TU Berlin), DE, Project Coordinator • Samsung Research UK, UK • Codeplay Software Ltd., UK • Think Silicon S.A., GR, Project Technical Coordinator • Spin Digital GmbH, DE
Technical Coordinator, Georgios Keramidas Patras Science Park, Rion Achaias, 26504, Greece T: +30 2610 911543 E: email@example.com W: www.think-silicon.com W: www.lpgpu.org/wp Ben Juurlink, Jan Lucas, Nadjib Mammeri, Georgios Keramidas, Katerina Pontzolkova, Ignacio Aransay, Chrysa Kokkala, Martyn Bliss, Andrew Richards. “Enabling GPU software developers to optimize their applications—The LPGPU2 approach,” in Proceedings of IEEE International Conference on Design and Architectures for Signal and Image Processing (DASIP), 2017.
Dr Georgios Keramidas
Dr Georgios Keramidas is the CSO of Think Silicon S.A and Technical Coordinator of the LPGPU2 project. He has a successful track record in delivering commercial projects as well as national and collaborative programmes, while he has also published one book and more than 60 scientific papers on low power processors.
Combining structure, independence and internationality in doctoral studies Studying for a doctorate is by nature very challenging, and PhD candidates benefit from the opportunity to collaborate with their peers, academic staff, and international partners. BIGSSS-departs (doctoral education in partnerships) is an innovative EU-funded programme designed to support early stage researchers while also widening their perspective on the social sciences. The Bremen International Graduate School of Social Sciences (BIGSSS) was established in 2002 and since 2007 it has operated with support from Germany’s national research funding agency (DFG), offering PhD candidates the opportunity to pursue their academic interests in a supportive environment. Studying for a doctorate is by nature very challenging, with students expected to engage in independent research over an extended period; the BIGSSS-departs COFUND programme was established in 2016 to continue BIGSSS’ supportive system. “The programme implements a lot of what we have learned as a graduate school over the past few years. First of all, for its length – it lasts 42 months rather than 36, which is a much more realistic time span for what you have to do in a social science PhD,” says Dr Christian Peters, Managing Director at BIGSSS. A second key feature is the international nature of BIGSSS-departs. PhD fellows in the programme come from 15 different countries and a mandatory stay abroad at one of the network partners is an essential part of the curriculum. A further important distinction is the structured nature of BIGSSS-departs, with PhD candidates receiving a monthly salary and regular supervision during the course of their research, while also having the opportunity to form supportive relationships with their peers.
This is different to earlier doctoral training in Germany, where young researchers typically worked in close proximity to their supervisor, often in quite small environments, without many people around for regular professional exchange. While ultimately a student is responsible for writing their own PhD, Dr Peters believes it’s still important that they have the opportunity to share problems, challenges and ideas. “That’s where structured programmes clearly have advantages over the traditional German way of pursuing a PhD,” he says. “The idea of a graduate school, and of a programme like BIGSSS-departs, is that you bring people together so that they share their working days and their experiences of the challenges of writing a PhD.”
Independent research
The wider goal at BIGSSS is to encourage independent research and open up new perspectives on the social sciences through the exchange of ideas and knowledge in political science, sociology and psychology. Research in these disciplines by nature involves an element of theoretical investigation, yet Dr Peters is keen to stress that this must be built on firm foundations. “We always want to look at the provenance of the social facts - but also at their manifestation out there in the field. We want our fellows to consider theories by looking at the empirical matter,” he explains. “In that respect, research methodologies play an essential role at BIGSSS.” A balance needs to be struck here between establishing a common research framework and
giving young researchers the academic freedom they need to establish their own independence and build their careers. The first semester in the programme is about trying to create this common ground, e.g. through a preparatory forum in social science methodology. “BIGSSS embraces diversity. There is so much out there to look at, so much need for differentiation. But we also need to establish a framework in order to function well as a graduate school. It is necessary to have coherence and to balance the diversity of individual research projects with finding an intellectual and conceptual environment where people can learn from each other,” outlines Dr Peters. “However, it’s not the case that only one approach is considered to be valid in BIGSSS.” While PhD fellows are not expected to have a concrete plan for their entire doctorate at
BIGSSS-departs fellows, faculty and staff at the graduate school’s annual summer retreat in May 2018.
Fellows presenting their research to representatives of the partner organisations at BIGSSS-departs networking event in February 2018.
the beginning of the course, they do need to have an interesting research question of empirical significance and a vision of how they will pursue that project. “What is it that you want to explain? How are you going to do that?” Once a student has narrowed down their area of research during the earlier part of the programme, they can then look to gather relevant data. “They go into the field, collect the data themselves or do secondary data analysis. They look at surveys and conduct interviews. They also go abroad to collaborate with experts in their field and experience a different scientific environment,” says Dr Peters. “Then they come back and write up in the third year. That’s the typical lifecycle.”
BIGSSS embraces diversity. There is so much out there to look at, so much need for differentiation. But we also need to establish a common framework in order to function well as a graduate school.

Inter-disciplinary research
The opportunity to spend time abroad at one of the 13 academic and non-academic partner organisations in BIGSSS-departs is an important aspect of the programme, encouraging fellows to consider alternative perspectives and share their findings with researchers from other disciplines. While students need to master the core elements of their own discipline, Dr Peters says they are encouraged to collaborate with researchers in other areas. “With our large and diverse faculty, we consider ourselves as a harbour of inter-disciplinarity,” he explains, admitting that there are some limits. “There is a healthy egoism in any research project and in my experience it’s consequently hard to implement hybrid approaches. When looking at the same phenomenon, a sociologist may address certain questions differently to a social psychologist for example,” he says. “But rather than melting away diversity by inflating the value of hybridity, we should train our transfer capacities. In my understanding, inter-disciplinarity is about researchers from other areas looking at your project and enabling a discussion, with learning effects on all sides. Also, it’s possible – and sometimes indispensable – to apply more than one research method or theoretical approach.” A fresh perspective on their project can open researchers’ eyes to new ideas on how to treat sources, enhancing the quality and originality of research. It also helps early stage researchers develop their research, analysis and communication skills, which may prove very valuable in their future careers, whether that’s in academia or elsewhere. “The labour market is in need of people with a social sciences background,” says Dr Peters. “Any institution, be it academic, public, profit or non-profit, needs conceptual and communicative skills that can be applied to human interaction. This is what you’re trained for as a social scientist. We’re used to making sense of complex situations and we can help organisations to work effectively.” While a doctorate sets students on the path towards a career in academia, competition for positions at universities is intense, and Dr Peters says many graduates look to take their career in a different direction. “BIGSSS is happy about our very low drop-out rates and the generally high level of job placement. After completing their PhD, many of our fellows go into the third sector, consulting, politics, or they go and work for foundations.”

In-house faculty and peers support BIGSSS PhD fellows in developing their projects in regular research colloquia.
BIGSSS Bremen International Graduate School of Social Sciences Project Objectives
BIGSSS-departs is a 42-month structured PhD programme which provides close supervision of dissertation work accompanied by a demand-tailored doctoral curriculum. BIGSSS-departs is a full-time PhD programme in which fellows commit themselves to their own dissertation projects and the full academic programme of the BIGSSS curriculum. BIGSSS-departs fellows pursue a freely chosen dissertation project in one of BIGSSS’ three thematic fields:
(A) Global governance and regional integration
(B) Welfare state, inequality and quality of life
(C) Changing lives in changing socio-cultural contexts
Funded by the Excellence Initiative of the German Federal and State Governments.
Managing Director, Dr Christian Peters Bremen International Graduate School of Social Sciences (BIGSSS) University of Bremen P.O. Box 33 04 40 D – 28334 Bremen Germany T: +49 (0)421 218 66400 E: firstname.lastname@example.org W: https://www.bigsss-bremen.de
Dr Christian Peters
Dr Christian Peters has been Managing Director of the Bremen International Graduate School of Social Sciences since 2013. After graduation with a doctorat cotutelle at l’Ecole Pratique des Hautes Etudes (Paris/Sorbonne) and TU Dresden, he started his career as a research manager at the ZEIT Foundation Ebelin and Gerd Bucerius in Hamburg. He has research interests in political culture, populism studies and the relationship of religion and power in post-secular societies.
Europe, Latin America and Africa, the connections of the Arts and Society Artistic expression has long been a means to express ideas and opinions about wider society, whether it be through text, images, architecture, or a wide range of other means. Researchers at the Leiden University Centre for the Arts in Society are exploring the interaction between the arts and society, getting to the roots of cultural production, as Professor Anthonya Visser explains. The arts have long had a major influence on public debate across different societal domains, providing a means of expression that has helped to shape people’s views on wider society. Based at the Leiden University Centre for the Arts in Society (LUCAS), Professor Anthonya Visser is the coordinator of the Arts in Society project, a graduate programme encouraging students to look more deeply into the relationship between the arts and society. “We applied for grants for four fully-funded PhD students, each with their own research project within the Arts in Society programme,” she outlines. This work centers on exploring the interaction between the arts and society in four main domains: science and technology, law and justice, politics, and religion. “Religion as a societal domain is very relevant for art, where art is expressed in a societally relevant form. That has been the case for centuries,” explains Professor Visser. “We’re also looking at science and technology, which is a relatively new domain, where art holds relevance in a societal sense. Law and justice is another important domain,
while we’re also researching the ways in which art becomes relevant in politics.” This research spans a wide time-period, from classical antiquity right through to the present day, although the forms of artistic expression in use have of course changed
significantly over time, as new methods have been developed and social attitudes have shifted. The scope of the research is correspondingly broad, with four separate PhD projects within the programme looking at the relationship between the arts and society with respect to each of the four specific societal domains, research which is organised into three time clusters. “We have antiquity, then we have medieval and early modern, and then we have modern and contemporary,” says Professor Visser. One project in the programme is focused on medieval religious culture, with researchers examining both texts and images to build a deeper picture of the interaction between arts and society at the time. “We are looking at documents, texts from the medieval period, along with visual elements,” continues Professor Visser. “This is art in which the religious domain is used as a means of expression, to come to an expression or social practice that holds broader relevance in wider society.”

We do not, in the first place, make a moral or political judgment, but rather analyse how crossings of borders between domains are shaped and what is being done as a social practice, as an artistic practice. So we look at modes of artistic invention and innovation.
Science and technology
Research into the science and technology domain is centered more on the modern age, with one project in this area looking at bioart, a form of artistic practice in which living materials are used to produce artistic works. Researchers are investigating how public debate helps to shape this art. “One of our colleagues at LUCAS does experiments on the intersection between biology, biological experiments, and art. This kind of art can affect public opinion on major issues, for example debates around bio-ethics,” says Professor Visser. Research in the law and justice domain meanwhile is more language-based, with researchers analysing key texts to explore the relationship between arts and society. The courtroom has often been compared to a theatre by legal scholars, an idea which is being explored further within the Arts in Society programme, alongside probing more deeply into the concepts of law and justice. “This group is looking at the literature and analysing specific literary texts,” continues Professor Visser. “It is possible to look at
literary texts and to see how discussions on law and justice are sometimes intermingled, and sometimes influence each other.” This research can reveal deeper insights into the general perception of precisely what these two concepts mean. While the concepts of law and justice are closely related, they do not mean the same thing, and researchers in the project now aim to delve deeper into this area and shed new light. “We have a group of people working here together to critique literary texts, including people from the legal and juridical faculties. They are working together to analyse such texts,” says Professor Visser. The fourth societal domain being addressed in the project is politics; Professor Visser describes this as being perhaps the most traditional domain, with researchers looking at politics in quite a broad sense, as a collective process of decision-making. “All art forms in the public domain, so not only visual art forms or architecture for example, but also literature, have a political dimension. So everything that is art in the public domain can be looked upon from a
political perspective,” she says. “When you have art as a social practice, it almost always also becomes a political practice.” There are also cases where art productions cross the borders between these individual domains, as in the case of the Russian feminist punk rock group Pussy Riot, who have staged unauthorised performances in a number of public locations since they were formed in 2011. In particular, the group gained a lot of attention in the West when they angered the Russian religious and political authorities by posting a video of a performance inside a Moscow church in 2012, for which several members were later jailed. “Pussy Riot made use of religious forms to express themselves politically,” says Professor Visser. The aim in this area of research is not to make a judgment on the rights or wrongs of the Pussy Riot case, but rather to look more deeply into the mode and method of artistic expression. “We do not, in the first place, make a moral or political judgment, but rather analyse how those crossings of borders between domains are shaped and what is being done as a social
Arts in Society Leiden University Centre for the Arts in Society (LUCAS) Project Objectives
The LUCAS Arts in Society program hosts four PhD projects exploring key questions in fields of intense interaction between cultural production and social practice: (1) religion, (2) science & technology, (3) law & justice and (4) politics. The aim of the programme is to articulate the research profile of LUCAS in terms of the societal relevance of the Arts.
The project is funded by NWO (The Netherlands Organisation for Scientific Research) with Euro 800.00.
• Rijksmuseum Amsterdam • Museum Beelden aan Zee Den Haag • Dr. P.A. Tiele-Stichting Den Haag • Scaliger Institute Leiden University
Project Coordinator, Professor Anthonya Visser Leiden University PO Box 9500 2300 RA Leiden T: +31 (0)71 527 2071 E: email@example.com W: https://www.universiteitleiden.nl/en/ humanities/centre-for-the-arts-in-society Blog: http://www.leidenartsinsocietyblog.nl
Professor Anthonya Visser
Anthonya Visser is Professor of Modern European, in particular German, Literature and Culture at the University of Leiden and Academic Director of the Leiden University Centre for the Arts in Society (LUCAS). She is the author of Körper und Intertextualität. Strategien des kulturellen Gedächtnisses (Böhlau 2012) and of many articles and papers on recent German literature and issues of cultural identity after the ‘Wende’.
practice, as an artistic practice,” continues Professor Visser. “So we look at modes of artistic invention and innovation.”
Inter-disciplinary research

This research is very much inter-disciplinary in nature, bringing together specialists in different fields, including literary history and theory, art history, and film and media studies. These broad foundations give researchers at LUCAS a firmer basis to investigate the relationship between arts and society, and contribute to the literature and wider debate. “The Arts in Society graduate programme will deliver four dissertations in the short-term, and an educational programme for PhD candidates,” says Professor Visser. Beyond the programme’s initial funding term, Professor Visser hopes to continue her research in this area, laying strong foundations for continued investigation at LUCAS. “We are currently around two-thirds of the way through the project. We will evaluate it over the next year or so, then we will see how we will proceed,” she continues. “The idea of art in society will remain central to the research agenda at LUCAS. The precise plans for the future will depend to a large degree on the outcome of the evaluation that we will carry out next year.”
Woman praying to Christ in the Van Hooff prayer book, University Library Vrije Universiteit Amsterdam, XV.05502, f. 133v, Flanders, ca. 1520.
Seeing the light of Solar Energy Conversion

Photosynthesis is essential to life on Earth, and now researchers in the PS3 project are drawing inspiration from the process to develop a solar energy conversion system. We spoke to Dr Dror Noy about the project’s work in designing protein cofactor complexes with photosystem functionality, which could point the way towards new bioreactors for fuel production.

The process of photosynthesis is responsible for the energy sources that we all rely on in our daily lives, enabling the conversion of light energy into chemical energy. Photosynthesis is the best characterised and understood of all the biological processes at the molecular level, as it can be easily triggered with light. “Because photosynthesis is a light-dependent process, the molecules that are doing the work have colour. When they do their work they change their colour, which can be monitored,” explains Dr Dror Noy. Based at the Migal Galilee Research Institute in Israel, Dr Noy is the Principal Investigator of the PS3 project, an initiative which aims to develop a light energy conversion system, drawing inspiration from the initial part of the photosynthesis process in natural systems. “We focus on the first, preliminary steps in photosynthesis in the PS3 project – these are the absorption of light, and the conversion of this light into useful chemical potential,” he outlines. A lot of information is available about this part of the process, including information relating to the molecular structure, geometry and organisation of photosynthetic complexes. These complexes make up a significant proportion of the cell membranes of photosynthetic organisms, giving researchers solid foundations on which to investigate photosynthesis. “Since plenty of biological samples are available,
we can run all kinds of biochemical and structural characterisations,” says Dr Noy. The structure and properties of these complexes have been well characterised; now Dr Noy and his colleagues in the project aim to implement what they’ve learned in the development of a new light energy conversion system. “Given what we know about photosynthesis, about how it is carried out in biology, we now aim to generate our own protein-pigment complexes that will perform similar functions,” he says.
Light energy conversion

This is a technically challenging task, with researchers looking to produce a fully functional light energy conversion system, built on a detailed understanding of the underlying processes involved in energy and electron transfer. Within the project, a key part of the role of Dr Noy and his group is to make new proteins, based on the rules of how proteins are created. “In my group we specialise in photosynthetic complexes, but also in protein design and the preparation of artificial complexes,” he continues. “Proteins are polymers of amino-acids. Their three-dimensional structure, and most importantly their functionality derived from this structure, is actually determined by how the amino-acids are ordered within this polymer chain.” The key challenge here is to come up with the sequences of amino-acids that will lead to the right structure, which in turn will give researchers the desired functionality. This is a very complex, technically demanding problem. “There are 20 different naturally occurring amino-acids, and a protein chain is typically a sequence of a dozen to a few hundred of them connected in a row, so there are an enormous number of potential combinations. We’re trying to get this protein structure right, with the right sequence of amino-acids,” says Dr Noy. The project also includes a group of computational chemists led by Professor Vikas Nanda, based at Rutgers University in the US, whose expertise helps in identifying the right sequences of amino-acids.

The design of a novel minimal photosystem is the main goal of the PS3 project. For this, the natural photosystems are the source of inspiration. All photosynthetic organisms use only two types of photosystems, namely the type II quinone, or type I iron-sulfur reducing photosystems. Anoxygenic photosynthetic bacteria use either a type I, or a type II photosystem, whereas cyanobacteria, algae, and plants combine both types into a photosynthetic apparatus capable of water reduction and oxygen evolution. All photosystems reside in the membranes and are coupled to proton pumps for driving ATP synthesis. By using computational protein design tools, novel protein-pigment complexes are designed and constructed. These are combined to form PS3, a water-soluble protein-pigment complex that captures the essence of photosystem functionality – light-driven electron transport from external electron donors to electron acceptors.

“Professor Nanda’s group are experts in computational protein design. We use computational algorithms to come up with the sequences we need in order to make our complexes. We code these protein sequences into DNA, then these codes are introduced into E. coli bacteria to drive production of the desired protein,” outlines Dr Noy. This provides researchers with a protein, but this is just the first step. An extra challenge when dealing with photosynthetic complexes is that you also need to assemble the proteins with the components that actually do the photochemistry. “Some components absorb and emit light – so they transport energy – and some components make charge separation – so they transport electrons. By these two actions, namely energy and electron transport, light is eventually converted into electric potential,” explains Dr Noy. Plants use chlorophyll molecules for both actions; what determines the specific role of each chlorophyll is its relative position with respect to other pigments and the nearby protein environment that tunes its properties.

Researchers aim to develop protein-cofactor complexes that perform in a similar way, inspired by photosynthesis but not necessarily replicating it in every respect. “We aim to develop a module that can do very much the same work,” says Dr Noy. “One major difference with biological systems relates to the membrane, a set of structures of lipids, which are very hydrophobic; the properties of this membrane are very different to those of water. It is where the natural system is assembled and located,” explains Dr Noy. The reason that everything is in a membrane is that, in the biological system, the membrane is required to generate adenosine triphosphate (ATP), a transient molecule for the storage of energy. The key in PS3, however, is to take the system out of the membrane, as the interest here is not ATP, but rather a different energy storage molecule. “Actually generating this molecule is beyond the scope of this project, but if we can generate an electro-chemical driving force that we can interface with enzymes that can do very efficient biochemistry, then that could help us generate energy-rich molecules, for example,” says Dr Noy. The concept is relatively simple, and Dr Noy says it holds clear potential in terms of fuel production. “In a typical system, we will effectively shine light on proteins in solution. In the same test tube we will also have an enzyme that will use some of the transferred electrons to do its redox
PS3 An artificial water-soluble photosystem by protein design
The PS3 project aims at producing a fully functional light energy conversion system that is inspired by, but does not necessarily mimic, the fundamental solar energy conversion unit of natural photosynthesis – the photosystem. This formidable challenge is addressed by implementing our thorough understanding of biological energy and electron transfer processes, and the growing capabilities of computational protein design.
The PS3 project is funded by an ERC Consolidator Grant.
• Migal – Galilee Research Institute, Israel • The Center for Advanced Biotechnology and Medicine (CABM), Rutgers University, USA
Design of a water-soluble chlorophyll-binding protein analogous to natural light harvesting complexes. The design is based on a conserved structural motif found in PSI and PSII. The natural motif is a transmembrane protein, thus it has a mostly hydrophobic surface (shown in pink). Computational protein design converted the outer surface into a hydrophilic surface (shown in turquoise).
chemistry,” he explains. “In this way we can use highly sophisticated catalytic sites of the enzymes, and generate molecules that can be used as fuel.” A prime example is the very potent enzymes that can reduce the protons in water to make hydrogen. If hydrogen production can be coupled to the generation of electric potential through harnessing solar energy, that could provide a means of storing energy. “These processes may be the target of the photosystems produced in the PS3 project,” says Dr Noy. This research could hold important implications for future energy provision, opening up new possibilities in solar energy conversion and light-driven fuel production. “If you can make these proteins, possibly by using certain kinds of genetic engineering methods, then we can think about a hybrid system, a system that uses both biological and synthetic material. This could mean some kind of bioreactor, a means to generate some useful fuel,” outlines Dr Noy. “That is a long way beyond the scope of the PS3 project however, and there are many challenges to deal with first, including
ensuring that the protein complexes are stable enough to last.” The main practical challenge for Dr Noy and his colleagues in terms of the project’s goals at this point is working out how to control the assembly of these protein-pigment complexes, and while a lot of progress has been made in the field of protein design, there is still more to learn. The task is complicated further by the need to introduce the pigments into the proteins. “This is like drug design in reverse. Instead of building a molecule to fit into a protein binding site, you build a binding site to fit a molecule,” says Dr Noy. An additional challenge is that the pigments themselves are not necessarily soluble in water, as they come from the photosynthetic membranes. “So we need to figure out ways to make sure that these pigments are soluble in water,” continues Dr Noy. “These are the kinds of challenges that we deal with on a daily basis. We are learning a lot of new techniques and methods that can help us build new functional proteins, not only with respect to this project, but also others that we are involved with.”
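The scale of the sequence-design problem Dr Noy describes can be made concrete with a little arithmetic: with 20 amino-acids and chains of a dozen to a few hundred residues, the number of possible sequences quickly dwarfs anything that could be searched exhaustively. A minimal sketch in Python (illustrative only, not project code):

```python
# Sequence-space arithmetic for protein design: with 20 naturally occurring
# amino-acids, a chain of n residues has 20**n possible sequences.

def sequence_space(n_residues: int, alphabet_size: int = 20) -> int:
    """Number of distinct amino-acid sequences of a given chain length."""
    return alphabet_size ** n_residues

# A dozen to a few hundred residues, as in the article:
for n in (12, 100, 300):
    magnitude = len(str(sequence_space(n))) - 1
    print(f"{n:>3} residues: ~10^{magnitude} possible sequences")
# Even a 100-residue chain has ~10^130 sequences - far more than the
# ~10^80 atoms estimated in the observable universe, which is why
# computational design, rather than brute-force search, is needed.
```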
Project Coordinator, Dr Dror Noy Bioenergetics and Protein Design laboratory MIGAL - Galilee Research Institute MIGAL Building, Southern Industrial Zone, Tarshish st. Kiryat Shmona P.O.B. 831 Kiryat Shmona 11016 Israel T: +972-4-7700508 E: firstname.lastname@example.org W: http://www.migal.org.il/Dror-NoyBioenergetics-and-Protein-Design-laboratory Dr Dror Noy
Dr Dror Noy is currently the head of the biotechnology department and the laboratory for bioenergetics and protein design at Migal-Galilee Research Institute. In 2000, after obtaining his PhD in chemistry from the Weizmann Institute of Science, he became a post-doctoral fellow at the University of Pennsylvania. From 2007 he was a research group leader in the Plant Science department at the Weizmann Institute, and he moved to his current position at Migal in 2013.
Solar Energy: The future of energy is bright

Of all the renewables, solar power is the biggest hope for a future seismic shift in energy. We are seeing large solar farms feeding grids, household solar panel installations are trending, and solar frequently makes the news for breakthrough milestones. Solar power is growing in market share and being adopted around the world. Whilst many consumers choose solar energy for environmental reasons, it is the practical efficiency and lower costs that can drive wider uptake, and that’s where the power of research and innovation will come into play. By Richard Forsyth
The sun is a fusion reactor that fuses 620 million metric tons of hydrogen every second in its core and, unlike oil, it has a lifespan of around 5 billion years before it runs out. The days of finite carbon fuels are numbered; the concept of finite resources and increasing consumption is the talking point and chief concern of the era. Meanwhile, the solar energy reaching the Earth each day is calculated to be enough to power it for 27 years. It’s our best hope to replace carbon-based fuels.
Costing the sun

It was in 1876 that William Grylls Adams and Richard Day discovered that selenium produces electricity when exposed to light – the first scientific seed of solar power. It was only relatively recently, in the late 1950s, that solar power became commercially available. Solar panels absorb photons – particles of light – and transform them into electrical power by knocking electrons free from atoms. The solar industries have witnessed exceptional growth and have the potential to shift the balance of our entire global power supply in the future.
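The photon-to-electron step described above can be illustrated with a short calculation: a photon can free an electron only if its energy exceeds the material’s band gap. A hedged sketch, where the constants are standard physical values and the 1.1 eV figure is the commonly quoted band gap of crystalline silicon (neither comes from the article):

```python
# Photon energy versus a semiconductor band gap: only photons with energy
# above the gap can knock an electron free and contribute to the current.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt
SILICON_GAP_EV = 1.1  # commonly quoted band gap of crystalline silicon

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, wl in [("blue", 450), ("red", 700), ("infrared", 1300)]:
    e = photon_energy_ev(wl)
    print(f"{name} ({wl} nm): {e:.2f} eV, above silicon's gap: {e > SILICON_GAP_EV}")
```

The infrared photon falls below the gap, which is one way of seeing why silicon only absorbs light in a limited range of frequencies, as discussed later in the article.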
As with any technology shift on a global scale, in the end it boils down to economics: whether it’s affordable and efficient as a daily solution. The price of a solar panel in the 1970s was well over 200 times higher than it is today. We are at a stage where choosing solar can be seen as a practical option for home owners and also for governments, but efficiency and performance are key. This journey of improving conversion efficiency is ongoing. The success of energy as a commodity will always depend on the economics of its daily viability, and solar power is improving consistently. This is why innovation has been, and will continue to be, the key to its wider adoption. Photovoltaic (PV) technology is evolving and there have been recent breakthroughs.
PV Innovations that make the difference

Much of the innovation that is powering the solar industry’s success can be seen in materials research. For example, there is great potential in perovskites, a class of materials that exhibit superconductivity and magnetoresistance and are easily synthesised – considered ideal for low-cost, effective photovoltaics.
Perovskites’ structural compositions can be fine-tuned to create material which can absorb almost any frequency of light; silicon, in comparison, only absorbs light in a limited range of frequencies. Perovskite thin-film solar cells are lightweight and flexible. They can even be printed directly on materials like glass or metal. This gives rise to exciting possibilities in the construction of buildings in the future. One idea is to fit all buildings with facades that function as solar collectors, making buildings into mini power stations that store and release energy from walls and exteriors. Such buildings could be self-reliant in providing heat and light the whole year round.

What’s exciting about this idea is that the higher the performance that localised solar power can achieve, the more self-reliant and off-grid the applications can become. This is particularly useful for rural areas, but it could be applied to urban areas too. Take, for example, the street lights in San Diego, which are charged by the sun in the daytime to power light-emitting diodes (LEDs) during the night to keep the streets lit. Combined with smart sensors, it’s been proposed these streetlights will be able to direct drivers to parking spaces. A project in Finland by VTT Technical Research Centre is creating prototypes of solar-powered trees, with a view to future solar forests or perhaps trees for your back yard. They have 3D-printed trunks made of biomaterials, whilst the leaves are basic solar cell power converters. Another innovation does away with panels altogether and instead uses paint, which comprises polymers dissolved in a solvent that can be applied to any surface. Every aspect of a building that faces the sun is being scrutinised for possibilities for absorbing solar energy. Adapting windows to harvest light is a focus of research.
In the US, the National Renewable Energy Laboratory (NREL) has created window technology where a household window transforms from clear to tinted in sunlight, creating electricity during the process. They used advanced materials including the aforementioned perovskites and single-walled carbon nanotubes. These kinds of innovation give us a glimpse of the possibilities for solar harvesting in new and exciting ways. A point worth including here is that, beyond the technical innovation around improving efficiency, aesthetics plays a part in consumer adoption. The fact is that large solar panels, resembling great mirrors on rooftops, are not everyone’s idea of homely, or of blending into the neighbourhood. A recent innovation focus is on making solar power an invisible energy collection device, so your house will not look out of place in a street. This is about blending in. A well-publicised example of this kind of innovation can be seen in a technology devised by the company Tesla, which is rolling out solar tiles that look like conventional roof tiles – called building-integrated photovoltaics (BIPV). Human factors such as this are an important consideration for technology adoption.
The power from harvesting sunshine

At present solar PV does not produce the same amount of electricity over a year as coal-fired power, and we still use about 100 times more energy in the form of oil and 90 times as much in the form of coal. Despite this, solar power is tipped by many as the renewable that could surpass coal and rival oil in just a few decades. To demonstrate the growth trend: according to Data Bridge market research, the global photovoltaic glass market is projected to grow at a CAGR of 33.5% during the forecast period of 2017 to 2024, from USD 4.39 billion in 2016. The year 2017 was an historic year for solar power. More solar PV capacity was installed globally than any other power generation technology, according to the Global Market Outlook report 2018-2022. The report goes on to say that ‘Solar alone saw more new capacity deployed than fossil fuel and nuclear combined. Solar added almost twice as much capacity as its renewable peer, wind power...with a total global solar power capacity of over 400GW in 2017 after solar exceeded the 300GW mark in 2016 and the 200GW level in 2015.’ Whilst rooftop PV is growing, most of the growth is down to ground-based PV. Solar farms between 1 and 100 acres are increasingly visible in the rural areas of many countries. In Europe this is linked in part to the EU’s targets for renewable energy. The latest political agreement, put forward by the Commission, the Parliament and the Council on 14 June 2018, includes a binding renewable energy target for the EU for 2030 of 32%, with a clause for an upwards revision by 2023. Many EU countries have not only hit the EU’s 2020 renewable consumption target but also surpassed ambitious country targets, such as Sweden, whose renewable consumption is closing on 50% of total energy consumption. At the other end of the scale, countries such as Latvia are struggling to commit to very low country targets, offsetting the
EU’s total achievement. However, it has been suggested that renewables could provide a third of Europe’s electricity in 2018. As the European Power Sector in 2017 report by Agora Energiewende points out, this all comes at a time when we are seeing additional power demand in many sectors due to the digital revolution – for instance, video streaming, electric vehicles (800,000 vehicles by the end of 2017) and Bitcoin mining – combined with increasing population expansion and subsequent reliance on electricity. As our commitment to renewable energy bolsters, the demand for electricity will continue to rise.
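The compound growth figure quoted above is easy to sanity-check: a 33.5% CAGR applied to the USD 4.39 billion 2016 market over the eight years to 2024. A minimal sketch of that arithmetic:

```python
# Compound annual growth rate (CAGR) projection: the market figure quoted
# in the text (USD 4.39 billion in 2016, 33.5% CAGR) compounded to 2024.

def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

projected_2024 = project(4.39, 0.335, 2024 - 2016)
print(f"Projected 2024 photovoltaic glass market: ~USD {projected_2024:.1f} billion")
# A 33.5% CAGR roughly multiplies the market tenfold over eight years.
```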
Local schemes leading to national transformations

Whilst public adoption is key, it also takes a concerted effort from governing authorities to champion renewables like solar. In the UK there is a movement called UK100, in which a network of local authorities have committed to a transition to 100% renewable energy by 2050. This network connects local government to national government and consumers, with the sole aim of transforming the UK’s energy resource to renewables. Similarly, around the world, around 400 cities are committed to ditching fossil fuels by 2050. It will be initiatives like this that can provide a platform connecting local action for change in a way that drives national transformations. Another example, seen in the UK, is a campaign to encourage schools to use solar energy, proposed by the Friends of the Earth organisation in their Run on Sun campaign. The chief message from the campaign was that cash-strapped schools can benefit from savings of £8,000 a year by generating their own independent source of electricity. The school gets paid for every unit of electricity the solar panels produce, whether it is used by the school or sent to the national grid (the Feed-in Tariff). Such a scheme has the knock-on educational effect of influencing the next generations about the advantages of using solar power. It’s true that the upfront sums for installation of solar panels can seem daunting, in the tens of thousands, yet with fundraising and help from local education authorities and councils it seems more achievable. Of course, it’s important that governments keep supporting solar, rather than imposing new taxes on it, because without that support this game-changing solution can be halted in its tracks.
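The school example above invites some rough payback arithmetic. The £8,000 annual saving is the campaign's figure; the installation cost below is an illustrative assumption (the article says only "tens of thousands"):

```python
# Rough payback period for a school solar installation. The £8,000/year
# saving is from the article; the £30,000 installation cost is an
# illustrative assumption, not a figure from the campaign.

def payback_years(install_cost: float, annual_saving: float) -> float:
    """Years for cumulative savings to cover the upfront installation cost."""
    return install_cost / annual_saving

print(payback_years(30_000, 8_000))  # under 4 years at these assumed figures
```

This ignores the Feed-in Tariff income and panel degradation, so it is only a back-of-the-envelope illustration of why fundraising can make the upfront sum worthwhile.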
Is the sun always shining?

It goes without saying that sunny weather and heatwaves really help solar perform. The recent summer heatwave across Britain, France, Germany and northwest Europe has shown how effective solar is in good conditions. In Britain and Germany, it helped break solar power generation records. Ironically, in locations where climate change may be developing this kind of extreme weather, solar will be all the more suited to high performance. Ultimately, solar power is a renewable with great promise and one that will be increasingly relevant to solving our energy issues as we move away from carbon fuels. Innovation for greater performance, greater economic viability and greater general appeal will make solar a power to contend with.
New light on solar cells

Solar cells are an increasingly important element of overall energy provision, yet there is still room to improve their efficiency and performance. Researchers in the Chromtisol project are utilising titanium dioxide nanotubes to develop a new physical concept of a solar cell which could help improve solar-to-electricity conversion efficiency, as Dr Jan M. Macak explains.

The development of renewable sources of energy is widely recognised as a research priority, with scientists looking to efficiently harness solar power to meet our energy needs. Layers of ordered nanotubular titanium dioxide (TiO2), shown illustratively on scanning electron microscope micrographs, offer a lot of potential in this respect, says Dr Jan M. Macak. “It has become clear that they are unique: they possess a large surface area in a small volume, are very stable upon irradiation and can be produced by a simple technology. In combination with suitable chromophores, they can very efficiently absorb both sunlight and artificial light and convert this light into electrons.” This is a topic Dr Macak is exploring further in the Chromtisol project, an EU-backed initiative based at the University of Pardubice in the Czech Republic, which aims to develop a new, more efficient physical concept of a solar cell. “If you want to make a good solar cell, you have to make sure that it absorbs as much light as possible, and reflects as little as possible,” he explains.

TiO2 nanotubes
This topic is central to the project’s overall agenda, with researchers aiming to utilise TiO2 nanotube layers in the development of a new type of solar cell. The nanotube layers act as a functional scaffold, and provide a relatively large surface area to the cell. “This is very important, because the larger the surface area, the better for the solar cell,” explains Dr Macak. When light is shone onto a regular flat surface, a certain proportion is reflected; by increasing the surface area of the cell and adapting the morphology, Dr Macak aims to improve absorption efficiency. “The first major challenge in this project is to make sure that we absorb as much light as possible. Currently, as we recently showed in two publications in the journal Nanoscale, the absorption rate is typically somewhere between 80 and 90 percent, which means that approximately 10 percent of light is reflected and not used,” he outlines.
Simplified sketch of the developed solar cell illustrating the absorption of light and consequent generation of electrons within TiO2 nanotube layers coated with a suitable chromophore. The inset graph shows the increase in photon-to-electron conversion efficiency (IPCE) of the nanotube layer with an added chromophore.
The type of light that is absorbed is also an important parameter in this respect. While the TiO2 nanotubes perform effectively in absorbing UV light, it’s also important to absorb visible and infrared light, an issue Dr Macak is working to address. “I have put some additional materials called chromophores in the solar cell. In nature, chromophores absorb sunlight and make energy out of it for plants,” he outlines. These chromophores inside the nanotubes are designed to capture the ultraviolet, the visible and the infrared light. “The aim is to utilise, as efficiently as possible, the space inside the tubes,” continues Dr Macak. “Putting these chromophores inside nanotubes is not easy however, as the scale is so small. So the project is not just about
developing the solar cell, it’s also about finding the best strategy to put the correct type of chromophore inside the tubes.” A number of different strategies are available for this task. In one of the more complex approaches, researchers utilise a thin-film deposition technique called atomic layer deposition to coat the interior of the nanotubes. “It’s kind of like a large vacuum tool,” says Dr Macak. This technique has already been exploited by various researchers and industries to make thin functional coatings of different materials for different purposes; Dr Macak says it holds rich potential in terms of developing a high-quality solar cell. “The costs would probably be a little bit higher than other cells, but the light management
and the efficiencies have the potential to be really very high,” he outlines. “The chromophores in the tubes, and the solar cells in general, are also able to capture photons from indirect light. This means things that are in shadow - for example, they could be placed on the sides of cars or houses.” This is very different to the conventional approach of putting large silicon solar cells in fields or deserts to absorb direct sunlight. With the ability to capture photons even from indirect light, the Chromtisol solar cell could potentially be applied in a wider range of locations, not limited to those which often experience high amounts of direct sunlight. “Most silicon solar cell
installations are sun-facing, to directly absorb sunlight. But there are also places which do not experience so much sun,” points out Dr Macak. The intention is to produce a final prototype of the solar cell at some point over the next year or so, beyond which Dr Macak is also considering the possibility of scaling up the technology. “We’ll look at scaling it up to the larger sizes needed for further experiments and testing,” he outlines. Research is still at a relatively early stage, with Dr Macak and his colleagues still working to improve the core methods and techniques, yet he is fully aware of the wider commercial potential of the technology. Economic factors are a major consideration in this respect, which will affect the technology’s range of potential applications. “This would be rather a special solar cell, and so would be used for certain applications, like certain space applications,” says Dr Macak. While commercialisation is not on the immediate agenda, Dr Macak says a lot has been achieved in the project already, for example in refining and fine-tuning different tools and methodologies. “Through our work in this area, we are pushing the limits of atomic layer deposition, which is a prominent topic in research,” he stresses. The project’s research has also led to the development of other tools and methods,
including a methodology on how to put the chromophores inside nanotubes, which Dr Macak believes represents an important step forward. “It’s really a very small area within a structure, and placing the chromophores is a big challenge,” he says. This is quite an exploratory, challenging area of research, and while Dr Macak and his colleagues are keen to translate their work into tangible benefits, it’s also important to note the role of the project in paving the way for further development in future. “We are trying to use these materials in quite an interesting way, and this will be valuable for researchers in future,” he says.
Scanning electron micrographs showing the top view (left) and the cross-sectional view (right) of the TiO2 nanotube layer.
CHROMTISOL Towards New Generation of Solid-State Photovoltaic Cell: Harvesting Nanotubular Titania and Hybrid Chromophores Project Objectives
A lot of attention in research has been centred on technologies that could boost the solar-to-electricity conversion efficiency and power recently unpowerable devices and objects. The focus of research in the Chromtisol project is a new physical concept of a solar cell that explores extremely promising materials, yet unseen and unexplored in a joint device, whose combination may solve drawbacks commonly associated with solar cells, in particular carrier recombination and narrow light absorption. The project aims to reach important scientific findings in highly interdisciplinary fields. It is extremely challenging and risky, yet based on feasible ideas and steps that will result in exciting achievements.
ERC Starting Grant / Total cost: EUR 1 644 380 / EU contribution: EUR 1 644 380
Dr. Jan M. Macak, Senior Scientist Center of Materials and Nanotechnologies. Faculty of Chemical Technology University of Pardubice Nam. Cs. Legii 565 530 02 Pardubice Czech Republic T: +420 466 037 401 E: email@example.com W: https://cordis.europa.eu/project/rcn/193604_en.html M. Krbal, J. Prikryl, R. Zazpe, H. Sopha, J.M. Macak, CdS-coated TiO2 nanotube layers: downscaling tube diameter towards efficient heterostructured photoelectrochemical conversion, Nanoscale. 9 (2017) 7755–7759. doi:10.1039/C7NR02841E R. Zazpe, H. Sopha, J. Prikryl, M. Krbal, J. Mistrik, F. Dvorak, L. Hromadko, J.M. Macak, 1D conical nanotubular TiO2 / CdS heterostructure with superior photon-to-electron conversion, Nanoscale, in press, doi:10.1039/C8NR02418A
Dr. Jan M. Macak
Jan M. Macák is a Senior Researcher at the Center for Materials and Nanotechnologies, University of Pardubice. His main research focus is on materials science, with an emphasis on nanostructured materials and their applications. He is also interested in thin film characterization, self-organisation phenomena and semiconductor chemistry.
Development that doesn’t cost the earth A lot of attention and investment is focused on improving living standards across the developing world, yet this could put pressure on efforts to limit the impact of climate change. We spoke to Dr Narasimha Rao about the work of the DecentLivingEnergy project in developing a body of knowledge to help balance the goal of eradicating poverty with climate change mitigation. The goal of eradicating poverty does not seem to be naturally compatible with efforts to combat climate change, as improving living standards typically leads to increased energy consumption, which in a fossil-dominated world increases carbon emissions. Researchers in the DecentLivingEnergy project aim to develop a body of knowledge to help balance these two objectives, looking to relate living standards more closely to energy consumption. “What basic minimum of energy use – and consequently of greenhouse gas emissions – is necessary for people to attain a certain standard of living?” asks Dr Narasimha Rao, the leader of the project. Rigorous methods are being used to assess the energy used in different activities associated with basic living standards, to which everybody is entitled. “This includes having a basic amount of nutrition, shelter, access to mobility, and to schools and hospitals, among other things,” outlines Dr Rao.
Basic needs and capabilities This work builds on the existing literature in philosophy and applied ethics which describes the basic needs and capabilities common to all of us, no matter what country we live in or what else we may want in life. Researchers
are reviewing that literature, and translating it into actual material requirements, which Dr Rao says is an innovative aspect of the project’s work. “The literature lends support to the idea that there’s a universal, irreducible minimum set of capabilities that people need in life. We try to translate those into actual goods and services. An obvious example is providing clean cooking fuels to the 2 billion people who use traditional biomass stoves. This saves lives, frees up women’s time, and has a negligible impact on climate change. Less obvious are things like refrigerators in the home, and equipment to heat and cool your home to a comfortable temperature range, which support good health,” he outlines. Dr Rao and his colleagues also consider the means for participating actively in society. “It’s pretty well established that people want social affiliation, they want knowledge about the world and to connect with people,” he continues. “So that could mean access to motorised mobility to get to a job, or a hospital, in a reasonable amount of time. Access to the internet is important in this day and age, through any device, whether it’s a television, computer screen, or a cell phone,” says Dr Rao. “From an energy perspective, the infrastructure required to provide cellphones is pretty trivial. This is a universal satisfier of a basic need that
does not have a significant impact on climate change,” he outlines. This analysis is primarily concerned with understanding the requirements per person in a given society, focusing specifically on India, Brazil, and South Africa. There are heterogeneities even within societies and lifestyle differences among the population, for example between urban and rural areas, that will lead to different energy requirements, an issue that Dr Rao and his colleagues take into account in their research. “We account for geographic differences, differences in social and human institutions, and differences in culture, such as dietary preferences,” he explains. For example, the average diet in Brazil is associated with a higher carbon footprint than the Indian diet, because of the higher level of meat consumption. “These cultural differences are important,” stresses Dr Rao. “We want to guide policy in certain respects, and encourage a shift towards reducing carbon emissions, but we recognise that there are some constraints.”
Synergies: eradicating poverty and mitigating climate change The goals of eradicating poverty and addressing climate change, the project finds, have important synergies. Researchers
The literature lends support to the idea that there’s a universal, irreducible set of capabilities that people need in life. We try to translate those into actual goods and services.
have identified opportunities to improve development outcomes and reduce emissions at the same time; Dr Rao points to one such example. “For instance, reducing the production of white rice in the developing world, which is associated with methane emissions, and is also of relatively low nutritional value. A further innovative aspect of the project is to look at micronutrition, such as vitamins and minerals that are essential for good health,” he says. Almost 90 percent of Indians are deficient in iron, for example; moving towards the production of alternative cereals instead of white rice could both improve iron levels and reduce methane emissions. “Public health authorities in India are aware of the importance of diversifying cereal production. With this project, we have demonstrated that the health benefits can be accompanied by environmental improvements,” says Dr Rao. A significant amount of energy is likely to be required over the coming years in India for building safe homes with adequate space, and for developing transportation systems in emerging urban areas. Here too, researchers have found that using advanced, local construction materials can reduce both the cost and emissions impact of new buildings in India, compared to the prevailing practice of fired bricks. While the focus in the project has been on three countries up to this point, Dr Rao believes that this research holds broader relevance across the developing world. “What the study is doing, for the first time ever, is to provide a transparent, replicable analytical framework to quantify the requirements for eradicating poverty in terms of energy. Policymakers can examine different development targets, low carbon policies and other country-specific conditions and see the combined effects on greenhouse gas emissions. This illuminates the relative impact of different
development goals, and the extent to which their achievement is going to affect our ability to achieve our most aggressive decarbonisation goals,” he says. The research also highlights the challenges in raising living standards and achieving the climate stabilisation goals set out in the 2015 Paris agreement. With rapid development in living standards, it will be a challenge to achieve ambitious climate stabilisation goals without relying to a large extent on risky technologies such as carbon capture and storage. “The good news is that if you reduce energy demand significantly, such as through the use of efficient appliances and reducing waste, you can support meeting decent living standards without relying on risky technologies,” explains Dr Rao. “There’s a trade-off, though, between achieving further lifestyle improvements, and avoiding reliance on these risky technologies. We can’t expect that in future people in developing countries will be satisfied with just basic living standards. We would expect to see energy use beyond the basic minimum.” This underlines the importance of identifying other lifestyle changes that enhance wellbeing and reduce emissions, such as using public transportation. The project’s research is primarily focused on quantifying energy needs, yet Dr Rao says their work already provides a platform for further work linking human wellbeing to the use of other resources, such as water, materials and minerals. We already understand the extent of the use of cement and steel in building homes, and the use of water for essential nutrition. Extending this work to other materials, such as petrochemicals or rare minerals, for example, would be a natural progression. “We need to understand to what extent we depend on various materials to meet basic living standards, given the environmental pressures their extraction can cause,” outlines Dr Rao.
Decent Living Energy Energy and emissions thresholds for providing decent living standards to all Project Objectives
This project investigates the relationship between poverty eradication and climate change. It defines the material requirements for providing decent living standards to all, and quantifies the energy and emissions needed to deliver these standards in three countries, India, Brazil and South Africa. It provides insights on low-carbon development.
The Decent Living Energy project is supported by the European Research Council Starting Grant, No. 637462.
• Dr Jihoon Min • Dr Alessio Mastrucci • Dr Shonali Pachauri • Dr Keywan Riahi • Luis Gustavo Tudeschini
Project Coordinator, Dr Narasimha D. Rao The International Institute for Applied Systems Analysis (IIASA) Schlossplatz 1 2361 LAXENBURG Austria T: +43 2236 807216 E: firstname.lastname@example.org W: www.decentlivingenergy.org Rao, ND., J. Min, R. DeFries, SH Ghosh, H. Valin, J. Fanzo. Healthy, affordable and climate-friendly diets in India. Global Environmental Change, 49: 154-165. Rao, ND, J. Min. Less global inequality can improve climate outcomes. WIREs Climate Change. doi:10.1002/wcc.513 Rao, ND, B.V Ruijven, K. Riahi, V. Bosetti. Improving poverty and inequality modeling in climate research. Nature Climate Change. 7(12), 857-862.
Dr Narasimha D. Rao
Dr Rao is a Project Leader at the International Institute for Applied Systems Analysis with research interests in energy, climate change and development. His background is in energy systems analysis, empirical economics and ethics. He has a PhD from Stanford University and Masters degrees from MIT.
Getting to the roots of forest sustainability Woody plants acquire nitrogen and other resources to maximise growth and enhance their reproductive fitness. Dr Judy Simon and her team aim to build a deeper understanding of the basic mechanisms behind plant interactions with regard to nitrogen acquisition and its internal allocation, research which could hold important implications for forest management. A lot of
attention in ecological research over the past few decades has been devoted to interactions between plants, yet the underlying processes that determine the competitive success of individual plants or species have largely been neglected. This is a topic central to the work of the project Woody PIRATS, an initiative based at the University of Konstanz in Germany. “The overall aim of this project is to gain a more detailed understanding of the basic processes and mechanisms underlying woody plant interactions, such as competition, facilitation, and/or avoidance of competition between plants and different players in forest ecosystems,” explains Dr Judy Simon, the project’s Principal Investigator. These are important issues in terms of plant health and the sustainability of forest ecosystems. The acquisition and allocation of resources, in particular nitrogen (N), plays a central role in maximising the growth and reproductive fitness of plants, especially long-living woody species. “In the daily competition for limited resources, different strategies have evolved in plants to enhance their chances of survival,” says Dr Simon (1). Her group is investigating how both inorganic and organic N are acquired from the soil, with a particular focus on organic N. “With the recent suggestion that tree growth is limited by nutrient availability, particularly N (2), it becomes even more important to understand how trees acquire N from the soil and allocate it at the whole plant level,” continues Dr Simon. A number of techniques from various fields are being utilised to study plant interactions in the rhizosphere, a region of soil which is of great interest with respect to plant interactions. To Dr Simon, the most interesting zone in the soil is where plant roots can be found and interact not only with each other, but also with soil microorganisms and mycorrhizal fungi. “Our research includes the rhizosphere, and in general those soil layers which are important for N cycling,” she outlines.
A recently developed in situ microdialysis technique (3) is being adapted within the project to quantify nitrogen fluxes
in the soil, which Dr Simon says has several advantages over conventional methods. “There is no disturbance of the natural system, and degradation of organic molecules over time can be excluded,” she explains.
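To give a feel for the kind of quantification involved, a diffusive nitrogen flux can be estimated from the N concentration measured in the collected dialysate, the perfusion flow rate and the membrane area. The simple mass-balance formula and the numbers below are illustrative assumptions, not the project’s actual protocol:

```python
def nitrogen_flux(conc_umol_per_l: float, flow_ul_per_min: float,
                  membrane_area_cm2: float) -> float:
    """Rough N flux estimate in pmol per cm^2 per minute.

    Assumes everything recovered in the dialysate diffused across the
    membrane: 1 umol/L equals 1 pmol/uL, so concentration multiplied by
    flow rate gives pmol collected per minute.
    """
    collected_pmol_per_min = conc_umol_per_l * flow_ul_per_min
    return collected_pmol_per_min / membrane_area_cm2

# Hypothetical example: 50 umol/L nitrate in the dialysate, 5 uL/min
# perfusion, 0.5 cm^2 of membrane area
print(nitrogen_flux(50.0, 5.0, 0.5))  # -> 500.0 pmol cm^-2 min^-1
```

In practice such estimates would be calibrated against standards, since recovery across the membrane is never complete; the sketch only shows why flow rate and membrane area both matter.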
Transparent soil The group is also adapting a system to visualise processes in the rhizosphere in 3D using ‘transparent soil’ (4). In this system, ‘transparent soil’ – a transparent substrate consisting of a matrix of solid particles with a pore network containing liquid and air – is used to help researchers gain deeper insights into how plants compete for N in the rhizosphere.

The Plant Interactions Ecophysiology Group.

“This
substrate, used in combination with cutting edge 3D live microscopy systems, will provide valuable new information on living plants and soil organisms, in particular the effect of the physical heterogeneity of the growth substrate compared to the commonly used phytagel,” outlines Dr Simon. This approach enables researchers to look more closely at complex processes such as avoidance of competition, or the exploitation of microsites. With transparent soil, Dr Simon and her colleagues can monitor root-root interactions live, in situ, in 3D, and so study them in greater depth. “It allows us to identify species responses to the presence and absence of nitrogen, and/or to other tree species in the rhizosphere,” she explains. “It will provide novel insights into how tree species communicate to exploit nitrogen sources more efficiently and identify micro-niches exploited by tree species that allow better nitrogen uptake.” The distribution of N across different regions is not uniform, however, and levels of availability do vary. Studies have shown that as the level of N supply in the soil varies, the
nature of plant interactions changes, a topic of great interest to Dr Simon and her research group. “In a recent experiment, we looked at the interactions between native and invasive tree seedlings and how the response to different competitors might shift with varying N supply,” she says. “Our surprising result is that one cannot generalise the responses of different species, but rather they strongly depend on competitor identity. Some species cope better or have advantages over others.”

Woody PIRATS Woody Plants – Interactions and Resource AllocaTion Strategies Project Objectives
The overall aim of this project is to gain more detailed knowledge on the basic processes and mechanisms underlying woody plant interactions, such as competition and facilitation.

Climate change A further important consideration is the impact of climate change on plants and interactions in the rhizosphere. The effects of higher temperatures, increased levels of atmospheric N deposition and longer periods of drought are all taken into account in the group’s research. “For example, plant N acquisition is strongly linked to soil water availability, thus with increasing periods of drought, it is to be expected that the availability of soil N pools to plants will be reduced,” outlines Dr Simon. The impact of climate change on N distribution in the soil depends on a variety of different factors that might also influence each other though, and Dr Simon says that with current knowledge it is difficult to draw wider conclusions in this area. “More research is still needed to study these interactions and especially their combined effects, which might even be stronger than the single impacts, on forest ecosystems,” she continues. This research holds important implications for sustainable forest management, helping to build a deeper picture of the functioning of economically and ecologically valuable forest ecosystems. Forests provide vital ecosystem services, and Dr Simon hopes the results of the group’s research will help to improve management and boost sustainability over the longer term. “For example, a better understanding of the competition for N between trees could contribute to a more efficient use of fertiliser in forest plantations,” she says. “Also, our biodiversity work on invasive species and how natives respond to their ‘occurrence on
the scene’ provides insights that can be used for conservation and restoration.” Research in this area is ongoing, and Dr Simon is keen to stress that she views plant interaction and ecosystem ecophysiology as the continued focus of attention for her group. The new approaches that are being established within the group will be applied in future projects. “Using transparent soil and microdialysis as new tools provides unique solutions to study root-root interactions,” stresses Dr Simon. While the focus of the Woody PIRATS project has been on plant-plant interactions, Dr Simon plans to broaden out the scope of research in the future. “I plan to extend our research to also include other players in the rhizosphere, such as free-living soil microorganisms and mycorrhizal fungi, and their influence on plant-plant interactions,” she says. “We are currently only at the beginning with this research and more work definitely needs to be done to really gain a fundamental understanding of plant interactions with regard to N cycling in forest ecosystem functioning, including the response to abiotic and biotic stressors.”
Capturing animal movement in the forest plots. February (left), May (right).
Woody PIRATS investigates tree-tree interactions as a fundamental process promoting the establishment of specific nitrogen acquisition and allocation strategies across species (and biomes) as well as altered responses of nitrogen metabolism to biotic and abiotic stressors at different levels: the whole plant, the community, and the ecosystem.
The Heisenberg Fellowship and the project “Consequences of competition and abiotic stress on the acquisition and internal allocation of nitrogen in temperate woody species” are both funded by the German Research Foundation (DFG) – total amount: c. 500,000€. Part of this project is the development of a new approach in which to use “transparent soil” to visualize processes in the rhizosphere. This project is funded by the VolkswagenStiftung – total amount: 108,000€.
PD Dr Judy Simon (PhD, University of Melbourne, Australia) Heisenberg Fellow / Group Leader “Plant Interactions Ecophysiology” Department of Biology University of Konstanz Universitätsstrasse 10 D - 78457 Konstanz Germany T: +49 7531 88 4322 E: email@example.com W: www.plantinteractionsecophysiology.com 1 Reich et al. (1997) Proc Natl Acad Sci USA 94, 13730-13734 2 Körner (2003) J Ecol 91, 4-17, Millard & Grelet (2010) Tree Physiol 30, 1083-1095 3 Inselsbacher et al. (2011) Soil Biol Biochem 43, 1321-1332 4 Downie et al. (2012) PLoS ONE 7, e44276
Dr Judy Simon, PhD
Dr Judy Simon, PhD studied biogeography at Saarland University (Germany) and obtained her PhD on tree ecophysiology at the University of Melbourne (Australia). She worked as a postdoc at the University of Freiburg (Germany). From there she went to the University of Konstanz where she is currently a group leader and DFG Heisenberg Fellow.
Tyre recycling that treads new ground The majority of rubber recycled from car tyres is currently used for relatively low-quality applications, rather than in the production of new tyres. A more effective method of recovering rubber from used tyres could help close the loop between recycling and production, as Ir. Hans van Hoek and Associate Professor Wilma Dierkes explain. Although a significant proportion of the rubber in used tyres is currently recycled, only a relatively minor amount is used again in the production of new tyres. This topic is the focus of much attention in global research, as countries seek to use raw materials more efficiently, not least because of environmental issues. Based at the University of Twente in the Netherlands, Dr. Wilma Dierkes and Ir. Hans van Hoek are working on the Closing the Loop project, an initiative aiming to help overcome the issues which currently limit the use of recycled rubber in car tyre production. “After
Closing the Loop
Closing the loop: re-use of devulcanized rubber in new tires Dr. Wilma Dierkes University of Twente ET/MS3/ETE Drienerlolaan 5, 7522NB Enschede T: +31(0)53 489 47 21 E: firstname.lastname@example.org W: https://www.utwente.nl/en/et/ms3/researchchairs/ete/ W: http://www.windesheim.nl/onderzoek/onderzoeksthemas/technologie/kunststoftechnologie Dr. Geert Heideman, Associate Professor of polymer engineering at the university of applied sciences Windesheim, is one of the members of the supervising group of this project. Ir. Hans van Hoek is also a member of his research team. Dr. Wilma Dierkes, Associate Professor of Sustainable Elastomer Systems in the Faculty of Engineering Technologies of the University of Twente, is the principal investigator of this project. Ir. Hans van Hoek is a PhD candidate in the Faculty of Engineering Technology at the University of Twente. Over the last 15 years, he has worked as a teacher in mechanical and control engineering courses, while he also has experience in marine engineering on both motor and steam-turbine-driven ships. His day-to-day job is teaching at the university of applied sciences Windesheim, Zwolle, the Netherlands.
Compact laboratory setup for crosslink analysis.
a successful project using EPDM rubber, in this project the focus is on re-using rubber from tyres for road vehicles. We are looking at the de-vulcanization process and aim to make the de-vulcanizate more widely applicable,” says Ir. Hans van Hoek, the project’s main researcher.
The results of the project so far are very promising, with the pure de-vulcanizates showing a tensile strength of 8 megapascals and good characteristics in terms of strain-at-break and other important parameters. De-vulcanization The de-vulcanization process is much more sensitive than rubber reclamation, another method used to recover waste material. With reclamation the main polymer chains in rubber are broken, whereas in the de-vulcanization process they remain intact and only cross-links are disconnected. “During vulcanization you get mono-, di- and polysulfidic cross-links. The de-vulcanization process does not de-crosslink the monosulfidic cross-links,” explains Ir.
Surface of vulcanized blend of 50/50 wt% de-vulcanized rubber and virgin rubber.
van Hoek. Researchers now aim to optimize process conditions with a view to improving the quality of de-vulcanized rubber. “My first target was to increase the tensile strength of the revulcanized de-vulcanized rubber,” says Ir. van Hoek. “The minimum tensile strength we have obtained now is about 8 megapascals.” The de-vulcanized rubber will always be used in a blend with virgin compounds. “When you mix the de-vulcanized material with virgin rubber, a low tensile strength of the revulcanizate will have a significant influence on the tensile strength of the blend,” points out Ir. van Hoek. “That’s why our primary focus was on the tensile strength.” Researchers are now exploring issues around blending of virgin and de-vulcanized rubber, aiming to identify means to optimise the quality of the eventual mixture. A tyre is of course subjected to
significant stresses and strains on the road, as part of the natural wear and tear on a car. That is why the rubber in car tyres has to meet rigorous standards, which of course also apply to the blend of virgin rubber with de-vulcanizate. The results of the project so far are very promising, with the revulcanized pure de-vulcanizates showing good characteristics in terms of tensile strength, strain-at-break and other important parameters. An important further step is to consider whether this can be scaled up to industrial levels, to provide an efficient means of producing good quality recycled material. “Companies are interested in using this on a larger scale,” stresses Dr. Wilma Dierkes, the project’s Principal Investigator. This work is driven to a large degree by environmental concerns. An efficient, reliable method of compounding de-vulcanized rubber with virgin materials could have a significant impact in this respect, yet meeting the quality requirements of the final material remains the focus of Ir. van Hoek’s research. “We are not yet finished with the whole process and there is still more work to do,” he continues.
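Why the de-vulcanizate’s own tensile strength matters so much in a blend can be illustrated with a simple linear rule-of-mixtures estimate. The linear model is a deliberate simplification (real rubber blends deviate from it), and the 20 MPa virgin-compound strength is an assumed figure; only the 8 MPa de-vulcanizate value comes from the article:

```python
def blend_tensile_strength(fraction_devulc: float,
                           strength_devulc_mpa: float,
                           strength_virgin_mpa: float) -> float:
    """Linear rule-of-mixtures estimate of a blend's tensile strength (MPa).

    fraction_devulc is the weight fraction of de-vulcanized rubber (0..1);
    the rest of the blend is virgin compound.
    """
    return (fraction_devulc * strength_devulc_mpa
            + (1.0 - fraction_devulc) * strength_virgin_mpa)

# Hypothetical 50/50 blend: 8 MPa de-vulcanizate with a 20 MPa virgin compound
print(blend_tensile_strength(0.5, 8.0, 20.0))  # -> 14.0 MPa
```

Even this optimistic linear estimate shows the blend dropping well below the virgin compound, which is why raising the de-vulcanizate’s tensile strength was the first target.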
Climate change will have a significant impact on the physical conditions of both marine and inland waters, as well as fish and shellfish distribution and productivity. We spoke to Professor Myron Peck of the CERES project about their work in investigating the physical changes that will occur as a result of climate change, and how CERES can help fisheries and aquaculture companies adapt and thrive. Mussel farm, west coast of Ireland. Photo: Thomas Doyle
Modelling the future of the seas The fisheries and aquaculture sectors are both key contributors to the European Blue economy, yet companies will need to adapt in the future as the impact of climate change on fish and shellfish becomes more apparent. This is a topic that lies at the core of the CERES project, an EC-funded project that brings together more than 200 participants from 26 different organisations, including both academic and commercial partners. “Most of our work so far has been around trying to understand the physical changes that are going to occur as a result of climate change, and to project those at a scale that’s relevant for the fisheries and aquaculture industries,” outlines Professor Myron Peck, the project’s Scientific Coordinator. In CERES, researchers are projecting future changes in European waters, using complex global climate models, which are then ‘downscaled’ to the regional level. “These models provide detailed information throughout the Mediterranean and the North-East Atlantic, as well as the Baltic, Barents and Norwegian Seas,” continues Professor Peck. “We are also working with freshwater models, looking at things such as river flow and changes in water temperatures.” These changes are likely to have a significant impact on the distribution and/or productivity of specific fish species in the future, which is a major concern for the fisheries and aquaculture industries. Temperature in particular is a major factor that determines where fish will be located. “Within a tolerable range, temperature is a major determinant of where fish will be – and how fast well-fed fish will grow. If waters become too cold or too hot for too long then some local fish and shellfish resources may be lost,” explains Professor Peck. The effect of water temperature on growth rates is an important consideration for aquaculture companies in particular.
“In a warmer future, an aquaculture company farming a fish at the northern (cold) limit of this fish’s geographical range may profit, while a company growing the
same fish at the southern (warm) limit of its range may experience losses. The same is true for the fishing industry,” says Professor Peck. “The effects of climate change on the same fish species are expected to vary, depending on where it’s fished or where it’s grown.” The wider political and economic environment is another important consideration in terms of the future of the fisheries and aquaculture industries. The way that total catches are allocated to different nations may be subject to change
following Britain’s exit from the EU for example, while there is still international disagreement on how to address climate change. “A lot of attention in CERES is being paid to the political changes that are occurring, so we’ve also developed a set of scenarios that accompany the physical changes. That’s an important part of the project because it sets the scene for the biological modelling and the biological work,” explains Professor Peck. These scenarios contextualise climate change, giving companies in aquaculture and fisheries an insight
into how it will affect them, on a scale that brings home its importance to their operations. “A good example would be wind energy. Under a very green (less severe climate change) scenario of the future, it could be that we’ll see a proliferation of wind farms in the North Sea, leading to a further spatial squeeze on fisheries,” says Professor Peck. The likely extent of future climate change is still a matter of debate, so Professor Peck and his colleagues in the project employ different physical climate change scenarios, under which they will examine effects on fisheries and aquaculture. The project addresses about 90 percent of the high-value European aquaculture targets, including shellfish such as mussels and clams, along with more than half of the high-value targets for fisheries. “This includes Mediterranean species such as bluefin tuna, as well as species such as hake and small pelagics such as anchovies and sardines. In northern Europe, colder-water demersal / bottom-dwelling fish such as cod and haddock are examined,” says Professor Peck. These scenarios draw on the work of the Intergovernmental Panel on Climate Change (IPCC), which has defined representative concentration pathways (RCPs) describing different trajectories of future carbon emissions and, hence, global warming. “The IPCC has defined four of these scenarios for future carbon emissions,” he explains, each defining a specific emissions trajectory and the resulting radiative forcing – essentially the extra energy retained by the Earth system relative to pre-industrial conditions, measured in watts per square metre (W/m²). These scenarios range from the worst case of 8.5 W/m² down to 2.6 W/m², which would actually represent a reduction in emissions over
the next forty years. However, it currently looks like emissions levels are more in line with the 8.5 scenario, so researchers in the CERES project are aiming to assess the likely impact of this on fish and shellfish resources. “We have data on physical carbon emissions, and they are used in models to project changes in sea surface temperature, sea bottom temperature, water currents, salinity, oxygen, and of course pH – the kinds of things that are going to affect fish and shellfish,” outlines Professor Peck. There is still uncertainty over how changes in these factors, in particular pH or ‘ocean acidification’, affect specific fish populations, so in the early stages of the project researchers conducted a literature review to assess the current state of knowledge. “We looked at more than 500 studies conducted on aquaculture and fisheries targets. When you start looking at studies on specific species, or even more general studies, you can see that temperature has been studied quite intensively,” says Professor Peck. There have been far fewer studies, however, on the interaction between temperature and pH, and their combined impact on fish populations. CERES researchers have conducted some experiments and investigated other areas to fill remaining gaps in knowledge. “Our project has done some work, for example, on shellfish, looking at the effect of temperature on growth in shellfish well fed on phytoplankton, compared to those that are poorly fed. We’ve also looked at the effects of temperature and salinity on the survival and growth of tuna larvae – we’ve conducted some very strategic experiments to gain some of the answers we need to make specific models work better,” outlines Professor Peck. Along with this research into the direct impact of climate change, the CERES project is also looking into possible indirect effects, such as harmful algal blooms, disease, or jellyfish blooms. “Disease is relatively easy to model, because diseases respond to temperature. 
There may be reductions in some
Land-based trout farm in Turkey. Photo: Ferit Rad
diseases due to warmer winter sea temperatures in the Mediterranean for example,” says Professor Peck. “Algal blooms are very difficult to model, they are linked to wind patterns in certain areas and specific oceanographic features.” It is also very difficult to predict where a jellyfish bloom may occur, an event which can cause serious damage to fish farms. Large amounts of data have been gathered recently on where and when outbreaks occur, from which Professor Peck and his colleagues can investigate the physical factors behind these events and whether they can be accurately forecast. “We are looking at whether we can develop early-warning tools to help farmers, so that they can take preventative measures. One option would be to put up a bubble curtain, to keep the jellyfish out,” he outlines. This research holds important implications for the future of the fisheries and aquaculture industries, so Professor Peck is keen to encourage companies to participate in the project’s work right from the early stages. “The idea is to go to industry with these tools and conduct participatory science. We don’t want to just talk to fisheries and aquaculture companies at the end of the programme, but rather to involve them at the beginning, to tell them about our plans, what the models can do and what kinds of estimates they can provide, and hear what is most interesting and useful to them,” he stresses.
Most of our work so far has been around trying to understand the physical changes that are going to occur as a result of climate change, and to project those at a scale that’s relevant for the fisheries and aquaculture industries.
New circumstances This is central to the project’s overall agenda, with Professor Peck and his colleagues in CERES aiming to provide effective advice to fisheries and aquaculture companies. With a deeper understanding of future changes, companies will be well placed to adapt to new circumstances. “We’d like to provide advice to fisheries companies targeting specific species, to say whether a particular species will continue to thrive in a certain area, or if they might want to consider changing their focus as fish populations shift,” says Professor Peck. A good example would be sea bass; while they were not historically found in large numbers in the North Sea, the population has increased ten-fold in around ten years. “These types of changes in distribution are largely due to temperatures becoming more tolerable, and it becoming warm enough in northern areas for traditionally southern species to survive. So you can have sea bass farms where you couldn’t before,” outlines Professor Peck. “Cod is at the southern end of its distribution in the Irish Sea and the North Sea, and it will likely continue to become less productive in these areas in the future.” The debate around climate change is often framed around longer-term challenges, yet aquaculture and fisheries companies by nature tend to prioritise more immediate issues as they seek to boost profits, while a degree of scepticism remains about climate change research. By working closely with industry, Professor Peck aims to heighten awareness among companies of how climate change may affect their operations and reinforce the continued importance of research in this area. “As scientists, I believe we need to work with industry. When you can link with industry and show how you can provide effective advice, I think it strengthens the credibility of science,” he says.
Fisheries trawl in Greek waters. Photo: Dimitrios Damalas
Mussel longline farming in the Limfjorden, Denmark. Photo: Camille Saurel
CERES The CERES project brings together researchers from 15 countries to address a wide
range of questions around the future impact of climate change on the European fisheries and aquaculture industries. We spoke to leaders of two workpackages within the project to get a deeper picture.
EU Research: What is your role within the CERES project?
Dr John Pinnegar: My main involvement is in leading a task for workpackage one. In one
part of that, my colleagues and I are producing future projections of temperature and climate around European seas but also trying to map out how politics and social attitudes might change. We’re also trying to model how fish distribution will change over the next 100 years, using a whole suite of species distribution models and building on historical data. We know where the fish are now, and how that relates to temperature and salinity and other factors – we can forecast how temperature and salinity are going to change over the next 100 years, and from that predict where fish will move to.
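The logic Dr Pinnegar describes can be sketched in miniature: a species distribution model relates observed occurrence to conditions such as temperature and salinity, then applies projected future conditions. The suitability function, coefficients, sites and temperatures below are invented purely for illustration; they are not CERES models or outputs.

```python
import math

# Toy species distribution model for a hypothetical cold-water species:
# habitat suitability (0-1) peaks near a preferred temperature and salinity.
def suitability(temp_c, salinity_psu, t_opt=10.0, s_opt=34.0):
    """Return a 0-1 suitability score; the quadratic penalties are
    illustrative, not fitted to any survey data."""
    z = -((temp_c - t_opt) ** 2) / 8.0 - ((salinity_psu - s_opt) ** 2) / 4.0
    return math.exp(z)

# Hypothetical present-day conditions at a northern and a southern site,
# compared with a +2 C warming projection.
sites = {"northern site": (8.0, 34.5), "southern site": (11.0, 34.8)}
for name, (temp, sal) in sites.items():
    now = suitability(temp, sal)
    future = suitability(temp + 2.0, sal)
    print(f"{name}: suitability {now:.2f} -> {future:.2f} under +2 C")
```

Under warming, the southern site's suitability falls while the northern site's rises, reproducing in caricature the poleward shifts the project forecasts.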
Dr Ignacio A. Catalán Alemany: I’m the leader of workpackage 2, which deals with the effects of climate change on fisheries. We have three main tasks in this workpackage. The first task relates to collecting information on the direct effects of climate-change-related factors. In the second task, we are analysing historical changes in species distribution, or species productivity. So we are trying to work out if the changes that have been detected can be attributed to any particular factor that is related to climate change. Then in the third task, we try to integrate all this information into models that project future changes, under a series of scenarios.
EUR: Is the project also looking at indirect effects of temperature change on fish populations? Dr John Pinnegar: Yes, we’re looking at the indirect effects of temperature on aquaculture and fisheries, including things like jellyfish outbreaks and harmful algal blooms. Also, the projections are that as water warms up, its oxygen-carrying capacity will go down. With a lot of fish, their ability to grow and their metabolism is related to temperature and oxygen concentrations. So if the oxygen concentration drops, they’ve effectively got fewer resources available to put into growth and activity.
CERES Climate change and European aquatic RESources ceresproject.eu
CERES advances a cause-and-effect understanding of how climate change will influence Europe’s most important fish and shellfish resources and the economic activities depending on them. It will provide tools and develop adaptive strategies allowing fisheries and aquaculture sectors and their governance to prepare for adverse changes or future benefits of climate change.
This project receives funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 678193 (CERES, Climate Change and European Aquatic Resources). EU funding: €5.6 million.
There are a total of 26 project partners from 15 different countries. For full details, please visit the website.
Professor Myron Peck Institute of Marine Ecosystem and Fishery Science (IMF) University of Hamburg Olbersweg 24, D-22767 Hamburg, Germany T: +49 40 42 838 6600 E: email@example.com
EUR: Are you also considering how the fishing industry will adapt to these changes in future? Dr John Pinnegar: In the project we’re doing vulnerability assessments, where we look at
Professor Myron Peck
European fishing fleets. We look at factors like the fish that they catch - whether they’re warm water or cold water species - where the ports are located, and try to rank the fishing ports and fleets of Europe in terms of their overall vulnerability.
EUR: This covers quite a wide range of fish species? Dr Ignacio A. Catalán Alemany: In CERES we work with the concept of storylines, of which we have 27 – 4 from inland waters, 13 from marine fisheries, and 10 from marine aquaculture. In each of these storylines we focus on particular species, areas and associated industries. We look at traits we think are important for the population that we are studying. In terms of reproduction, depending on the species, we look at questions like whether there is an increase or decrease in the number of eggs produced as a response to projected varying conditions in temperature, pH, or oxygen levels.
Myron Peck is professor of experimental biological oceanography at the University of Hamburg, Center for Earth Systems Research and Sustainability. His group’s research integrates ecophysiology of fish and zooplankton, field surveys of marine ecosystems as well as physical and biological modeling to help gain a cause-and-effect understanding of how climate change will influence ocean habitats.
Aquaculture farm in Portugal. Photo: Rui Gomes Ferreira
The right track to a greener tomorrow Most railway sleepers in Europe today are made out of concrete, yet with concern over sustainability rising, researchers are looking to develop alternatives. The Greenrail sleeper makes use of recycled materials and could also help turn the railways into a source of clean energy, as the company’s founder and CEO Giovanni De Lisi explains. The majority of
sleepers used on Europe’s railways are made out of concrete; now researchers are exploring an alternative solution that promises not only to address sustainability concerns, but also to turn railways into a source of clean energy. This work is built on Giovanni De Lisi’s experience of working in railway infrastructure, during which he saw the shortcomings of conventional sleepers first-hand. “I observed all the technical and environmental issues deriving from the use of concrete sleepers, such as high vibration and noise levels, ballast pulverization, and elevated maintenance costs,” he says. This planted the seed which eventually led Mr De Lisi to found Greenrail, a Milan-based company developing an innovative sleeper utilising recycled materials; this combines the attributes of a concrete sleeper with those of a composite one, resulting in significant technical and environmental benefits. “This allows us to reuse up to 35 tonnes of end-of-life tires and recycled plastic for each kilometre of a railway line,” he explains.
Greenrail Greenrail, innovative and sustainable railway sleepers: the greener solution for the railway sector Greenrail S.r.l. aims to bring to the European and global market a new kind of innovative and sustainable railway sleeper with high potential in the railway field, improving on the technical, economic and environmental characteristics of existing solutions and able to become a standard for global railway lines. Giovanni De Lisi Via Giovanni Durando c/o PoliHub, 39 20158 Milano (Italy) T: +39 02 91773151 E: firstname.lastname@example.org W: www.greenrailgroup.com Giovanni De Lisi, a project manager since 2003, was born and raised in the infrastructure sector and worked for 9 years in the railway field, supervising the maintenance and strengthening of lines. In 2011, his technical background allowed him to invent a new concept of an eco-sustainable railway sleeper. In 2012, he founded Greenrail S.r.l., which soon became an important player in the railway sector and an example of sustainable industrial development according to the principles of the circular economy.
Greenrail sleeper This work has already attracted a lot of interest, and the company has gained backing from the EU under the Horizon 2020 programme to further develop the sleeper, with a view towards wider commercialisation in future. The inner core of the Greenrail sleeper is produced using pre-stressed, reinforced concrete, and so retains the same mechanical characteristics as a conventional sleeper; the novel aspect lies in the outer shell. “This protects the inner core in concrete, and leads to numerous improvements in the sleeper’s performance, such as better electrical insulation, increased track stability, and less vibration and noise,” outlines Mr De Lisi. The outer shell of the Greenrail sleeper has a damping effect, which helps to transfer the train loads to the track ballast and subgrade. “Therefore, it not only protects the inner core
from weather agents and climate phenomena, but also reduces the vibration and noise levels deriving from railway traffic,” continues Mr De Lisi. “This is especially important for railway tracks in the vicinity of populated areas.” The Greenrail sleeper is designed to be extremely durable, with the outer shell produced using a unique mix of rubber and plastics, which provides effective, reliable protection against the elements. It has been designed to endure severe climate conditions, from -40°C right through to +80°C, while the major issues that can affect a sleeper have also been taken into account. “The design of the sleeper prevents the concrete from being penetrated by sand or water, ensuring protection from the freezing/thawing phenomenon,” outlines Mr De Lisi.
This protects the inner core in concrete, and leads to numerous improvements in the sleeper’s performance, such as better electrical insulation, increased track stability, and less vibration and noise.
Railways are operated in a wide variety of environmental conditions, from extreme cold to searing heat; Mr De Lisi says the Greenrail sleeper can be adapted to local conditions. “Our sleepers can be designed and produced according to the different technical specifications of each country and, hence, may be applied on all kinds of railway lines. We can also modify our outer shell’s composition, in order to ensure the best performance,” he says. This also helps to reduce maintenance costs, which is an important consideration for many railway companies. In Italy for example, the cost of maintaining a high-speed railway line comes to around 30,000 – 50,000 euros per kilometre a year; the Greenrail sleeper could have a significant impact in these terms. “Thanks to the reduced ballast pulverization, a railway line constructed with Greenrail sleepers requires less maintenance, which reduces costs by at least 30 percent,” says Mr De Lisi. These are important attributes in terms of the future of the railways, yet Mr De Lisi is keen to stress that Greenrail’s long-term vision goes beyond producing sleepers towards something more transformative. “Thanks to the possibility to incorporate smart systems in the outer shell of the sleeper, such as PV panels for energy harvesting and systems for predictive maintenance, railway infrastructures will be turned from passive to active ones. In such a scenario, Greenrail will become a smart railway services provider, able to foresee and plan maintenance activities and ensure smooth railway traffic,” he says.
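The figures quoted in this article can be turned into a rough, back-of-the-envelope savings estimate. The sketch below simply applies those numbers (30,000–50,000 euros of maintenance per kilometre per year, a cost reduction of at least 30 percent, and 35 tonnes of recycled material per kilometre) to a hypothetical line length; it is arithmetic on the article's own claims, not Greenrail's business model.

```python
# Annual maintenance saving per km, using the cost range and the
# "at least 30 percent" reduction quoted in the article.
def annual_saving_per_km(maintenance_eur_per_km, reduction=0.30):
    return maintenance_eur_per_km * reduction

low, high = 30_000, 50_000
print(f"Saving per km per year: {annual_saving_per_km(low):,.0f} - "
      f"{annual_saving_per_km(high):,.0f} euros")

# Recycled material reused on a hypothetical 100 km line
# (35 tonnes of end-of-life tires and plastic per km, per the article).
line_km = 100
print(f"Recycled material reused: {35 * line_km} tonnes over {line_km} km")
```

On those assumptions, a 100 km high-speed line would save roughly 0.9–1.5 million euros a year in maintenance while reusing 3,500 tonnes of waste.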
Architecture as a Social Construction Architects may have a particular vision for how a building will fit into the surrounding landscape and what purpose it will serve, but it’s the people who use buildings on a day-to-day basis who help to ascribe meaning to them. Alongside artistic and cultural concerns, sociological considerations should be taken into account in the design of buildings, argues Dr Silke Steets. The Social Construction of Reality was very well received by the scientific community when it was published in 1966, providing a new perspective on human thought, and it still ranks among the most important sociology books today. Written by the sociologists Peter L. Berger and Thomas Luckmann, the book brought together different strands of sociology that were previously only loosely connected; now Dr Silke Steets aims to extend their ideas to the material realm, in particular the architectural field. “The built environment is a core aspect of our daily lives, so my idea was that we need to think about buildings from a sociological point of view, and Berger and Luckmann’s sociology of knowledge is most helpful here,” she outlines. The traditional approach to architecture emphasises artistic and cultural concerns, but Dr Steets wants to help create a wider perspective. “I want to create more sensitivity around the views of the ordinary users of buildings, and how ordinary people perceive buildings,” she explains.
Sociology of knowledge This work draws heavily on the ideas of Berger and Luckmann outlined in The Social Construction of Reality. The two authors developed a concept of the sociology of knowledge which includes not only philosophical ideas, but what passes for knowledge in people’s daily lives. “This everyday perspective is what I gain from Berger-Luckmann,” says Dr Steets. Based at the University of Leipzig in Germany, Dr Steets is now applying
these ideas to architecture, with an awareness that a building is at the same time both a material and a symbolic construction. “We talk about significant buildings in public space and engage in discourse about them. This would be one example of socially ascribing meaning to a building,” continues Dr Steets. “Another level would be looking at the everyday way in which people use buildings. Therefore, we need a multi-faceted approach to understanding how buildings become meaningful to us.”
I want to create more sensitivity around the views of the ordinary users of buildings, and how ordinary people perceive buildings.
The historical context of architecture is also important in these terms. A building may have been designed and constructed for a specific purpose, but this may well change over time, and while many cities want to preserve their architectural heritage, some buildings may be associated with individuals, ideas, or periods of history that people would rather forget. “Leipzig for example is very keen to preserve its medieval and musical traditions. However, it’s rejecting the heritage of the GDR (German Democratic Republic, 1949-90) era, such as modernist architecture from the ‘60s and ‘70s,” says Dr Steets. Architects are trained to create spaces, which involves formal as well as social aspects. “Whereas a client or a user sees a situation as it is, the architect sees it as it could be,” stresses Dr Steets. “A good architect analyses the spatial and social situation, and then makes a suggestion on how it could be improved.” This starts from detailed analysis of the overall situation, and Dr Steets believes sociological considerations should be taken into account. Alongside contributing to the material turn in sociological theory, Dr Steets also aims to encourage architects to consider sociological understandings of the built environment. “I’m arguing for a more interdisciplinary perspective in architecture,” she says. This is already happening to a degree, with some younger architects showing interest in sociological ideas and collaborating in research, something which Dr Steets aims to encourage further. “I want to get in conversation with architects and planners, and to create a consciousness or sensibility about how users perceive buildings,” she says.
Berger-Luckmann Berger/Luckmann Revisited: The Sociology of Knowledge Between Disciplinary History and Empirical Application PD Dr. Silke Steets, Leipzig University Nikolaistrasse 8–10, Room 4.16 Internal Mailbox: 16 31 99 04109 Leipzig, Germany T: +49 (0) 341 / 97-37772 E: silke.steets(at)uni-leipzig.de W: http://www.silke-steets.de/en/
Leipzig University, Campus Augustusplatz 2017 Photographer: Swen Reichhold / Universität Leipzig
Silke Steets is a sociologist and Heisenberg Fellow at the Institute for the Study of Culture at Leipzig University. Her theoretical background is in the sociology of knowledge and her research interests include the relationship between space, popular culture, religion, contemporary art, materiality and the city.
Constructing the future The continued growth of urban centres requires significant investment in infrastructure, yet construction traffic causes high levels of pollution and disrupts daily life in cities. We spoke to Francesco Ferrero and Cindy Guerlain about the SUCCESS project’s work in investigating an alternative approach to managing the construction supply chain. The continued growth of European cities requires significant investment in both the development and refurbishment of buildings and infrastructure, and many people have grown used to seeing construction traffic weave its way through urban areas. This can be highly disruptive to daily life, as large amounts of materials are required at construction sites, a topic at the heart of the H2020 SUCCESS project, which brings together 11 partners from the public and private sectors. “Construction-related transport is a big component of urban transport,” stresses Francesco Ferrero, the coordinator of the project. The project is investigating an alternative approach to managing the supply chain where, instead of transporting goods and materials directly to a site, they are taken first to a construction consolidation centre (CCC). “The idea of a consolidation centre is very much related to the possibility of aggregating different loads dropped off by carriers and making deliveries to the construction site using cleaner vehicles with a higher load factor, which means that you basically unload what you are transporting into a consolidation centre,” explains Ferrero. Construction supply chain This work is built on data collected from four separate construction projects, or pilots, from which researchers aim to understand how major projects work and identify where efficiency can be improved. A precise methodology has been applied to gather data from these sites, in Luxembourg City, Paris, Valencia and Verona, which have very different circumstances and requirements. “Every project is different. 
You may have areas
The pilot site in Luxembourg City.
that are completely different even within the same city,” acknowledges Ferrero. There is no one-size-fits-all solution to managing the urban supply chain, and Ferrero says it’s always important to take the local circumstances into account. “We need to test solutions that can work in specific cases. One of our priorities was to do a lot of benchmarking, so that cities could identify what would be the best approach for them, based on similar projects in similar cities,” he outlines. Researchers have gained some important insights from the data that has been gathered at the four pilot sites. One major point is that currently many of the trucks transporting materials to construction sites are not fully loaded. “The average load factor that we have observed in the sector is around 50 percent, which is very inefficient,” points out Ferrero. A CCC will allow managers to combine different loads where possible and so use trucks more efficiently, while Ferrero says these centres can also lead to further benefits. “When a CCC is in an accessible location on the outskirts of a city, the price of land is lower and traffic disturbance is reduced,” he outlines. “It can basically stay open 24-7, so carriers don’t have just narrow windows of opportunity for making deliveries. You can use it to unload your goods and materials, and to combine them in an optimal way. We’ve put a lot of effort into analysing the conditions and understanding how consolidation centres could work.”
We need to find the right balance. While we want materials to be transported quickly and efficiently, it’s also important to consider the environmental impact.
This could represent a more effective approach to managing the construction supply chain, reducing traffic in urban centres at busy times and ensuring resources are used more efficiently. Many businesses are typically involved at a construction site, including not only building suppliers, but also actors like electricity and plumbing companies, and they must temporarily combine efforts according to an agreed plan, so managing a construction project is a complex task. “These companies have their own suppliers. They may only use one supplier, they might have four,” says Ferrero. These companies need access to essential materials in order to work effectively and meet their deadlines, so even small delays to deliveries can have serious effects. “Delays in delivery could lead to a need to comprehensively review the work plan for the day, as it might depend on the correct equipment being available. If the equipment or personnel aren’t available, then this creates disruption and inefficiencies that in the end result in higher costs,” says Ferrero. A CCC has much more storage space than a typical construction site, so it can be used as a sort of back-up for some materials, where they are stored in a safe place. While delays in delivery to the construction site cannot be ruled out, they are likely to be less serious than if the materials were being transported from further afield. “We want to create a sort of
The pilot site in Verona.
just-in-time provision for a construction site, so there’s a greater degree of flexibility,” explains Ferrero. The location of a consolidation centre is another topic the project is addressing. “We’re looking at where a consolidation centre should be located with respect to the project(s) that it is intended to serve, and to the projections for the future, for example the important projects that may be coming up over the next few years,” continues Ferrero. “We need to find the right balance. While we want materials to be transported quickly and efficiently, it’s also important to consider the environmental impact.”
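The siting trade-off Ferrero describes can be illustrated with a toy scoring model: a candidate plot is penalised for being far from the construction site, for high land cost, and for sitting too close to the congested city centre. The candidate plots, coordinates, costs and weights below are all invented for illustration; the project's own location tool is far more sophisticated.

```python
import math

def dist(a, b):
    """Straight-line distance between two points in km."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

construction_site = (0.0, 0.0)   # hypothetical km coordinates
city_centre = (0.0, -2.0)

candidates = {
    "inner-city plot":   {"pos": (0.5, -1.5), "land_cost": 9.0},
    "ring-road plot":    {"pos": (4.0, 1.0),  "land_cost": 4.0},
    "far suburban plot": {"pos": (12.0, 3.0), "land_cost": 1.0},
}

def score(c, w_dist=1.0, w_land=1.0, w_centre=2.0):
    """Lower is better: delivery distance, plus land cost, plus a
    penalty for sitting within 5 km of the congested centre."""
    penalty = max(0.0, 5.0 - dist(c["pos"], city_centre))
    return (w_dist * dist(c["pos"], construction_site)
            + w_land * c["land_cost"]
            + w_centre * penalty)

best = min(candidates, key=lambda k: score(candidates[k]))
for name, c in candidates.items():
    print(f"{name}: score {score(c):.1f}")
print("best location:", best)
```

With these invented numbers the ring-road plot wins, echoing Ferrero's point that an accessible location on the outskirts balances delivery distance against land cost and urban disturbance.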
Sustainable Urban Consolidation CentrES for conStruction Project Objectives
SUCCESS aimed to improve urban freight transport related to construction. It provided tools to better understand the impact of new policy measures (e.g. consolidation centres, size and class of admitted vehicles), to discuss these measures cooperatively with different stakeholders (local administrations, contractors, transport companies), and to identify the measures that can work best in each specific case, along with the costs and benefits associated with them.
SUCCESS is financed by the H2020 programme and is part of the CIVITAS initiative.
Environmental impact The CCC would ideally be close enough to the construction site that goods can be transported there reasonably quickly, but not so close that it causes too much pollution in the city. The cost of the land should not be too high either, as that is often a very big part of the business case for a consolidation centre. “If the construction companies are contributing to the cost of a consolidation centre, then it needs to be in a convenient location for them,” says Cindy Guerlain, a researcher also closely involved in the project’s work. These issues all need to be taken into account in considering the impact of a consolidation centre, which is a central element of the project’s research. “We have simulated the impact of these consolidation centres in the pilot locations, in fact two in the case of Paris,” says Ferrero. This would give the key stakeholders, including construction companies, transport companies, suppliers and local authorities, a firmer basis on which to take decisions. A toolkit has been developed within the project, including a cost/benefit analysis tool to assess the impact of a CCC, and also a tool to identify the ideal location for a centre. “People can look at the map of a city, see where the suppliers are located and look at the relative importance of the suppliers,” explains Ferrero. The tools also enable stakeholders to calculate the impact of a CCC on several key performance indicators (KPIs), such as on levels of particulate matter in the atmosphere,
Construction is still scarcely affected by technological innovation (Photograph by Thibault Desplats).
which Ferrero says is one of the key outputs from the project’s work. “This is very useful in terms of creating an environment where you can take these complex decisions, providing a basis to identify the major factors involved and consider conflicting interests,” he outlines. The project’s work is already having a practical impact, with several construction companies actively looking to implement some of the solutions that have been developed. Beyond the funding term of the project, Ferrero believes there remains significant scope for technical innovation in the construction sector, which could help to improve the sustainability of urban development. “We’re starting to see the use of 3-D printing and other innovative manufacturing technologies in some parts of the construction sector for example,” he says. With the construction sector under pressure to increase its productivity performance while reducing its environmental impact, this is set to remain an important area of research for Ferrero and his colleagues in future. “We will continue to look at new opportunities to apply some of these solutions to new construction projects, and to continue with our research,” he says.
The pilot site in Paris.
The project consortium includes 11 European partners from France, Italy, Luxembourg and Spain. They represent different and complementary types of institutions such as public administrations, construction companies, research centres and professional associations. By sharing missions and knowledge, they enable the SUCCESS project to fulfil its objectives. http://www.success-urbanlogistics.eu/partners-presentation/
Francesco FERRERO Lead Partnership Officer - Mobility, Logistics and Smart Cities Luxembourg Institute of Science and Technology (LIST) IT for Innovative Services (ITIS) Department 5, avenue des Hauts-Fourneaux L-4362 Esch/Alzette T: (+352) 275 888 - 2227 E: email@example.com W: www.success-urbanlogistics.eu LinkedIn: http://www.linkedin.com/in/ francescoferrero Twitter: @franzferrero Francesco Ferrero
Francesco FERRERO has been the Lead Partnership Officer for Mobility, Logistics and Smart Cities with the IT for Innovative Services Department of the Luxembourg Institute of Science and Technology since 2016. Previously, Francesco was the Head of the Smart City Strategic Program with the Istituto Superiore Mario Boella, Torino.
The pilot site in Valencia.
Better Understanding Human Behaviour and its Consequences in Markets and Games Professor Vincent Crawford explains how his project BESTDECISION is advancing our understanding of central questions about economic behaviour, the design of institutions, and the governance of relationships, by combining traditional economic and game-theoretic methods with psychological insights and experimental and empirical evidence. The key to BESTDECISION’s approach is using experimental and empirical evidence to increase the realism of the behavioural assumptions in economic models, while preserving the generality and power of traditional methods of analysis. In Crawford’s words, “The project uses traditional economic methods of analysis to address traditional economic questions, but with behavioural assumptions more firmly grounded in psychological insights and experimental and empirical evidence than has been customary – much more of a compromise with psychology on behavioural assumptions and realism, but not compromising at all on methods. Human behaviour is always going to be less than perfectly ‘rational’ and noisy, but that’s not the main issue – the main issue is ‘what is the central tendency of behaviour’ and what is the best way to model that?” With this goal in mind, the BESTDECISION project is divided into several lines of study. Each was chosen for its central importance to economics, with a judgment that improving the realism of assumptions would yield conclusions that are more useful in applications. Consumer theory and labour supply One line of study reconsiders one of the most frequently used models in all of economics, consumer theory, in which people balance prices and budgets to satisfy preferences for consumption that reflect trade-offs between different goods. 
Consumer theory’s most important application may be the theory of labour supply, in which workers balance their preference for more leisure against the benefits of earnings that can be spent on consumption goods. This line seeks to increase the realism of consumer theory’s predictions concerning labour supply by modifying its assumptions about preferences in a direction suggested by Daniel Kahneman and Amos Tversky’s ‘prospect theory’, one of the best-supported and widely applied models of decision-making from psychology, for which Kahneman (after Tversky’s death) shared the 2002 Economics Nobel Prize.
A famous 1997 paper, part of the work for which co-author Richard Thaler was awarded the 2017 Economics Nobel Prize, tested the standard theory of labour supply using cabdrivers — who choose their own hours, as the theory assumes — and found it seriously wanting: The theory predicts that drivers who have an unusually profitable morning, signalling a higher ‘wage’, will work longer that day. But drivers tend to quit earlier on such days, in this and several more recent datasets: the opposite of what the theory predicts.
This anomaly confronts economists with a stark choice: either drivers are irrational, or their preferences deviate from the traditional assumptions. Economists are reluctant to give up on rationality, which is the source of much of the theory’s power. But changing assumptions about preferences is also risky, because it is hard to know where to draw the line, and sufficiently flexible preferences can ‘explain’ anything — and will therefore explain nothing. Thaler and his co-authors informally suggested an explanation by changing the traditional assumptions about preferences to conform more closely to prospect theory, whose psychological grounding and experimental support in other settings limits the risk of making the model more flexible. In Crawford’s words, “The basic idea of prospect theory is that people don’t only care about levels of consumption, as assumed in traditional economics – they also react to changes in consumption relative to a ‘reference point’, with ‘loss aversion’: ‘losses’ below the reference point hurting them more than equal-sized ‘gains’ above it help them. As I tell students, suppose you have two people, both middle-class now, but last year one of them
was a poor student and the other a millionaire. They’re going to look at their current choices in very different ways, and prospect theory gives you a ready-made way to model that.” Thaler and his co-authors noted that drivers who care about changes in income and are loss-averse make choices that cluster around income targets (like prospect theory’s reference points) — daily targets, as it seems from the data. On good days, drivers hit their targets sooner and work less, while on bad days they work more. Such choices, which seem irrational under the traditional assumptions, can usefully be viewed as rational when drivers care about changes as
Empirical Application: Observed Reference points - Selten Index of Predictive Success.
well as levels of income and are loss-averse. As Crawford says, “Suppose you have a person who is reference-dependent and loss averse, as in prospect theory. That is going to make them make choices that look irrational under the assumption that people care only about their levels of consumption – just as you would look irrational if I was asking you to choose between apples and oranges and bananas and I assumed you didn’t like bananas, but you chose some anyway. I would conclude you were irrationally wasting your money because I made too narrow an assumption about your preferences. Just as people have concluded that cabdrivers were irrational because the traditional theory doesn’t allow them to respond to changes that they actually care about.”

More recent empirical work, including some by Crawford and team member Juanjuan Meng, has proposed and tested more formal models, confirming that a generalisation of labour supply theory based on prospect theory can give a coherent account of drivers’ choices, resolving the anomaly without losing the model’s power to make sharp testable predictions. But all such models to date have relied on very strong ‘structural’ assumptions about the shapes of the functions used to represent drivers’ preferences that nobody has carefully tested or really thought about. This left two questions open: First, whether the models’ power to explain drivers’ anomalous choices comes from the general notions of reference-dependence and loss aversion, or merely from the functional-form assumptions of the models that have been tested. Second, whether the generalised models explain the data sufficiently better to justify their extra flexibility.

The work in this strand, joint with team members Ian Crawford and Juanjuan Meng, seeks reference-dependent generalisations of classic results from ‘nonparametric’ (done without functional-form assumptions) consumer theory.
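The income-targeting story can be sketched in a few lines of code. This is an illustrative toy, not the project’s estimated model: the daily target, the wages and the loss-aversion coefficient are made-up numbers, and the stopping rule is the simplest pure-targeting case rather than a fitted structural model.

```python
# Toy sketch of reference-dependent labour supply (illustrative only;
# the target, wages and loss-aversion coefficient are hypothetical).
LAM = 2.25      # loss-aversion coefficient: losses weigh ~2.25x gains
TARGET = 200.0  # daily income target, i.e. the driver's reference point

def gain_loss_value(income: float) -> float:
    """Prospect-theory style value of daily income relative to the target:
    linear in gains, but losses below the target are scaled up by LAM."""
    diff = income - TARGET
    return diff if diff >= 0 else LAM * diff

def hours_worked(hourly_wage: float, max_hours: float = 12.0) -> float:
    """Pure income-targeting rule: drive until the target is hit
    (or the shift runs out)."""
    return min(max_hours, TARGET / hourly_wage)

# A profitable morning signals a high effective wage, so the target is
# reached sooner and the driver quits earlier -- the opposite of the
# standard theory's prediction that a higher wage means longer hours.
print(hours_worked(25.0))  # good day: 8.0 hours
print(hours_worked(20.0))  # ordinary day: 10.0 hours
```

Under the traditional levels-only model, the good day should lengthen the shift; under targeting it shortens it, which is exactly the pattern in the cabdriver data.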
The results obtained so far in this strand yield elegant and practical methods to answer both questions. Continuing work will make the methods more complete and tractable.
Bilateral bargaining in Singapore.
The design of bargaining institutions

A second line of study concerns another question at the core of economics, the design of optimal bargaining institutions. In Crawford’s words, “A broader goal is to build more realistic limits on human cognition into the theory and see how that changes game-theoretic microeconomics, which relies very heavily on complete rationality and perfect ability to predict other people’s behaviour, which is a bit more than most people can do.” The analysis of the design of optimal bargaining institutions is one of the pillars of game-theoretic microeconomics, hence a natural place to start.

In 1983, Roger Myerson and Mark Satterthwaite proposed a novel solution to this design question, part of the work for which Myerson shared the 2007 Nobel Prize. They considered settings with two traders, one who owns an indivisible object and would be willing to sell it for enough money, and another who would be willing to buy it if the price is right. As Crawford says, “Suppose I have an object and a value for it – a minimum amount I would be willing to take. You also have a value. But neither of us knows the other’s value, so we don’t know whose is higher, or whether it would be mutually beneficial to trade. The ideal outcome would be for us to trade if and only if your value is higher than mine, at a price that fairly shares the gains. Myerson and Satterthwaite started with the observation that none of the bargaining institutions they had
Empirical Application: Observed Reference points - Pass Rates.
heard of always achieves this ideal outcome. One familiar institution has me making you an all-or-nothing offer, in this case called an ‘ask’, and you saying ‘Yes’ or ‘No’. You (who cannot make a counteroffer in this institution) will say ‘Yes’ when and only when your value is higher than my ask. So far, so good; but I can do better by ‘shading’ my ask above my true value, so that we won’t trade when my true value is below your value (so that we should trade) but my ask is above your value. Other familiar rules, like the ‘double auction’, in which you submit a ‘bid’ and I simultaneously submit an ask, and we trade when your bid exceeds my ask, suffer from the same kind of shading problem.” Importantly, they didn’t stop there, instead stepping outside the box by asking whether there is any feasible institution that ensures that we trade when and only when we should, given the incentives that any such institution creates. Thus they went beyond considering institutions as given and immutable, instead thinking of them as something that can be chosen. Their characterisation of optimal institutions is an important precursor of the burgeoning modern field called ‘market design’ or ‘economic design’. To step outside the box they needed game theory, which first made it tractable to analyse behaviour in settings like this only a few years before they wrote. Game theory asks what determines my best decision when the rewards depend on your decision, and vice versa. The traditional theory assumes that we are rational and will make decisions that are in Nash equilibrium, in the sense that my decision is best for me given yours, and vice versa. “Put another way, the assumption is that we can perfectly predict each other’s responses to the game, so that our rational responses make the predictions come true. 
In the double auction my decision is actually a rule, or ‘strategy’, that tells me what to bid as a function of my value; and your strategy tells you what to ask as a function of your value”. Assuming that traders would respond to any chosen institution by playing their Nash equilibrium strategies, Myerson and Satterthwaite asked whether there is any institution that ensures that we trade when and only when we should, given the incentives for shading any such institution may create. They showed that the answer is ‘No’, but also that in some leading cases, familiar institutions like the double auction are ‘second-best’: not perfect, but as good or better than any feasible institution. In general, though, optimal bargaining institutions are complex, and sensitive to features of the setting that real institutions ignore. Crawford’s second line reconsiders Myerson and Satterthwaite’s conclusions, replacing Nash equilibrium by an alternative “level-k” model of
BESTDECISION Behavioural Economics and Strategic Decision Making: Theory, Empirics, and Experiments
The project studies microeconomic questions combining standard methods with assumptions that better reflect psychological evidence. One part studies nonparametric identification and estimation of reference-dependent versions of the standard microeconomic model of consumer demand or labour supply. Another reconsiders the optimal design of bargaining rules, replacing the standard Nash equilibrium assumption with a “level-k” model of strategic thinking that is better supported by evidence. Still other parts seek to improve our models of how cooperation is brought about and maintained in long-term relationships, with particular attention to the role of communication.
The BESTDECISION project is funded by the European Research Council.
Professor Vincent Paul Crawford Department of Economics All Souls College Oxford OX1 4AL United Kingdom T: +44 1865 279339 E: firstname.lastname@example.org W: https://cordis.europa.eu/project/rcn/185399_en.html
Professor Vincent Paul Crawford
Professor Vincent Crawford is the Drummond Professor of Political Economy, University of Oxford, and Research Professor, University of California, San Diego. He is a Fellow of the Econometric Society, the American Academy of Arts and Sciences, the British Academy, and Academia Europaea. He is known for his work on game-theoretic microeconomics, particularly on bargaining and coordination, strategic communication, and matching markets; and for his work on experimental and behavioural game theory.
strategic decision-making that has much more experimental support than Nash equilibrium in experiments that elicit subjects’ initial responses to games. When each trader knows only her/his own value, Nash equilibrium thinking in games like those created by bargaining institutions is incredibly complex. The rules experimental subjects use tend to be rational, but to rely on simplified models of each other’s responses that cut through much of the complexity. The goal is to see how much of Myerson and Satterthwaite’s analysis survives under more realistic models of behaviour. The analysis so far has shown that the essential parts survive, with some qualifications, but that the unpredictability of traders’ strategic thinking often forces the optimal institution to take the much simpler form of a posted-price mechanism, where an optimal price is set, and trade takes place if and only if my bid is above the price and your ask is below it. Such institutions promise to yield results that are less brittle, and are (perhaps unsurprisingly) more like real-world institutions than Myerson and Satterthwaite’s optimal institutions are.
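The trade-off between these institutions can be illustrated with a small simulation. This is a textbook illustration rather than the project’s own analysis: buyer and seller values are drawn uniformly on [0, 1]; the double auction is played with the well-known linear equilibrium strategies, under which trade occurs only when the buyer’s value exceeds the seller’s by at least 1/4; and the posted price is fixed at 0.5. The sample size and seed are arbitrary choices.

```python
import random

def simulate(n: int = 200_000, seed: int = 0):
    """Compare realised gains from trade under two institutions against
    the first-best (trade whenever the buyer values the object more)."""
    rng = random.Random(seed)
    first_best = double_auction = posted_price = 0.0
    for _ in range(n):
        v = rng.random()  # buyer's value, uniform on [0, 1]
        c = rng.random()  # seller's value, uniform on [0, 1]
        gain = v - c
        if gain > 0:
            first_best += gain        # ideal: trade whenever gains exist
        if v >= c + 0.25:
            double_auction += gain    # linear-equilibrium double auction:
                                      # 'shading' kills small-gain trades
        if v >= 0.5 >= c:
            posted_price += gain      # posted price p = 0.5: trade iff the
                                      # bid is above and the ask below it
    return double_auction / first_best, posted_price / first_best

da, pp = simulate()
print(f"double auction: {da:.0%} of first-best gains")  # theory: ~84%
print(f"posted price:   {pp:.0%} of first-best gains")  # theory: ~75%
```

Neither institution reaches the first-best, just as Myerson and Satterthwaite proved no institution can; the posted price gives up some efficiency in exchange for rules simple enough to be robust to how traders actually think.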
Embracing the dynamics of argument

A broader goal of BESTDECISION is attempting to create better working models of how we use communication to make long-term relationships work. As Crawford says, “Suppose we both think we know how our relationship is supposed to work – how, for instance, we are supposed to deal with an unexpected, unprecedented event that requires some response. In the traditional game-theoretic model of cooperation in long-term relationships, we have an ‘implicit contract’ that covers everything that might happen. If one of us does something that violates the ‘contract’, we end the relationship, and the implicit threat to do so creates a powerful incentive for us to both follow the ‘contract’, so that we continue to reap the benefits of cooperation. In real relationships, however, that’s not how it works. And more importantly, it works in different ways depending on how much we can communicate, and how rich a language we have available to communicate in. Why else prohibit ‘conspiracies in restraint of trade’? Yet in the traditional theory, communication doesn’t matter.” Put another way, the traditional model of cooperation is the same for chimps and humans, which cannot be right. In a chimp relationship (at least a non-hierarchical one), if one chimp violates the ‘contract’, the other might mirror the first’s actions to show her/him how it feels, or inflict physical pain, or end the relationship. By contrast, humans who share a rich language are more likely to communicate their displeasure in a way that gives them a chance to repair the relationship, perhaps like this: “I see that you just did X. I believe you are trying to adhere to our agreement, but that’s not what I thought we had agreed upon. Please try to explain your thinking.” The ensuing dialogue is likely not only to reaffirm both parties’ desire to continue the relationship, but also how to adjust the contract and/or atone for the breach. As Crawford says, “The trouble is that the standard theory has no way of saying that communication makes a difference but in real life it makes all the difference. In life, if I can’t talk to you, I have a very limited set of tools for repairing a broken relationship.” Further work on this line will explore ways to plug the possibility of communication into game-theoretic models of long-term relationships and show why and how it matters. This is sure to be connected with relaxing the Nash equilibrium assumption, which is what in traditional theory makes it unnecessary to communicate. In Crawford’s words, “My hope is that if you relax the equilibrium assumption you automatically create a richer and more realistic role for communication, particularly natural language communication.”

Experimental studies of history-dependent learning in financial crises, and of strategic thinking
Two final lines of study, which space does not permit discussing here, consider experiments to study other aspects of strategic decision-making. One, with team member Miguel Costa-Gomes, considers history-dependent learning in games like those that arise in financial crises, where individual traders try to outguess the ‘bubble’. The ultimate goal is to quantify how the structure of financial markets influences the likelihood of crisis. Another, also with Costa-Gomes, focuses on strategic thinking in static situations, using an ‘eye-tracking’ design that observes individuals’ searches for freely available but hidden payoff information. Crawford explains, “The goal here is to study people’s thought processes in more detail than is possible by observing only people’s decisions and ‘black-boxing’ cognition. We have been designing experiments that look at people’s eye movements along with their decisions and use an algorithmic view of decision making to interpret the eye movements – and infer the thinking behind the decisions.”

In conclusion, the several interwoven lines of research in BESTDECISION seek to use more realistic behavioural assumptions, in combination with the already powerful analytical, mathematical, and statistical methods of mainstream microeconomics, to shed new light on central economic questions. The goal is both to enhance the power and usefulness of traditional microeconomics in applications, and to ‘civilise’ behavioural economics and behavioural game theory.
The development of digital technologies and social media is leading to rapid changes in traditional power dynamics, allowing more of us to share our opinions, publicise our ideas and contribute to public debate - with all its ambiguities. We spoke to PD Dr Gotlind Ulshöfer about her work in developing an ethics of power for the digital age.
Many different conceptions
of power have been developed over the course of history, with thinkers throughout the ages exploring the underlying ethical foundations of power. With the development of new digital technologies widening access to information and dramatically changing conventional power relationships, Dr Gotlind Ulshöfer is looking again at an ethics of power. “There are essentially two parts to this project. There’s a theoretical part, where I discuss the meaning of power and its ethical dimensions from a theological point of view. Power there is seen
more in the sense of empowerment, or in service to others,” she says. The second part of the project centres around what Dr Ulshöfer describes as fields of reference, for example power dynamics in the sharing economy, or in the use of data, or in social media. “I am looking for example at issues around online influence and the use of data,” she says.

Power dynamics

This latter point is very much a modern concern, yet Dr Ulshöfer believes that traditional ethical frameworks and theological concepts are still relevant in examining contemporary power dynamics. With a background in theology and economics, Dr Ulshöfer draws on established ideas on ethics and justice to develop a new understanding of power. “It’s important to look at this from an ethical point of view,” she says. Part of the project centres on looking at different definitions of power. “There is a classical definition of power, for example described in the work of the sociologist Max Weber. But if you look at the work of Michel Foucault, or Hannah Arendt, they also examine other dimensions of power, including the communal side,” continues Dr Ulshöfer. “Part of the project is about looking at which meaning of power is relevant in which context.”

The pace of technological development is an important consideration in this respect. While the traditional media still play a central role in shaping debate and creating the public sphere, the growth of social media means
more and more people today have a platform to share their opinions, which has positive as well as negative effects on the creation and style of debate in the public sphere, for example with hate speech. “Nowadays there are more outlets than ever before,” acknowledges Dr Ulshöfer. With technological change continuing apace, Dr Ulshöfer believes it’s important to consider the ethical implications and impact of new technologies on power dynamics, a topic that is central to her research. The different power players, like big internet companies and quasi-monopolists, are also important considerations in her research, as are individuals like influencers and the phenomenon of ‘the crowd’. “First of all, I aim to identify the important ethical questions around these technological developments and look at their influence on society,” she outlines. “From there, we can then start to intensify considerations of the normative questions,
and look at issues from more of a theological perspective.” This could then provide a basis for Dr Ulshöfer to analyse ethical questions around changes in power relations and contribute to the literature. In particular, Dr Ulshöfer aims to contribute to literature of theological ethics and also to the public discourse concerning questions around power. “I aim to develop perspectives on power and how to deal with it from an ethical point of view,” she outlines.
Ethics of Power

The project is funded by the Deutsche Forschungsgemeinschaft and implemented at the Faculty of Protestant Theology at Tübingen University, Germany. Project Coordinator, Privatdozentin Dr Gotlind Ulshöfer Eberhard Karls Universität Evangelisch-theologische Fakultät Liebermeisterstraße 12 D-72076 Tübingen T: +49 7071 297 2591 E: email@example.com W: http://www.uni-tuebingen.de/de/48956
PD Dr Gotlind Ulshöfer is a researcher with a Heisenberg grant from the Deutsche Forschungsgemeinschaft at Tübingen University, Germany. She holds diplomas in theology and economics, and completed her dissertation and habilitation in Protestant theology. She has studied at the universities of Tübingen, Heidelberg, Jerusalem and Princeton Theological Seminary.
Evangelisch - Theologische Fakultät
Surveying the Galaxy The Gaia satellite was launched in December 2013, and over its five-year mission it will measure the positions, motions and distance indicators of more than a billion stars. The ESSOG project aims to exploit this huge body of information, helping to build a more detailed picture of our galaxy, as Professor James Binney explains. Astrometry, the measurement
of the movement of stars across the sky, lies at the heart of astronomy. Upon this foundation astrophysicists build our physical picture of the universe. The deployment of the European Space Agency’s (ESA) Hipparcos satellite in the ‘90s revolutionised astrometry by, for the first time, making astrometric measurements in space and by pioneering a novel technology. “Doing it from space is a game-changer, as the Earth’s atmosphere does two very bad things from the perspective of measuring stars. First, it dithers the image of a star, which makes it difficult to identify the centre of the image. Then light is also refracted by the atmosphere, so it actually moves the apparent position of a star,” explains James Binney, Professor of Physics at the University of Oxford. Moreover, from space it’s possible to look in two quite different directions at once, and doing this resolves the classical difficulty in measuring `parallax’, the tiny variation over the seasons in the direction to a star as the Earth moves around the Sun. “When you look in one direction with a telescope, the parallactic motions of all stars occur in phase. So the angles between stars, which is what you can measure, do not change very much – the stars kind of dance in formation,” outlines Professor Binney. The Hipparcos satellite solved this problem by looking simultaneously in two directions separated by more than 90 degrees, and imaging these two star fields on a single detector. The parallactic motions of the stars in one field were out of phase with those in
the other field, so the distances between their images on the detector changed significantly, and the annual variation of these distances could be measured with precision. Hipparcos was a great success, so soon after its mission was complete, ESA funded a second astrometric satellite called Gaia, which will gather huge volumes of data. “Gaia started its five-year programme of activity in the summer of 2014, and it is measuring the motion of over a billion stars with a precision that is in the order of a hundred times greater than what Hipparcos achieved,” says Professor Binney. Moreover, Gaia works in a different way to
Hipparcos. “Hipparcos was sent into orbit with a list of roughly 100,000 stars. It was instructed to measure the parallaxes and motions of these stars,” says Professor Binney. “Gaia was not sent with a list, it simply monitors everything in the sky brighter than a faint threshold. It sweeps its two telescopes systematically over the sky, in a complicated pattern, and finds what stars are there. It then measures their positions repeatedly – almost 70 times over the 5-year period – and from those positions it figures out how they are moving.”

ESSOG project

April 2018 saw the release of the first significant set of data from the Gaia mission. The release contains the parallaxes and sky motions of more than a billion stars, and Doppler shifts for several million stars. As Principal Investigator of the ESSOG project, Professor Binney aims to extract scientific insights by combining this trove of data from Gaia with results from massive spectroscopic surveys using large ground-based telescopes. “The goal in ESSOG was to develop the conceptual tools required to exploit this enormous body of information on the kinematics and chemical compositions
of these stars,” he explains. “We do that by fitting the data into a dynamical model of the galaxy. Such a model specifies the distribution of the mysterious dark matter that holds together galaxies and clusters of galaxies. We have shown that just over half the force that holds the Sun in its orbit around the Galaxy comes from dark matter rather than stars or interstellar gas. The model also specifies how different types of star are distributed in `phase space’ – where the coordinates are position and velocity. Some stars in the galaxy have a
Gaia’s all-sky view of our Milky Way Galaxy and neighbouring galaxies, based on measurements of nearly 1.7 billion stars. The map shows the total brightness and colour of stars observed by the ESA satellite in each portion of the sky between July 2014 and May 2016. Copyright: ESA/Gaia/DPAC.
ESSOG has developed a new type of perturbation theory that yields accurate fits to complex orbits. Left/top: an orbit in real space; right/bottom: a cross-section of the orbit in phase space.
chemical composition similar to the Sun, while others have a different composition, with less iron or more elements like oxygen and magnesium.” In the context of a chemo-dynamical model, these chemical distinctions between stars reveal where and when each star was born, and thus give insights into the evolution of the Galaxy. As the Galaxy has aged, dying stars have polluted the once pristine interstellar medium with elements heavier than lithium. “This enrichment – or pollution – of the interstellar medium of the Galaxy by heavy elements is an integral part of the evolution of the Galaxy. By studying the correlations between the motion of stars and their chemical composition, we expect to be able to explain – to a large extent – how our Galaxy was assembled, and how it has arrived at its present configuration,” outlines Professor Binney. The role of ESSOG is to develop the conceptual tools, algorithms and computer codes required to achieve this goal. “We have developed a couple of novel approaches to building dynamical models,” continues Professor Binney. “These models give you dynamically consistent fantasy galaxies, which can be compared with the observational data coming from both Gaia and ground-based surveys. By adjusting the fantasy model until it gives us a decent fit, we can map the Galaxy’s gravitational field and determine how stars are distributed in phase space,” he outlines. “The stars in catalogues tend to be relatively nearby, or very luminous, so they are easy to measure. By ‘observing’ the fantasy galaxy in a computer, we can relate the biased contents of the catalogues to what is actually out there.” Early work on the Gaia data will centre on analysing the data for the several million stars for which Gaia has obtained spectra, in addition to measuring astrometry. Later, a slightly different modelling process will be applied to data for the much greater number of stars for which we don’t have a spectrum from which a Doppler shift can be measured. 
“There are different approaches available – we use different groups of stars in different ways,” says Professor Binney. Since the interstellar medium contains smoke (‘dust’) that absorbs light, it’s necessary to model the interstellar medium in parallel with the Galaxy’s stellar and dark-matter content. “The ESSOG project produced a new tool for mapping the interstellar medium using measurements of how strongly light from individual stars has been absorbed. We are now applying this tool to the Gaia data – a very challenging task computationally. ESSOG also involved modelling dynamically the flow of interstellar gas within the Galactic disc. This
study led to a significant decrease in the rate at which the Galaxy’s central bar is thought to rotate. The recently released data will allow us to test whether this revision was correct.”
Cosmological paradigm

The improved understanding of our Galaxy that will flow from ESSOG will test the prevailing cosmological paradigm in greater depth than previously possible. “Since about 2000, we’ve had the Lambda-CDM model, which has been tremendously successful in explaining the large-scale distribution of galaxies and differences between the appearance of galaxies observed at ever greater distances and therefore at ever earlier cosmic epochs. In fact, although many details remain uncertain, we believe that we now understand in broad outline how galaxies formed,” explains Professor Binney. The Lambda-CDM model includes an accelerating expansion of the universe. “Lambda stands for the cosmological constant, which drives the acceleration of the cosmic expansion. On the largest scales, gravity seems to be acting repulsively, causing the expansion of the universe to speed up,” says Professor Binney.

The Gaia data will allow researchers to test the Lambda-CDM model and potentially identify any areas where it could be refined and improved, which Professor Binney says is an important aspect of the project’s research. “In principle we could predict from the Lambda-CDM model and well established physics how galaxies are structured, but only via computations that are unfeasibly complex. So researchers have been guessing what the results of these computations would be. Using the Gaia data and the tools we’ve produced in ESSOG, we can test predictions based on these guesses,” he says. Professor Binney says Gaia will deepen our understanding of all extra-galactic astronomy. “Our understanding of galaxies and the large-scale structure of the universe rests heavily on our understanding of how stars work and evolve,” he says. “This understanding will be made more precise and more secure by having precise distances to stars in our own galaxy that are like those you can see in other galaxies.
Gaia is going to make this possible.” There is also a wider dimension to the ESSOG project. While the Gaia mission is a central part of the ESA’s long-term research programme, it was decided that the data it gathered would be made publicly available, so it’s not a given that the scientific payoff from these data will enhance Europe’s science base. “It’s important that there are groups in Europe that are well-prepared for the release of these data,” stresses Professor Binney.
ESSOG
Extracting science from surveys of our Galaxy

Project Objectives
To develop the tools required to extract science from surveys of our Galaxy. The tools to include chemodynamical models of the stellar and gaseous components and a procedure for mapping interstellar dust. To use preliminary versions of these tools to analyse data from ground-based surveys in anticipation of the arrival of data from Gaia in 2018.
FP7-IDEAS-ERC / Funded under ERC-AG-PE9 – ERC Advanced Grant – Universe sciences / Maximum ERC funding: €1,954,460. Start date: 2013-04-01; end date: 2018-03-31.
Project Coordinator
Professor James Binney
Rudolf Peierls Centre for Theoretical Physics
Clarendon Laboratory, Parks Road
Oxford OX1 3PU
T: +44 01865 (2)73979
E: James.Binney@physics.ox.ac.uk
W: https://www2.physics.ox.ac.uk/research/galactic-dynamics
The Chancellors, Masters and Scholars of The University of Oxford
Wellington Square, University Offices
Oxford OX1 2JD, United Kingdom

Professor James Binney
Professor James Binney studied in Cambridge, Freiburg and Oxford, had postdocs in Oxford and Princeton and joined the Oxford faculty in 1981. His books include graduate-level texts on galactic astronomy and dynamics, books on critical phenomena and quantum mechanics, and ‘A Very Short Introduction to Astrophysics’. He has received medals from learned societies in France, Italy, the USA and the UK. He has been a Fellow of the Royal Society since 2000.
Laboratory bench for verification
The final frontier of space exploration

Integrated circuits need to withstand a harsh radiative environment if they are to function effectively in space. We spoke to Mr Daniel González about the work of the SEPHY project in designing and developing a new Ethernet transceiver device, work which will enhance European competitiveness in what is a fast-moving area.

The harsh environment
of space places heavy demands on equipment, so equipment and components designed for use beyond the Earth’s atmosphere need to be correspondingly robust and reliable. Effective high-speed networking technologies are increasingly essential to modern space systems and the EU is keen to enhance the competitiveness of European companies in this area, a large part of the motivation behind the work of the SEPHY project. “The main goal of the initial funding call was to support the development of integrated circuits for space applications, enabling European countries to compete with components made in the US,” outlines Mr Daniel González, the coordinator of the project. The primary goal of the project is to develop an Ethernet transceiver device, enabling the wider adoption of Ethernet in space applications, including fully synchronous Time-Triggered Ethernet communication, which was developed by Austrian company TTTech, a key partner in the SEPHY consortium.
Protocol stack

A large proportion of the key elements of the overall protocol stack for both regular (‘event-driven’) and scheduled (‘Time-Triggered’) Ethernet has been developed, with only the
physical layer remaining, the part which enables the transmission and reception of packets of information through a cable. This is what Mr González and his colleagues from both the academic and commercial sectors are designing in the project, taking into account the challenges of the space environment. “In order to be used in space, integrated circuits need to be designed in such a way that they can withstand the radiation that they will be exposed to,” he says.

There are two main ways in which a chip can be prepared, or hardened, for use in space; the first is called radiation hardening by process. “This means that the process by which the chip is built has some features that make it well-suited to fabricating integrated circuits for space applications. By process, that means all of the fabrication steps required to fabricate a given integrated circuit,” says Mr González. These integrated circuits are built on silicon wafers, the manufacturing of which can be adapted in such a way that they are more robust and reliable. This approach makes them more suitable for use in space. “This is related to radiation hardening by process,” outlines Mr González. The other method is radiation hardening by design, which means that special features or counter-measures are added in the design process, so that the chip will ultimately be able to withstand the radiation levels found in space; both these methods are being used in the SEPHY project.
“The process that we are using to fabricate this ASIC (Application-Specific Integrated Circuit) is offered by Microchip Technology Nantes. It’s a SOI (Silicon On Insulator) process, which gives full protection against a space radiation effect called Single Event Latchup (SEL), which can cause serious damage,” continues Mr González. “This SOI protects our circuit against SEL, one of the destructive events that can affect a circuit in space.” There are also several other radiation effects to consider, one of which is Total
Ionisation Dose (TID). This relates to the presence of trapped charges, which can cause the transistors in a circuit to behave differently over time. “The greater the radiation dose, the more the transistors – the building blocks of every circuit – will degrade,” explains Mr González.

Researchers aim to protect the device against this effect, as well as against other Single Event Effects (SEEs) aside from SEL, discussed above. “For example, Single Event Transients (SETs) and Single Event Upsets (SEUs) can change the state of some of the circuit blocks,” outlines Mr González. “SEUs affect the storage elements of a device. So, when a device is storing a 0 or a 1 as a value, it flips the content of this memory – so it turns a 0 into a 1, and vice versa. Meanwhile, SETs are spikes produced on the signals that are driven inside the circuit.” The transceiver is designed to withstand these types of effects, either by process or by design, achieving the radiation hardening required for space operations.

The specifications have been developed in collaboration with the project’s industrial partners, including several deeply involved in the space industry, who are also playing a key role in testing the circuits. “They are looking to check if the chip behaves as expected electrically, and they are also assessing its performance under radiation conditions,” says Mr González. While there are several options in terms of sites where the chips could be tested for TID, there are fewer facilities available for heavy ion testing. “The chips will be irradiated with the levels that we are expecting the chips to be exposed to when they are applied in space,” continues Mr González. “We will see how well the chips can withstand these radiation levels.”
A preliminary prototype has already been designed and fabricated, yielding invaluable insights into the improvements and modifications required, and a second chip is currently being fabricated. A great deal of progress has been made over the course of the project, and in future Mr González expects to move towards developing a full prototype of the Ethernet device. “The goal of this project was not to get a qualified component, but we hope to have a very good prototype of this Ethernet device within the next year or so,” he says. The transceiver could be relevant to several industrial sectors beyond the space market, yet the initial motivation in funding the project was to reduce European dependence on US components, so Mr González is keen to stress that the space
SEPHY test chip 1 with specific carrier.
Consortium meeting in Vienna (from left to right): Úrsula Gutierro (ARQ), Daniel González (ARQ), Jesús López (ARQ), Anselm Breitenreiter (IHP), Jean-Marc Vrignaud (ATM), Ulrich Zurek (ATM), Remy Charavel (EC project officer), Christian Fidi (TTT), Artiza Elosegui (TTT), Matthias Mäke-Kail (TTT), Manuel Sánchez (TASE), Pedro Reviriego (UAN), Anna Ryabokon taking photo (TTT).
SEPHY SPACE ETHERNET PHYSICAL LAYER TRANSCEIVER
The SEPHY project aims to develop an ITAR-free and radiation-hardened 10/100Base-T Ethernet transceiver (PHY), which can be used worldwide. The project will foster innovation by developing a new device for the space market, which will reduce European dependence on imports, as it is designed and manufactured with European flows and processes.
This project has been funded under the EU Horizon 2020 COMPET 2014 Enabling European competitiveness, non-dependence and innovation of the European space sector (grant agreement No. 640243).
• Arquimea Ingenieria SLU – Madrid, Spain
• Microchip Technology Nantes – Nantes, France
• TTTech Computertechnik AG – Vienna, Austria
• Thales Alenia Space – Madrid, Spain
• Universitas Nebrissensis – Madrid, Spain
• IHP – Frankfurt (Oder), Germany
Project Coordinator
Mr Daniel González, CTO
ARQUIMEA INGENIERIA S.L.U.
C/Margarita Salas, 10
28919 Leganés, Madrid, SPAIN
T: +34 91 689 80 94
E: firstname.lastname@example.org
W: http://www.sephy.eu/
SEPHY leaflet: http://sephy.eu/leaflet/
SEPHY flyer: http://sephy.eu/flyer/
Mr Daniel González Gutiérrez
Mr Daniel González Gutiérrez received his M.Sc. in Telecommunication Engineering from the Technical University of Madrid and is currently CTO at ARQUIMEA, managing the company’s operations; he previously led the microelectronics group. Before joining ARQUIMEA, Daniel worked for more than ten years at other companies and institutions in the space and consumer microelectronics sectors.
Consortium meeting in Nantes (from left to right): Christophe Guerif (MTN), Pedro Reviriego (UAN), Manuel Sánchez (TASE), Christoph Larndorfer (TTT), Jean-Marc Vrignaud (MTN), Úrsula Gutierro (ARQ), Jesús López (ARQ), Anna Ryabokon (TTT), Anselm Breitenreiter (IHP), Ulrich Zurek (MTN), Yuanqing Li (IHP).
market is the priority. “The technology could be used outside the space sector, but the focus is really on getting the transceiver used in space applications,” he outlines. The project’s work has already attracted a lot of interest from the commercial sector, and so far three non-disclosure agreements (NDAs) have been signed with candidate users of the technology. One area of interest in this respect is launchers, the rockets that are used to carry satellites into space. “The lifetime of these rockets is not long – they are launched, and within a few hours they release the equipment into space. The radiation specifications for these launchers are somewhere between those of the commercial world and the space world,” outlines Mr González. Research is centred on developing the Ethernet device, however, rather than qualifying it for use in space applications, and Mr González says that remains the focus of his attention. “The outcome of the project will be a chip which is
ready to move on to the qualification phase, with a small re-design if necessary, with updates on areas that need to be improved,” he outlines. This work holds clear importance for the wider European space industry. While Europe is home to great scientific expertise, it is essential to maintain a strong research base if European companies are to remain competitive in a rapidly changing market. “Europe is trying to get components in the market that are manufactured and designed within Europe, to reduce our dependency on parts from the US,” says Mr González. The SEPHY project has an important role to play in these terms, helping to foster innovation and building on Europe’s position as a leader in the field, which in turn will bring benefits to the wider economy. “We’re working with the major European space companies. They typically lead the development of satellites, and hold a lot of expertise on space infrastructures,” continues Mr González.
SEPHY Test chip 2 final layout.
SEPHY Test chip 2 on a frame.