EU Research Spring 2018
BEYOND THE HORIZON
Commissioner for Research calls for more private research funding
Focus on China: Research funding in the 21st Century
Environmental forecast: Climate Change in Europe
The Economic Impact of ICT
Disseminating the latest research under FP7 and Horizon 2020 Follow EU Research on www.twitter.com/EU_RESEARCH
Editor’s Note

It’s true that sometimes making progress in one way can set us back in another. Social media, specifically, has become a defining platform of the era for how we choose to communicate, but also for how we choose to compare ourselves with our peers.
What’s alarming is that studies are showing that relying on social media can make us sadder and, ironically, can make us feel isolated rather than sociable. The term ‘Facebook depression’ has now been coined, reflecting the darker side of a system that was supposed to integrate us and keep us in closer contact with people. Darker still is the use of bots to create impressions, bolster views or invent online communities, which has already been shown to affect voters and elections. When Twitter tightened its controls on bots designed to influence voters, it was reported that some people lost a lot of followers in a single day – a false community wiped from their lives.
As a seasoned editor and journalist, Richard Forsyth has been reporting on numerous aspects of European scientific research for over 10 years. He has written for many titles including ERCIM’s publication, CSP Today, Sustainable Development magazine, eStrategies magazine and remains a prolific contributor to the UK business press. He also works in Public Relations for businesses to help them communicate their services effectively to industry and consumers.
The ease of manipulating people online can be terrifying in its scope. We should be very careful with the way we use such tools and I suspect after the scandals that are emerging in the United States with social media manipulation, allegedly from Russia, there will be increased scrutiny and research on the power of social media in the future.
Hope you enjoy the issue.
Richard Forsyth Editor
Contents

35 Mobile Flip
Our biomass resources have historically been under-exploited, due to both economic and logistical factors, but now researchers in the MOBILE FLIP project are taking a fresh look at the topic, as Doctor Tarja Tamminen explains
4 Research News
EU Research takes a look at the latest news in scientific research, highlighting new discoveries, major breakthroughs and new areas of investigation
10 NCRNA The piRNA pathway is known to play a central role in germline reprogramming; now researchers in the NCRNA project aim to investigate its function in other areas, including spermatogenesis and transposon silencing, as Professor Dónal O’Carroll explains
12 Andrea Electroencephalography is a well-established technique for investigating the function of the human brain and certain neurological conditions. We spoke to Professor Silvia Comani about the ANDREA project’s work in developing a novel dry electrode system
14 Aiding the neglected Emotional maltreatment is known to have a damaging impact on children’s mental health, yet it remains extremely difficult to identify and prevent. We spoke to Dr Iris Lavi about her research into the roots of emotional maltreatment
16 Photometa The PHOTOMETA project aims to promote the development of functional metamaterials, which could hold potential across a range of different applications, as Professor Costas Soukoulis explains
19 MOSTOPHOS Organic electronics devices could play a major role in addressing contemporary social challenges. We spoke to Dr Denis Andrienko about the MOSTOPHOS project’s work in investigating the issues which limit the stability of blue phosphorescent materials
Toxic materials are used in most industrial chemical processes; now researchers in the CarbaZymes project are drawing inspiration from the natural world to develop more sustainable processes, as Professor Wolf-Dieter Fessner explains
A number of different methods are available to monitor the dynamics of chemical reactions; now researchers are developing a novel approach using ultrafast EUV pulses. This could help scientists gain new insights into structural rearrangements, says Professor Daniel Strasser
26 Climate Change Climate change is fueling more intense weather patterns and plays a part in triggering weather-related disasters. We look at the evidence to support this.
30 GLOBAL VALUE The purpose of the GLOBAL VALUE project is to enable multinational corporations to measure their impacts on global sustainable development, as Project Manager Norma Schönherr explains
More effective soil and crop management in yam systems could help improve crop productivity and boost food security and income, says Professor Emmanuel Frossard, the Principal Investigator of the YAMSYS project
34 NO Crop Pathogen A deeper understanding of the mechanisms by which pathogens are able to infect their plant hosts will help lay the foundations for improved control strategies, as Dr Angela Feechan and Dr Anna Tiley explain
38 Bayes Knowledge
The underlying assumptions on which professional decisions are reached are not always clear. Bayes Knowledge aims to develop new methods to support evidence-based decision-making, as Professor Norman Fenton explains
40 Falcon Vast amounts of information are available today on how products and services are used. Karl Hribernik tells us about the FALCON project’s work in developing a framework which will help to both improve product-service systems and shorten the development cycle
42 Data4Water Water management is a complex area of research and crosses several disciplinary boundaries. We spoke to Professor Mariana Mocanu about the Data4Water project’s work in coordinating and supporting research
44 CloudSocket We spoke to Robert Woitsch about the CloudSocket project’s work in introducing the concept of Business Process as a Service (BPaaS), which could bring organisations closer to the cloud
45 Dice We spoke to Dr Giuliano Casale about the DICE project’s work in developing a methodology and tools that will help accelerate software development, opening up new commercial opportunities for SMEs
46 WiSHFUL Spilios Giannoulis and Ingrid Moerman explain how WiSHFUL aims to develop software architectures and open interfaces to accelerate the wireless experimentation process
48 WSF Researchers in the Welfare State Futures project are investigating fundamental questions about the future of welfare, as Professor Ellen Immergut explains
51 SI-DRIVE Social innovation projects help to change the way we live and work, yet the field itself is relatively under-researched; now the SI-DRIVE project is taking a fresh look at the topic. We spoke to Jürgen Howaldt, Christoph Kaletka and Antonius Schroeder about their work
54 Social Media Research indicates social media platforms make us feel inadequate and end up reflecting only our own views back at us. It’s time to ask ourselves, is social media really that social?
58 Migration Tens of millions of people crossed the Atlantic during the age of mass migration. Researchers are combining different sources of data to build a deeper picture of the period, as Professor Imran Rasul explains
59 Manutelligence Many companies today offer a combination of products and services to their customers, a trend which opens up new business opportunities. The Manutelligence project is developing a collaborative engineering platform, as Maurizio Petrucciani and Sergio Terzi explain
62 Flex4Grid We spoke to Markus Taumberger about the Flex4Grid project’s work in developing a framework to help flexibly manage both energy demand and generation, making the power grid fit for the future
65 REELCOOP There is a concerted focus in research on the development of renewable energy systems. The Reelcoop project aims to both develop new generation systems and enhance research cooperation around the Mediterranean, as Professor Armando Oliveira explains
66 Electra New real-time control strategies will be required to help the EU grid adapt to changing circumstances, one of the challenges that Luciano Martini and his colleagues in the ELECTRA project are working to address
While millions of people use Europe’s roads, railways, airports and seaports on a daily basis, our transport infrastructure still needs to develop in line with modern demands, an issue central to the work of the FOX project, as Dr Thierry Goger explains
70 Lasig-Twin Laser-based ignition systems could help to improve fuel efficiency in combustion engines. Continued collaboration, knowledge-sharing and effective training are central to the ongoing development of laser spark plug technology, says Dr. Nicolaie Pavel of the LASIG-TWIN project
72 Ashley Avionics systems are an integral part of modern aircraft and perform an ever-widening range of functions in today’s planes. Marina Almeida tells us about how ASHLEY helped to lay the foundations for the next generation of avionics systems
73 XERIC Substantial amounts of energy are required to run auxiliary systems within electric vehicles, which is a major drain on the battery and so limits the range of the car. We spoke to Dr Nino Gaeta about the Xeric project’s work in developing a new climate control system
74 InDev We spoke to Aliaksei Laureshyn about the InDev project’s work in looking deeper into the underlying causes behind road accidents, research which could inform the development of effective countermeasures to improve safety
76 CosmicDawn Cosmic microwave background data holds revealing insights into the period immediately after the big bang, while galaxy surveys can also help improve our understanding of the primordial universe, as Professor Hiranya Peiris explains
Øyvind Mejdell Jakobsen and Ann-Iren Kittang Jost tell us about the TIME SCALE project’s work in developing technologies and knowhow to support the space exploration missions of the future
EDITORIAL
Managing Editor Richard Forsyth firstname.lastname@example.org
Deputy Editor Patrick Truss email@example.com
Deputy Editor Richard Davey firstname.lastname@example.org
Science Writer Holly Cave www.hollycave.co.uk
Acquisitions Editor Elizabeth Sparks email@example.com

PRODUCTION
Production Manager Jenny O’Neill firstname.lastname@example.org
Production Assistant Tim Smith email@example.com
Art Director Daniel Hall firstname.lastname@example.org
Design Manager David Patten email@example.com
Illustrator Martin Carr firstname.lastname@example.org

PUBLISHING
Managing Director Edward Taberner email@example.com
Scientific Director Dr Peter Taberner firstname.lastname@example.org
Office Manager Janis Beazley email@example.com
Finance Manager Adrian Hawthorne firstname.lastname@example.org
Account Manager Jane Tareen email@example.com
EU Research Blazon Publishing and Media Ltd 131 Lydney Road, Bristol, BS10 5JR, United Kingdom T: +44 (0)207 193 9820 F: +44 (0)117 9244 022 E: firstname.lastname@example.org www.euresearcher.com © Blazon Publishing June 2010
Cert no. TT-COC-2200
The EU Research team take a look at current events in the scientific news
Research Commissioner calls for more private funding Carlos Moedas argues that Europe must defragment to attract investment © European Union, 2015
The European Commissioner for Research, Science and Innovation, Carlos Moedas, said that “one of the fundamental pieces that is not working in Europe is investment”, calling for the defragmentation of the continent to attract private investment. “One of the key pieces that is not working in Europe is funding. The United States is much less dependent on bank debt ... and the difference is extraordinary in terms of venture capital,” said Carlos Moedas, speaking at a conference on the digital economy in Lisbon. For this reason, he stressed that Europe has to “attract private capital and, for that, it has to prove that it is not a fragmented set of companies and states”. “The biggest European challenge is to reduce this fragmentation between companies and laws,” he noted, referring to phenomena such as Catalonia’s attempt to become independent and the UK’s exit from the European Union. In his view, there are no “purely national problems”, since issues such as climate, cybersecurity and health “have to do with solutions that can only come from a higher level”. Still, Carlos Moedas pointed out that “globalisation cannot be naive”, since “the rest of the world does not play with the same rules”. Drawing a parallel with the United States, he said that Europeans “are concerned about the environment and the future of their children”, while the United States wants to abandon climate agreements. “The digital world has two characteristics that have come to change politics: speed and scale,” making matters more far-reaching, he said. Speaking of a third phase of the digital age, he argued that Europe has a “fundamental opportunity” to assert itself if it bets on areas such as science, which underpins fields such as artificial intelligence, cybersecurity and virtual currency trading. Still, he admitted that in terms of regulation, Europe is “in the early stages”. “I think that in regulating the digital world, we’re still not sure what’s coming. In 50 years we will look back and think about what we were doing,” he added. Referring to Portugal, he noted that the country stands out from the other European states in the area of entrepreneurship, and highlighted cases of start-ups (companies with great potential for growth) such as Feedzai. “We do not know many in Europe,” he said. Carlos Moedas also said that Portugal must continue on the path of reducing the deficit, but was optimistic: “I see [in Portugal] a country with a stable Government that has managed to get out of a crisis that no one believed to be possible, and meanwhile I look to Greece, which is still under assistance, and I think we can be optimistic.”
China intensifies investment into sustainable development The country is promoting international cooperation in the scientific research of Qinghai-Tibet Plateau to assess climate change and promote the plateau’s sustainable development The Qinghai-Tibet Plateau represents one of the largest ice masses on Earth and has been called the “Third Pole” by scientists. “The Third Pole Environment (TPE) research program inaugurated by the CAS is expected to be listed as China’s major scientific research with global significance in 2018,” said Cao Jinghua, director of the CAS International Cooperation Department. According to Cao, TPE has seen huge progress since it started in 2009. “The project has received around 800 million yuan (around 125 million U.S. dollars) in funds from 2009 to 2017. China’s second comprehensive scientific expedition to the Qinghai-Tibet Plateau is offering new opportunities,” Cao said. The CAS is also pursuing international cooperation on the research. It set up the Kathmandu Centre of Research and Education in Nepal to facilitate cooperation. It invites scientists from South Asian countries, the U.S. and European countries to collaborate on the research. “We hope to play a world-leading role in this research. We are also intensifying research on the North Pole and the South Pole,” Cao said. China conducted its first comprehensive scientific expedition to the Qinghai-Tibet Plateau in the 1970s. In August 2017, China started the second expedition focusing on climate change, biodiversity and ecological changes on the Qinghai-Tibet Plateau, which will last for 5 to 10 years.
Getting to the heart of Marijuana research
While many attest to its healing powers, research into its full potential has long been legally restricted. The movement to legalize medical marijuana has its roots in the 1980s and early 1990s, the worst days of the US Aids epidemic. The disease was a death sentence, and stricken young men sought out marijuana for relief and solace. In San Francisco’s Castro District, a gay Vietnam veteran named Dennis Peron ran an illegal dispensary to supply them. Peron went on to co-write Proposition 215, which California passed in 1996, becoming the first state to allow medical marijuana – “med” for short. Among all the things med is touted as doing, relief from wasting illnesses like Aids and cancer (especially during chemotherapy) is among the more respectable. A reader in the UK writes that when his wife was dying of breast cancer, “I purchased a vaporiser for her; it quickly became invaluable for both pain relief and as a mood enhancer”. Though research is limited, a recent study found a quarter of cancer patients in Seattle use marijuana. While cannabis contains scores of chemicals, the most familiar are tetrahydrocannabinol (THC), which gets people high, and cannabidiol (CBD), a non-psychoactive compound often associated with medical benefits. Beyond this point, little is known about the
plant’s capabilities and limitations as medicine, at least compared to any drug available in pharmacies. That’s because in much of the world, including the US, it has long been illegal or very difficult to research the marijuana plant for medical use. A US study found a significant reduction in demand for pharmaceuticals to address conditions like anxiety, depression and pain in states where med is legal. Some patients clearly prefer med. What’s not yet known is how well it performs relative to conventional medicine. There’s justifiable excitement about medical breakthroughs which legalization may bring. But most haven’t arrived yet. According to the US National Cancer Institute: “Studies in mice and rats have shown that cannabinoids may inhibit tumor growth by causing cell death, blocking cell growth, and blocking the development of blood vessels.” Within the cannabis industry, the inclination is to spin positive developments to the greatest effect. So news of cannabis possibly displaying cancer-fighting properties in lab tests is taken to mean: weed cures cancer! And this kind of misinformation can cause pain. The situation is that until research into marijuana is legalised, we will never know its true benefits or costs.
The power of art to improve childhood learning Study shows immersion in the arts helps children learn other subjects more effectively Preschools have a single, all-important mission: getting kids ready to learn. Students should emerge knowing their letters, numbers, and shapes, as well as a certain level of emotional awareness. A recent study identifies a uniquely effective way to impart this foundational knowledge: immersion in the arts.
It reports that low-income children who attended a Head Start program experienced a more robust rise in readiness if their program included daily music, dance, and visual arts classes.
The results “support the broad educational value of the arts,” writes a research team led by psychologist Eleanor Brown of West Chester University. “Our findings indicate that the arts may hold value not only for art’s sake, but also for advancing children’s overall school readiness.” The study, published in the journal Early Childhood Research Quarterly, featured 265 children between the ages of three and five in the Philadelphia area. All were from poor or low-income families. One hundred and ninety-seven of them participated in the Kaleidoscope Preschool Arts Enrichment Program, run by the Settlement Music School. The rest attended another fully accredited, well-regarded Head Start program.
Children at both sites engaged in some arts-based activities as part of their homeroom classes. But in addition, those in Settlement’s preschool attended a dozen 45-minute-long arts classes each week. “They were taught in fully equipped artist studios by credentialed artist-teachers for music, dance, and the visual arts,” the researchers report.
The results: “Children at the arts-integrated preschool showed greater growth than their counterparts on a composite indicator of school readiness, as well as in two specific concept areas.” Those were texture/material, which presumably reflects their experience working with visual-art materials, and self/social awareness.
Altogether, the results suggest the arts could be used “to close gaps in academic and social-emotional school readiness” between students from poor and better-off families. Of course, underprivileged kids are relatively unlikely to have access to arts education, and Brown and her colleagues concede the Settlement model will be difficult to duplicate. “Arts instruction has sometimes been regarded as a distraction from the science, technology, engineering, and math disciplines known as STEM,” the researchers write. These results “argue against pitting the arts against the sciences and mathematics, and align with arguments that the arts not only enhance the development of the whole child, but in fact can be used to teach STEM content.” Leonardo da Vinci would surely agree.
The Birth of new economics The victory of science over religion in Europe has had adverse effects on the development of social science Devaluing religion led to a loss of understanding of the spiritual and emotional aspects of man. European philosophers and social scientists created models of men as brains suspended in vats, with no heart and no soul. While a strong sense of morality is built into our nature, discarding the heart and soul from scientific consideration led to a loss of the understanding of the nature of morality. Since morality cannot be given an empirical foundation, it was abandoned as a meaningless concept within a scientific framework for the construction of knowledge. This has resulted in an economic theory which has become blind to concerns for justice, equity, poverty, exploitation and similar issues, which were central to economics in an earlier era (when it was a branch of moral philosophy). Logical Positivism represents a sophisticated and complex misunderstanding of science, which rose to prominence in the early 20th century and had a spectacular crash later on, as philosophers became aware of its defects. The main idea of this philosophy is that scientific theories should only be concerned with observables and should ignore, or eliminate, unobservables. Under the influence of positivism, behavioural psychology ignored the deeper and unobservable structures of human thoughts and emotions, and instead focused on observable behaviours, stimuli and responses. A similarly shallow analysis led economists to posit human behaviour as being driven solely by the maximisation of lifetime consumption. Focus on observables and quantifiables has led to a single-minded concentration on wealth as the sole goal of economic endeavour. It is only with the re-discovery of the multi-dimensional nature of our lives that the deep defects in this measure, and the damage it is causing, are gradually becoming visible. The most important aspects of our lives are based on unobservables and un-quantifiables. The spectacular technological progress of the West has dazzled our eyes, making it difficult to see any defects in their structures of knowledge. But learning how to split atoms and build bombs and spaceships does not lead to insight into the secrets of the human heart. Massive gains in material wealth have been accompanied by increasing social misery everywhere. We can all see the breakdown of communities and families, increasing inequalities and injustice, environmental collapse, and senseless wars leading to millions of deaths, with billions living below the poverty line. At the root of the failure to solve our social problems is a hopelessly defective Western social science which denies the existence of the heart and soul. Hope for the future of humanity lies in a radical re-construction of the social sciences, one which re-integrates the heart and soul as the starting point for the study of human beings and societies.
New blood test to detect 8 types of cancer before symptoms appear Study shows the test is able to catch cancer cases anywhere from 33 percent to 98 percent of the time A new blood test can screen for eight cancer types and help spot the tumour location, a recent study from the Johns Hopkins Kimmel Cancer Centre shows. The research, “Detection and localization of surgically resectable cancers with a multi-analyte blood test,” was published in the journal Science. The non-invasive test is called CancerSEEK. It simultaneously quantifies eight proteins and detects highly specific gene mutations related to cancer from DNA in the blood. Collectively, the screened cancer types account for more than 360,000 (60 percent) of cancer deaths in the U.S. Importantly, five of these cancers – ovarian, pancreatic, stomach, liver, and esophageal – currently lack a screening test. The other cancer types covered by this test are colorectal, breast, and lung. “The use of a combination of selected biomarkers for early detection has the potential to change the way we screen for cancer, and it is based on the same rationale for using combinations of drugs to treat cancers,” Nickolas Papadopoulos, PhD, the study’s senior author, said. Although the test does not cover all cancer types, the investigators observed that expecting such an all-encompassing tool is not realistic. CancerSEEK may be used with other routine blood work by primary care providers. Overall, the test shifts the focus from late-stage to early disease, which could be key in reducing cancer deaths in the long term, the team believes.
Facebook appoints new head of research in AI Former IBM exec Jérôme Pesenti will be taking the lead at Facebook’s AI research (FAIR) arm Yann LeCun, who previously held the lead role, will be shifting to a more research-focused position, where he will continue to drive the scientific agenda of FAIR as chief AI scientist, a Facebook spokesperson tells TechCrunch. “There was a need for someone to basically oversee all the AI at Facebook, across research, development, and have a connection with product,” said LeCun. “AML and FAIR were reporting to the CTO, who no longer has the bandwidth to take care of that, given the increased importance of AI and more systems built around deep learning.” The division now has 130 employees, and one of Pesenti’s tasks will be leading the group as it grows. A few days ago, the company announced plans to double the size of its Paris AI research team to 100 people by 2022.
Increased public research funding ‘fails to bring more citations’ New research finds that working with peers overseas has a greater influence on citation impact than governmental research funding
A rising class of internationally renowned scientists are “decoupling” from their national research base, according to new research. In fact, increases in national public research spending tend to have a “negative or negligible” effect on the number of citations that a paper receives, according to an analysis of research in 42 countries published on the preprint server arXiv ahead of peer review. One of the authors of the research said that the findings were “very interesting” but that the implications for policy were not yet clear. The study builds on previous research published in Nature earlier this year by one of the authors, Caroline Wagner, Milton and Roslyn Wolf chair of international affairs at Ohio State University. With colleagues she found that public research
and development spending is only weakly linked to the citation impact of a country’s research paper output. Professor Leydesdorff, the leader of this latest study, said that it is widely known that if governments spend more money on research they get more publications, but it is “not the case” that this corresponds with higher levels of citations. He added that the implications for research policy were not yet clear, as the finding needs further testing. It has become increasingly clear over the past 10 years that there is a “special layer” of scientists who comprise the top 1 per cent worldwide and are independent of the bottom layer of researchers, he said.
Groundbreaking research centre opens in Ireland €9.7 million renewable energy centre opens doors at Queen’s University Belfast Funded by the EU’s INTERREG VA Programme, managed by the SEUPB, the Bryden Centre for Advanced Marine and Bio-Energy Research will recruit 34 PhD students across the marine and bioenergy disciplines. Match-funding for the project has been provided by the Department for the Economy in Northern Ireland and the Department of Business, Enterprise and Innovation in Ireland. This research includes the use of tidal power at Strangford Lough and the North Antrim Coast, ocean energy sites in Western Scotland, as well as the potential for wave and tidal power generation in Donegal. The abundance of natural energy resources, value in organic waste and the opportunities for the circular economy in the inter-regional area have also driven the focus of the bio-energy research. The potential for Scotland, Northern Ireland and Ireland to become leaders in marine renewable energy is vast.
Acting Vice-Chancellor of Queen’s University Belfast, Professor James McElnay, commented: “The role of Queen’s University in leading the Bryden Centre for Advanced Marine and Bio-Energy Research is substantial to the University and to the entire renewable energy sector in Northern Ireland and Ireland, producing vital cross-border research. “Queen’s is already renowned for our research in this area through the Centre of Advanced Sustainable Energy. This partnership will continue to build and expand our expertise and help to develop the next generation of leaders in renewable energy research and education.” The project also aligns with the EU’s Energy 2020 agenda, specifically the renewable energy directive which requires that all 28 member states meet at least 20% of their total energy needs with renewables by 2020.
Crisis in Climate Change research Scientists warn Canadian climate science faces crisis that may be felt globally In an open letter addressed to Justin Trudeau, more than 250 scientists from 22 countries highlight their concern over the imminent end of the C$35m Climate Change and Atmospheric Research program. Launched in 2012, the program funded seven research networks that explored issues such as the impact of aerosols, changing sea ice and snow cover, as well as atmospheric temperatures in the high Arctic. Much of the research emerging from the program was focused on Canada’s Arctic, yielding data sets that were used around the world by scientists seeking to better understand climate change and its impacts. The government’s decision came as a surprise to many in Canada, said Dan Weaver of Evidence For Democracy, the research advocacy group who published the letter on Monday. “The government has taken great effort to engage with policies around climate and climate education, green energy and a lot of these great things,” he said. “But somehow along the way, the support for the atmospheric science – the underlying science of the issue – has been overlooked.”
In response to the letter, Canada’s minister of science pointed to the additional C$70m her government had set aside for climate research in the last budget, adding to the C$37m provided annually by federal research granting councils. “We are doing more to combat climate change than any Canadian federal government in history,” Kirsty Duncan said in a statement. “Our government will continue to support and invest in the actions necessary to address climate change.” Weaver welcomed these efforts, but he said they should be paired with long-term funding for research. “If Canada isn’t contributing this key piece of the puzzle, no one else can,” he said. “The Trudeau government is taking concrete actions on climate and science in a variety of ways, but it’s missing a critical piece,” he said. “And that’s what this is all about – ensuring that this gap is filled in some way or another.”
How to F###ing feel better

The science of why swearing physically reduces pain

“Pain used to be thought of as a purely biological phenomenon, but actually pain is very much psychological. The same level of injury will hurt more or less in different circumstances,” says Richard Stephens, a psychologist and author of the book “Black Sheep: The Hidden Benefits of Being Bad”. We know, for example, that if male volunteers are asked to rate how painful a stimulus is, most of them will say it hurts less if the person collecting the data is a woman. Pain isn’t a simple relationship between the intensity of a stimulus and the severity of your response. Circumstances, your personality, your mood, even the experience of previous pain all affect the way we experience a physical hurt. Stephens designed a particularly cunning experiment with one of his undergraduates, Claire Allsop. The study was so neatly devised that she won a prestigious award from the British Psychological Society for it. Allsop wanted to know whether she could increase pain tolerance by making someone feel more aggressive. If pain tolerance depends on “innate” aggression, then it shouldn’t be possible to induce mild-mannered people to suffer for longer. But if, as the swearing study showed, the same person can stand far greater levels of pain when swearing than when not, might swearing actually cause aggression levels to rise, increase arousal, and help us deal with pain that way?
“We were looking at things we could do in the lab, and one easy way is to have them play a first-person shooter game,” explains Stephens. In fact, each of her volunteers played either a first-person shooter – one of those video games where you run around trying to kill people before they kill you – or a golf game. To test exactly how the game had affected the volunteers, Allsop then had them fill in a hostility questionnaire in which they rated themselves from 1 to 5 against adjectives like explosive, irritable, calm, or kindly. Finally, she used a test to see how aggressively primed the students were. The test is a kind of solitary hangman – she showed the volunteers prompts like “explo_e” or “_ight.” Those who responded with “explode” and “fight” she classified as feeling more aggressive than those who thought of “explore” or “light.” “We basically showed the same pattern of effect as we did for swearing: they could withstand pain longer, and said they perceived it as less painful, and they also showed a rise in heart rate.” After the golf game the male students could immerse their hands for an average of 117 seconds, females an average of 106 seconds. After shooting people, those times jumped to 195 seconds for the men and 174 seconds for the women. The result also suggests a converse experiment to me: can we rate the severity of swear words by how much analgesic effect they have? Rather than asking people to say subjectively whether they think a swear word is mild, moderate, or severe, wire them up to heart rate monitors and have them stick their hands in ice water. Perhaps that’s something for the team at the Oxford English Dictionary to consider ahead of their next edition.
Following the piRNA pathway to genomic stability

The piRNA pathway is known to play a central role in germline reprogramming; now researchers in the NCRNA project aim to investigate its function in other areas, including spermatogenesis and transposon silencing. We spoke to Professor Dónal O’Carroll about the project’s work in using cutting-edge techniques to gain new insights into the role of the piRNA pathway

A
core responsibility of the mammalian germline is the transmission of the genome from one generation to the next. One of the major challenges in maintaining the integrity of the germline is to ensure that transposons, DNA sequences that constitute 40-50 percent of the mammalian genome, do not cause too many de novo mutations, as Professor Dónal O’Carroll explains. “If these transposons are transcribed they can effectively copy and paste themselves into novel locations in the genome, and that creates genetic damage,” he outlines. Based at the University of Edinburgh’s Centre for Regenerative Medicine, Professor O’Carroll investigates fundamental questions around both short and long non-coding RNAs and the mammalian germline. “The germline is derived from somatic cells. They’ve got two sets of chromosomes – one maternal, one paternal – and those chromosomes are not epigenetically equal,” he continues. “During mammalian development there’s an event called germline reprogramming, where all the epigenetic information and all the DNA methylation are erased, and they have to be put back.”
Germline reprogramming

There is a window during germline reprogramming when some transposons may become active, and unless they’re suppressed or targeted in some way, this
could cause de novo integrations and mutations. This is where the piRNA pathway (piwi-interacting RNA), which relates to a class of small, non-coding RNA molecules, comes in. “The classical function of the piRNA pathway is to repress transposons. As it represses transposons during germline reprogramming, it also signals for them to be de novo DNA methylated, to silence them later in life,” explains Professor O’Carroll. This topic has attracted a lot
Histological section of a mouse seminiferous tubule. The most undifferentiated germ cells are found in the basal (outer) part of the tubule that differentiate as they progress towards the lumen (centre) where the nascent sperm cells are visible.
of research attention over the years, yet there is much still to learn about the wider role of the piRNA pathway; now Professor O’Carroll and his colleagues aim to shed further light on the topic in an EC-backed project. “We wanted to understand the function of the piRNA pathway in more detail, beyond its central role in germline
reprogramming. How does it function later on in spermatogenesis – does it regulate transposable elements? Does it contribute to spermatogonial stem cell biology?” he asks. Researchers have been applying conditional genetics methods in this work, using engineered point mutations to make catalytically inactive proteins and gain new insights into the underlying mechanisms behind spermatogenesis, the process by which mammals produce sperm cells. These techniques are being applied in mice, with researchers investigating how transposons are silenced. “We can delete genes, and we can change and restructure their function in vivo by removing active enzymes,” outlines Professor O’Carroll. A wide variety of other techniques are also being applied in the project, which allows Professor O’Carroll and his colleagues to investigate further key questions around the piRNA pathway. “In our laboratory we essentially couple genetics with high-throughput sequencing from very defined populations of cells. We’re looking at cells from the testis,” he continues. “The genetics techniques allow you to test a particular hypothesis around the role of a gene. So if a gene is important, then the sequencing will give you more detail. The sequencing in other experiments will then help you understand the mechanism by which the gene is important.”
Non-coding RNA pathways and the mammalian male germline

Project Objectives
We propose analysing the contribution of both short and long non-coding RNAs to mammalian male germ cell development and SSC homeostasis. In the mouse male germline the small non-coding piRNAs and their interacting Piwi proteins Mili and Miwi2 are essential for the establishment of epigenetic transposon silencing. We will determine if Miwi2 expression identifies the elusive SSC population with long-term self-renewal capacity in vivo and explore the biology of this adult stem cell population.
The project itself set out to achieve four main goals, the first of which centred on showing that the piRNA pathway does indeed play an important role in repressing transposons in meiotic cells. Researchers have been able to show that it does this by cleaving the transposon messenger RNA. “A piwi-interacting protein called MILI acts almost as a pair of scissors, which is guided to its target by a piRNA. We were able to show that piRNA is the trigger for the molecular scissors. It enables the molecular scissors to cut and neutralise it,” says Professor O’Carroll. The reason this happens in meiosis is that there are big changes to the chromatin landscape at this time, as many different events are affecting chromosomes. “These changes to the chromatin landscape enable certain transposons to become expressed, but these are post-transcription regulators,” explains
in spermatogonial cells. “Its function during germline reprogramming is essential for the spermatogonial stem cells, because if you don’t have normal reprogramming then you have defective gene expression in the spermatagonia,” outlines Professor O’Carroll. The last aim in the project relates to another type of non-coding RNA, called long noncoding RNAs, and how they complicate the germline. “We recently published a paper saying that the germline has a very diverse group of long-coding RNAs, many of which originate from defined classes of transposable elements,” continues Professor O’Carroll. A number of other papers have been published around these four key aims of the project, with the wider goal of building a deeper understanding of the role of the piRNA pathway, beyond its function in germline reprogramming. “We want to understand the contribution of the piRNA
We wanted to understand the function of the piRNA pathway in more detail, beyond its central role in germ-line re-programming. How does it function later on in spermatogenesis – does it regulate transposable
elements? Does it contribute to spermatagonial stem cell biology? Professor O’Carroll. “In the second aim, we asked whether a second Piwi protein called MIWI2 was expressed in a population of spermatagonial stem cells. The answer is that it is indeed expressed.”
Stem cell activity This population of cells can regain stem cell activity, even after damage to the testis, and are very important to the regenerative capacity of an individual in recovering from injury. The third aim in the project centered around investigating why MIWI2 is important
pathway to spermatogenesis, essentially to ensuring that spermatogenesis actually works,” says Professor O’Carroll. There are many unanswered questions in this respect, so beyond the term of the current project, Professor O’Carroll plans to pursue further research in this area in future. “We understand now that reprogramming is important for latent spermatagonial stem cells. As reprogramming is so important, we would really like to understand the mechanics of reprogramming in more detail,” he says.
Project Funding

ERC-SG-LS3 - ERC Starting Grant - Cellular and Developmental Biology (ERC-2012-StG_20111109).
• The University of Edinburgh, United Kingdom (Coordinator) • European Molecular Biology Laboratory, Germany
Project Coordinator

Professor Dónal O’Carroll
Chair of Stem Cell Biology
Associate Director, MRC Centre for Regenerative Medicine
SCRM Building, University of Edinburgh
5 Little France Drive
Edinburgh EH16 4UU
United Kingdom
T: +39 06 900 91222
E: email@example.com
W: http://www.crm.ed.ac.uk/research/group/rna-function-germ-and-stem-cell-biology

Professor Dónal O’Carroll
Professor Dónal O’Carroll has been Chair of Stem Cell Biology and Head of the Institute of Stem Cell Research since 2015. He has also been Associate Director of the MRC Centre for Regenerative Medicine since 2015, a Wellcome Trust Senior Investigator since 2015 and an ERC Investigator since 2012, and he was a Group Leader at EMBL Monterotondo between 2007 and 2015. He has been an adjunct member of faculty at The Rockefeller University since 2007, where he was a postdoctoral researcher between 2001 and 2007. He attained his PhD in 1999 at the Research Institute of Molecular Pathology in Vienna.
Picking up the brain’s electrical signals

Electroencephalography is a well-established technique for investigating the function of the human brain and certain neurological conditions, yet conventional methods have some significant shortcomings. We spoke to Professor Silvia Comani, PhD, about the ANDREA project’s work in developing a novel dry electrode system

A technique designed
to record electrical activity in the human brain, electroencephalography (EEG) enables scientists to gain new insights into its function. The standard approach with EEG is to use so-called ‘wet’ electrodes, so named because a conductive material like a paste or gel is required to enhance the quality of the recorded signal and to reduce the impedance between the sensor and the surface of the head, yet this procedure has a number of drawbacks. “Firstly, this is a time-consuming procedure, especially when high-density recordings are required. For each electrode, you need to apply the paste, to adjust the impedance and to achieve the best possible contact between the head and the sensor. This takes on average a minute per electrode,” explains Professor Silvia Comani, the Principal Investigator of the ANDREA project. A second major disadvantage of traditional ‘wet’ electrode systems is that the preparation of the skin during the application of the gel may induce allergies in the subject and errors in the recording. “This gel may leak, and that may lead to cross-bridges between adjacent electrodes, hence to a contamination of the recording,” outlines Professor Comani.

Prototype cap with 97 dry Multipin electrodes (cap turned inside out).
The ANDREA project

The ANDREA project aims to develop a novel, dry electrode system which will overcome these issues. There are already some systems which use dry electrodes, yet in many cases they are quite painful for the subject to actually wear; Professor Comani says researchers in the project have developed innovative, flexible polymer electrodes that can be used with relative ease. “We’ve been developing electrodes in a specific shape, so that they can be kept on the subject’s head for up to an hour with no real problems, no pain,” she explains. These electrodes are integrated into a cap network, which can be adjusted to fit the contours of an individual subject’s head. “The cap is cut to fit different shapes of head as best as possible. It is extremely important that the same pressure is applied on each electrode all over the head, in order to have the same signal quality throughout the entire cap,” outlines Professor Comani. “For this purpose we have developed a so-called adduction mechanism, so that there is an even distribution of pressure.” This means that good-quality signals can be recorded across the cap, while the optimal
The big advantage of using dry electrodes is the ease of mounting a high-density cap and recording the EEG. If a patient is suffering an epileptic crisis, you can rapidly mount the cap and record the foci, where you think the epileptic crisis originates. If you’re interested in basic science, using dry electrodes will reduce the overall recording time.

pressure level will also be more comfortable for the subject, so that the cap can be worn for longer and signals can be acquired for longer, as required for some neurological investigations. The number of electrodes in the cap may vary depending on the specific purpose of the recording, whether it’s looking to gain new insights into epilepsy for example, or monitoring how a sports team works together to achieve a shared goal. “The most useful number of electrodes in the cap depends on the application – but from the technological point of view there is no limitation on the
number of electrodes that you can have in a single cap,” says Professor Comani. A number of different systems have been developed within the project. “We’ve been developing systems with 8 electrodes, as they are extremely useful for sport applications. They are easy to wear and we can quickly pick up signals that are really useful for sport applications in terms of training and performance,” continues Professor Comani. A different application may require more electrodes, in which case the cap network can be adjusted to record the signals, from which scientists can learn more about how the brain performs specific tasks. This is a complex area of research, and Professor Comani says different areas of the brain may be involved in performing a specific task. “Traditionally it was thought that there are areas of the brain that are dedicated to a task, like the motor areas for example. However, in recent years it’s become clear that it’s not just one area, or a couple of areas, that are dedicated to a certain type of task – for instance motor or cognitive tasks – but rather a network of areas,” she explains. This makes it more complex to relate brain signals to specific tasks. “Analysing EEG signals typically means
analysing not just the signal recorded by one electrode, but analysing the ensemble of signals recorded by all sensors simultaneously,” says Professor Comani. “The challenge of reconstructing the activity of specific cortical sources in relation to EEG signals is related to the solution of the inverse problem: from the EEG signals produced we reconstruct the activity of the brain sources that have generated them.” There are certain mathematical techniques which can be used to decompose and re-project the EEG signal over the scalp, in order to reconstruct what sources were active at a given
Fig. 1: Overlay plot of 5 sec of spontaneous EEG containing an eye blink artifact. Frontal electrodes L1, LL1, R1, and RR1 for a single volunteer shown. Dry and conventional recordings acquired sequentially; filtered 1-40 Hz.
Fig. 2: Overlay plot of 50 sec of spontaneous EEG containing externally triggered eye blink artifacts. Frontal electrodes L1, LL1, R1, and RR1 for a single volunteer shown. Dry and conventional recordings acquired sequentially; filtered 1-40 Hz.
ANDREA
Active Nanocoated Dry-electrode for EEG Applications

Project Objectives
The scientific objective of the ANDREA project was to develop a novel dry electrode EEG system with an adjustable cap network, provided with an automated sensor positioning mechanism, active pre-amplification and a software toolbox for artifact removal. The novel technologies address the requirements of high signal quality and reliability, mobility, high patient/subject comfort and long-term use for broad EEG employment.
time during the recording. By using these techniques in different frequency bands, researchers can investigate which brain areas were involved in the execution of a task. “Depending on what types of tasks you are interested in, you can filter off all unnecessary frequencies, those that are not of interest at that time,” outlines Professor Comani. Other physiological signals, like blinking for example, may interfere with the signal of interest in research; this is an issue Professor Comani and her colleagues in the project have been working to address. “We have developed new techniques to get rid of the interference from those artefacts, and to retain the artefact-free brain signals,” she continues. “For example, there are the so-called data-driven techniques, which consider specific characteristics of the signals, which can be used to separate the artefactual signals from the true brain signals.”
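As an illustration of the decompose-and-re-project approach described above, the sketch below simulates a blink-contaminated multichannel recording, unmixes it with independent component analysis (one family of the data-driven techniques mentioned), removes the blink component and re-projects the rest. The mixing matrix, blink shape and use of scikit-learn’s FastICA are illustrative assumptions, not the ANDREA project’s actual pipeline.

```python
# Illustrative sketch (not ANDREA's pipeline): remove a blink artefact by
# decomposing multichannel EEG into independent components and
# re-projecting without the artefactual component.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 250                          # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)      # 10 s of synthetic data

alpha = np.sin(2 * np.pi * 10 * t)        # a 10 Hz "brain" rhythm
blink = np.zeros_like(t)
blink[::2 * fs] = 1.0                     # one blink every 2 s
b, a = butter(2, 5 / (fs / 2))            # low-pass to smooth the spikes
blink = filtfilt(b, a, blink)
blink = 3 * blink / np.abs(blink).max()   # blinks dwarf cortical signals

S = np.vstack([alpha, blink])             # true sources (2 x n)
A = np.array([[1.0, 0.9], [0.8, 0.2],     # assumed 4-channel mixing
              [0.6, 1.2], [1.1, 0.4]])
X = A @ S                                 # "recorded" EEG (4 x n)

ica = FastICA(n_components=2, random_state=0)
comps = ica.fit_transform(X.T)            # (n x 2) estimated sources

# Flag the blink component by correlation with a blink reference; in
# practice a frontal channel (e.g. L1 or R1 in the figures) would serve.
corr = [abs(np.corrcoef(comps[:, i], blink)[0, 1]) for i in range(2)]
comps[:, int(np.argmax(corr))] = 0.0      # drop the artefact component

X_clean = ica.inverse_transform(comps).T  # re-project over the scalp

blink_in_raw = abs(np.corrcoef(X[0], blink)[0, 1])
blink_in_clean = abs(np.corrcoef(X_clean[0], blink)[0, 1])
```

In real recordings the sources and mixing matrix are unknown, so the blink component is identified from frontal channels or blink templates rather than from a ground-truth reference as done here.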
System application

The system itself has been applied to both clinical and non-clinical populations, with researchers comparing it to the traditional ‘wet’ systems and aiming to assess its efficacy in certain contexts, for example evaluating patients with neurological conditions. The project has collaborated closely with a private healthcare clinic, which gave Professor Comani and her colleagues the opportunity to test the system on patients with epilepsy or dementia. “We had a cap with 64 electrodes, and we compared that to the traditional clinical ‘wet’ system, with 22 electrodes,” she explains. Both the performance of the cap and the signal quality were evaluated. “We measured the impedance between electrode and skin in both systems, and also asked patients to describe their comfort on a scale of 1 to 10 – the higher the value, the higher the discomfort. Then, from the technological point of view, we evaluated the dynamic range of the electrodes,” continues Professor Comani. “In terms of signal quality, we evaluated the signal-to-noise ratio during blinking, and eye closing and opening.” The results so far have been positive, with researchers finding that in the majority of cases
neurologists were able to reach a diagnosis using just the data from the dry electrodes. This could enable medical professionals to gain deeper insights into neurological conditions like epilepsy. “The big advantage of using dry electrodes is that if a patient is suffering an epileptic crisis, you can rapidly mount a high-density cap, record the EEG signals from the cortex, and reconstruct the position of the foci, where you think the epileptic crisis originates. Whereas mounting a wet EEG system takes time, and one might miss the possibility to register the EEG during a crisis,” explains Professor Comani. This information on the foci can then inform surgery. “With EEG you can localise the foci of the epilepsy. Those foci can be ablated, but if you don’t have a recording during the epileptic crisis then you have to provoke a crisis in a patient in order to localise the foci,” says Professor Comani. “Using dry electrodes would mean that you could easily record the EEG, and therefore be able afterwards to localise the epileptic foci, without having to provoke an epileptic crisis in the patient.” Researchers are also looking to apply the system to non-clinical populations, one of which is a team of basketball players. Acquiring EEG signals from a team, during either a match or practice, could allow scientists to analyse how they work together. “The players have different roles within the team, but they’re all working towards the shared goal of winning the game. They behave collaboratively,” points out Professor Comani. Many theories have been suggested to describe how team members interact, but they have not always been substantiated by quantitative measures of brain activation within each member; this is an area of great interest to Professor Comani. “This is extremely interesting from the scientific point of view, to see these so-called functional connections between different subjects, which are not supported by anatomical connections.
Functional connections are found at a frequency level, in the recorded signals,” she says. “Social neuroscience is a potential field of application for the ANDREA system; the field would greatly benefit from this technology.”
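The signal-quality comparison Professor Comani describes, the signal-to-noise ratio during blinking, can be sketched as below. The data, noise levels and channel names are synthetic stand-ins assumed purely for illustration; they are not measurements from the ANDREA trials.

```python
# Sketch of the kind of signal-quality comparison described above: the
# signal-to-noise ratio (SNR) of an event (an eye blink) relative to a
# resting baseline, computed for a dry and a wet recording. All data here
# are synthetic; the noise levels are illustrative assumptions.
import numpy as np

def snr_db(event_epoch, baseline_epoch):
    """SNR in decibels: event power relative to baseline power."""
    p_event = np.mean(np.square(event_epoch))
    p_base = np.mean(np.square(baseline_epoch))
    return 10.0 * np.log10(p_event / p_base)

rng = np.random.default_rng(0)
fs = 250                              # assumed sampling rate, Hz
blink = 5 * np.hanning(fs)            # a 1 s blink-shaped deflection

# Simulated frontal-channel epochs: blink plus sensor noise. We assume
# the dry electrode is somewhat noisier than the gel-based one.
dry_event = blink + rng.normal(0, 1.0, fs)
wet_event = blink + rng.normal(0, 0.5, fs)
dry_base = rng.normal(0, 1.0, fs)     # resting baseline, dry
wet_base = rng.normal(0, 0.5, fs)     # resting baseline, wet

print(f"dry SNR: {snr_db(dry_event, dry_base):.1f} dB")
print(f"wet SNR: {snr_db(wet_event, wet_base):.1f} dB")
```

With these assumed noise levels the wet recording scores a higher SNR; the project’s finding was that the dry system nonetheless delivered diagnostically usable signals.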
Project Funding

ANDREA is an EU-funded FP7-PEOPLE Marie Curie Industry-Academia Partnership and Pathways (IAPP) project running from January 1, 2014 until December 31, 2017.
The ANDREA consortium merges the complementary expertise and resources in biomedical engineering, material science, biomedical signal processing, neuroscience and clinical neurology available at 3 academic partners (BIND Center at the University of Chieti, Italy - Coordinator, BMTI at the Technische Universitaet Ilmenau, Germany, the Faculty of Engineering at the University of Porto, Portugal), one industrial partner (eemagine Medical Imaging Solutions GmbH, Germany) and a private hospital (Casa di Cura Privata Villa Serena, Italy).
Professor Silvia Comani, PhD
Coordinator - EU Project ANDREA
Department of Neuroscience, Imaging and Clinical Sciences
University “G. d’Annunzio” of Chieti-Pescara
Via dei Vestini, 33
66013 Chieti Scalo - Italy
T: +39 0871 3556925
E: firstname.lastname@example.org
W: www.andreaproject.eu

Professor Silvia Comani, PhD
Silvia Comani, PhD is Associate Professor of Applied Physics and Director of the BIND – Behavioral Imaging and Neural Dynamics Center. Professor Comani received her degree in Physics at the University of Bologna (Italy), and her PhD in Physics at the Catholic University of Louvain-la-Neuve (Belgium). Her main research interests are the development of novel methods for the analysis of biomedical signals and their application in neuroscience. She is the author of many articles published in international peer-reviewed scientific journals, and serves as a reviewer for such journals.
Getting to the core of emotional maltreatment

Emotional maltreatment is known to have a damaging impact on children’s mental health, yet it remains extremely difficult to identify and prevent. We spoke to Dr Iris Lavi about her research into the roots of emotional maltreatment, which could hold important implications for the development of effective intervention programmes

A stable, emotionally
supportive relationship with their parents is crucial to child development, particularly during their early years. The vast majority of parents are of course committed to supporting their children, yet there are also cases where parents neglect their emotional needs, as Dr Iris Lavi explains. “It may be that parents are depressed, or have other psychological difficulties that prevent them from attending emotionally to their child, and from bonding with them,” she says. Based at the University of Haifa’s Centre for Research and Study of the Family, Dr Lavi is investigating a number of significant questions around the emotional maltreatment of children, building first of all from a clear understanding of what it means. “First of all, emotional maltreatment is a pattern of behaviour. It’s not an isolated occurrence that happens every once in a while, but a pattern of behaviour in the relationship between the child and the parent,” she stresses.
This could mean name-calling or aggressive behaviour from parents towards their own children, which often leads to a cold and alienating atmosphere in the home that affects a child’s confidence and sociability. This in turn can have a serious impact on the mental health of the child. “A child who has been emotionally maltreated may lack confidence and show signs of depression. Typically these children have an inability to form a positive self-image,” explains Dr Lavi. These cases are often difficult to identify, however, as there may not be clear physical signs. “Sometimes these children are physically, educationally and medically well cared for,” continues Dr Lavi. “Also, there is a continuum of what is thought to be normative behaviour, both culturally and individually, while a child may have grown familiar with an emotionally abusive environment and have come to think of it as normal. They may not know that it’s not normal to live in such an environment of emotional neglect and abuse.”
Identification and prevention

The task of identifying cases of emotional maltreatment is correspondingly complex, and often falls to teachers and social workers. This is a major part of the motivation behind Dr Lavi’s work in analysing the effectiveness of identification, prevention and intervention programmes. “I aim to draw attention to this phenomenon of emotional maltreatment, and to make sure that people know to look for it and how to look for it,” she outlines. This is a relatively neglected area in comparison to physical abuse for example, yet Dr Lavi says it can have a serious long-term impact on children. “We aim to understand how big an impact emotional maltreatment has on families, and to make sure that people know the consequences on the child and their development,” she continues. “I’m currently looking at the emotional processes of parents, trying to understand whether there are certain triggers to emotional maltreatment.” This may be affected by the parents’ own childhoods, their experiences growing up, and
Although child maltreatment is a very complicated, multi-factorial social phenomenon, we can still look at it at a very basic psychological process level. If we can change these processes at the micro level, at the individual level, then maybe we can generate a change
their own socio-economic circumstances. Parents may have experienced a similar environment during their own childhood, which may be a factor in their approach to parenting, while other groups may be more likely to emotionally maltreat their children, a topic which was investigated previously. “Researchers looked at whether certain intervention or prevention programmes could prevent the occurrence of emotional maltreatment in certain populations. For example, a population of parents who have had episodes of depression, or people who have lived below the poverty line,” outlines Dr Lavi. Effective intervention programmes can also help to break the cycle of emotional maltreatment down the generations, a topic which has likewise attracted research attention. There are also cases where there are deeper causes behind emotional maltreatment, whether psychological problems or other issues that prevent parents from bonding with their children. Dr Lavi aims to probe deeper in this area, gaining new insights into the root causes of emotional maltreatment. “We’ve seen that parents who emotionally maltreat their children are different with regards to their emotional processes, compared to those parents who don’t,” she says. One of the key questions in this area of research is how the way parents handle their
own emotions affects the likelihood of them then emotionally maltreating their children. “The reason this is important is that child maltreatment is a very complex phenomenon; it has more than just one cause,” explains Dr Lavi. “It has social roots, it has familial roots, it relates to the community system and the prevailing social norms.”
Intervention

A deeper understanding of the basic processes that lead to emotional maltreatment could also help in the design of effective intervention programmes. For example, if a parent has difficulties in processing emotional information, then helping them do that better could have a wider impact. “Although child maltreatment is a very complicated, multi-factorial social phenomenon, we can still look at it at a very basic psychological process level. I am looking at emotional reactivity, emotional regulation and threat perception,” continues Dr Lavi. “If we can change these processes at the micro level, at the individual level, then maybe we can generate a change that will move things in a better direction,” she says. Ultimately, Dr Lavi believes that everybody could benefit from learning to be better parents. “We can always do better and make ourselves more available to our children,” she stresses. “We can always be more in tune with what they need, more nurturing, better able to help children fulfil their potential and be happy contributors to society.”
AIDING THE NEGLECTED
Meta-Analysis of Emotional Maltreatment Prevention and Intervention Programs
The objective of the study is to better understand the emotional processes of family members, and how these processes contribute to the risk and resilience of families. I am asking whether learning how parents and children process emotional information could help us understand whether the family can overcome normative and extraordinary challenges.
• The Haruv Institute Post-Doctoral Fellowship
• The Marie Curie International Outgoing Fellowship, European Union
• University of Haifa Starting Grant
EU contribution: EUR 264 711
• Professor Emily Ozer, University of California, Berkeley
• Professor James Gross, Stanford University
• Professor Lynn Fainsilber Katz, University of Washington
• Professor Miri Cohen, University of Haifa
• Dr. Ruth Berkowitz, University of Haifa
• Professor Adital Ben-Ari, University of Haifa
• Professor Asher Ben-Arieh, Hebrew University
• Dr. Stephanie Romney, Parent Training Institute, San Francisco
• Professor Alicia Lieberman, University of California, San Francisco
Project Coordinator, Dr Iris Lavi The Center for Research and Study of the Family, School of Social Work, University of Haifa Eshkol Tower, 5th floor The University of Haifa 199 Aba Khoushy Ave Mount Carmel 31905, Haifa Israel T: +972 4-824-0812 E: email@example.com W: http://sw2.haifa.ac.il/en/socialworkhome-en-2; http://family.haifa.ac.il/en/; http://irislavi01.wixsite.com/home-page Dr Iris Lavi
Dr Iris Lavi is a faculty member at The Center for Research and Study of the Family, School of Social Work, University of Haifa. Her research interests include various aspects of parenting, child development and positive psychology. She studies processes that lead to the development of resilience in children, parental influence on the wellbeing of children and the effects of stress on children and families.
Better Materials with Meta Materials Metamaterials have novel and unique electromagnetic properties, not found in natural materials; yet some major challenges remain before they can be applied more widely. The PHOTOMETA project aims to promote the development of functional metamaterials, which could hold potential across a range of different applications, as Professor Costas Soukoulis explains The potential of
metamaterials has generated a high level of interest in both the academic and commercial sectors, with researchers seeking to develop new materials with electromagnetic properties tailored for specific practical applications. Unlike natural materials, metamaterials can offer a tailored electromagnetic response at almost any desired frequency, as Prof. Costas Soukoulis, coordinator of the PHOTOMETA project, explains. “The electromagnetic response achievable with metamaterials can be quite unusual and unconventional, for example leading to negative wave refraction or backwards propagation. Such a response is not possible with natural materials,” he says. This leads to new paths for electromagnetic wave control, as all wave properties, such as frequency, polarization and direction of propagation, can be manipulated at high levels of precision. “This offers a means to more precisely control light, especially at frequencies where the response of existing natural materials does not offer many possibilities, for example THz frequencies,” explains Prof. Soukoulis. The possibility to engineer the electromagnetic properties of metamaterials, combined with the fact that those properties are highly dependent on the design of metamaterial building blocks – with many design possibilities available – means metamaterials have great potential in a wide range of applications, including in sensing, monitoring and security and telecommunications. Yet there are some significant challenges to overcome before this wider potential can be realised, which forms a core part of the agenda of the EC-backed PHOTOMETA project. “The project’s overall aim is to demonstrate novel optical metamaterials and systems
with new functionalities, offering great advances in a variety of applications, ranging from quantum information processing to nano-lasers and wave sensors,” outlines Prof. Soukoulis. This work centers around four major points: “We aim to design and fabricate new optical metamaterials and metasurfaces for electromagnetic wave control. We also want to understand and reduce losses in optical metamaterials,” continues Prof. Soukoulis. “Thirdly, we aim to design dynamic metamaterials for tunable devices and THz generation, and also to manipulate and control optical forces using metamaterials.”
Functional metamaterials Researchers are taking quite a broad-based approach to this work, exploring novel designs and constituent materials for the development of different types of functional optical metamaterials. This research is built on a thorough understanding of how metamaterials behave and how their novel properties are realised. “Metamaterials behave as collections of oscillators, which need strong resonances in order to realise their exotic properties. Initially, this was demonstrated in the microwave region and subsequently there was a strong effort to go to optical frequencies,” explains Prof. Soukoulis. Controlling the properties of a metamaterial at higher frequencies is a significant technical challenge however. “In order to move up in frequency, electrons need to oscillate at higher and higher rates; yet this is not always possible, which means that it is very difficult to achieve a strong resonant response beyond a certain frequency, resulting in saturation of the operational frequency,” outlines Prof. Soukoulis. “The second major shortcoming of metallic optical metamaterials is their significant losses, due to the absorption of the metals, which increases with frequency.” This factor alone may weaken the resonances and degrade the operation of metamaterials, representing a significant challenge in terms of the project’s wider goals. Prof. Soukoulis and his colleagues are investigating different approaches,
aiming to regain control of the properties of a metamaterial in the optical range. “To solve the frequency saturation problem, we have introduced the dark mode concept. These non-radiating modes, if present in the metamaterial building blocks, can provide very strong resonances at very high frequencies, giving us the desired response. This concept can be supported in configurations with very few metallic parts, leading to reduced losses as well,” he says. An alternative approach to overcoming losses is to use the dark mode concept in an Electromagnetically Induced Transparency (EIT) scheme, while there are also other options. “We have incorporated gain in the metallic metamaterials, which can lead to loss compensation and restore their strong resonances. We are also exploring the potential of metamaterials made entirely by dielectrics,” continues Prof. Soukoulis.
Figure Left: Like natural materials, which acquire their properties from the properties of their individual atoms, metamaterials acquire their response from the response of their meta-atoms, like the split-ring resonators shown in the figure. A very important difference is that while atoms, and thus their properties, are given by nature, the meta-atoms are man-made and can be engineered at will, tailoring the metamaterial response.
Figure Right: A metasurface composed of nanoscale metallic split ring resonators (SRRs) – white color in the background image – if excited by a properly polarized wave with frequency equal to that of the SRR magnetic resonance, can give strong and broadband THz radiation emission, via optical rectification, which is induced by the non-linear response of the metallic SRRs. Work published in Nature Communications (Nature Communications 5, 3055 (2014); doi:10.1038/ncomms4055). Figure Left: A novel and uncommon response that can be obtained with metamaterials is the so-called negative refraction, as shown in the figure, where a wave is negatively refracted at the two interfaces of a negative index metamaterial (NIM) slab placed in air. The NIM slab can also be impedance matched with air (i.e. if both permittivity and permeability are equal to -1), resulting in no reflected waves at its interfaces, and thus total transmission.
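The impedance-matching statement in the caption can be made explicit, using the standard definitions of the (negative-branch) refractive index and the wave impedance:

```latex
% Negative-index branch of the refractive index and the wave impedance:
n = -\sqrt{\varepsilon_r \mu_r}\,, \qquad Z = Z_0\sqrt{\mu_r/\varepsilon_r}
% For \varepsilon_r = \mu_r = -1:  n = -1 and Z = Z_0, so the normal-incidence
% Fresnel reflection coefficient vanishes:
r = \frac{Z - Z_0}{Z + Z_0} = 0
% (no reflection, total transmission), while Snell's law
n_{\mathrm{air}}\sin\theta_i = n\sin\theta_t \;\Rightarrow\; \theta_t = -\theta_i
% places the refracted beam on the opposite side of the normal: negative refraction.
```

This is textbook electromagnetics rather than project-specific material, but it shows why the slab in the figure transmits fully while bending the wave the “wrong” way.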
PHOTOMETA Photonic Metamaterials: From Basic Research to Applications Project Objectives
The PHOTOMETA project focuses on the theoretical study of novel artificial materials, characterized here as metamaterials (MMs) [such as photonic crystals (PCs), negative index materials (NIMs), and plasmonics], which enable the realization of innovative electromagnetic (EM) properties unattainable in naturally existing materials.
MATERIALS, Meta Materials, Photonics, ERC-AG-PE3 - ERC Advanced Grant Condensed matter physics. EU contribution: EUR 2 100 000.
• Professor Eleutherios Economou • Professor Maria Kafesaki • Dr George Kenanakis
Contact Details Figure Right: In our proposed novel dielectric laser design, the gain medium (green-color in top panel), which is bound by two silver stripes (grey-color), lases into the dark mode (blue and red peaks in the middle of the bottom panel) and the energy is therefore stored indefinitely between the two silver strips. When a scattering element is included (green strip in top-panel), the stored energy is able to radiate away in the form of a wave (blue and red crests). Here only one unit cell is shown, which is periodically repeated to form the actual radiating metasurface. Work published in Physical Review Letters (Phys. Rev. Lett. 118, 073901 (2017); https://doi.org/10.1103/PhysRevLett.118.073901).
Wider potential The project’s work is largely exploratory in nature at this stage, with Prof. Soukoulis and his colleagues pursuing fundamental research into the properties of metamaterials. However, Prof. Soukoulis is fully aware of their wider potential. “As we learn more about the properties of the metamaterials we study and we realize the possibilities offered by them, we become more and more excited with the subject. There are many open research directions where the PHOTOMETA research could be exploited. For example, quantum metamaterials, non-linear metamaterials and optical metasurfaces for full wave-front control,” he outlines. Metamaterials offer a unique platform for both basic and applied research, and a sound basis for further investigation. “Although the research in this project is mainly exploratory, we always have specific applications in mind,” continues Prof. Soukoulis. “For example, sensing is one of the areas where metamaterials can offer revolutionary advancements, from microwaves, for the evaluation of dielectric materials for example, to the visible region.” There are also many other potential applications of metamaterials, including shielding of high-frequency electronic devices, like cell phones, laptops, aircraft electronics and medical devices. The
performance of these devices can be influenced by the presence of neighbouring electronic instruments, which Prof. Soukoulis says can be a major problem. “This electromagnetic interference (EMI) can cause malfunction of sensitive devices, primarily medical equipment, and as a result, they can become unsafe, or even harmful,” he explains. Researchers are looking at ways to shield these devices from this interference, and so enable them to continue functioning effectively. “We have investigated experimentally the electromagnetic properties of graphene-based paint-like layers, large-scale polymeric composite films, 3D printed polymeric samples, and others, as possible candidates for EMI shielding,” says Prof. Soukoulis. The project’s research has already opened up new paths of investigation, leading to a further extension of the initiative, so that Prof. Soukoulis and his colleagues can devote more time and energy to investigating this area. For his part, Prof. Soukoulis is convinced that metamaterials will be used widely in the future, in large part due to their unique properties, which encourages him to continue working in this area. “We definitely plan to stay in this area, focusing mainly on basic research, but basic research that is associated with great potential in terms of applications,” he stresses.
Project Coordinator, Professor Costas Soukoulis FOUNDATION FOR RESEARCH AND TECHNOLOGY HELLAS, Greece N PLASTIRA STR 100 70013 HERAKLION Greece T: +30 2810 391380 E: firstname.lastname@example.org W: https://cordis.europa.eu/project/rcn/107047_en.html
Professor Costas Soukoulis
Costas Soukoulis is a Senior Scientist in the Ames Laboratory and a Distinguished Professor of Physics at Iowa State University. He has been instrumental in creating the revolutionary fields of photonic crystals (PCs) and left-handed materials (LHMs), extending the realm of electromagnetism and opening exciting new applications.
For more information, please visit: www.euresearcher.com
Images courtesy of Adv. Funct. Mater., 25, 1955-1971, 2015.
A deeper picture of blue pixels
Organic electronic devices could have a major role to play in addressing contemporary social challenges like reducing CO2 emissions, yet some technical challenges remain before they can be more widely applied. We spoke to Dr Denis Andrienko about the MOSTOPHOS project’s work in investigating the issues which limit the stability of blue phosphorescent materials A relatively small
amount of material is required to produce AMOLED electronic displays, which are made of thin organic layers, making them an attractive option in certain electronic devices, including mobile phones, laptops and televisions. They also have some other beneficial attributes, as Dr Denis Andrienko of the MOSTOPHOS project, an EC-funded initiative bringing together both academic and commercial partners, explains. “AMOLED displays can be tuned fairly easily, by tuning the chemical structure of the emitter, or the entire pixel. Also, the power consumption, which is very critical for the display, is fairly low for an OLED. As a result, they don’t heat up much,” he outlines. However, there are also some limitations to consider. “If you look at a typical OLED display on the market, whether it’s in a TV or a mobile phone, you will see that there are three different pixels – red, green and blue. The green and red pixels, because of their lower excitation energy, are already phosphorescent. They harvest both triplet and singlet electronic states, which boosts their efficiency,” says Dr Andrienko.
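The efficiency gain from harvesting both exciton types follows from standard spin statistics (textbook numbers, not figures specific to MOSTOPHOS): electrical excitation creates singlet and triplet excitons in a roughly 1:3 ratio, so a purely fluorescent emitter is capped at 25% internal quantum efficiency, while a phosphorescent emitter can in principle reach 100%.

```python
# Exciton spin statistics in an OLED (standard 1:3 singlet:triplet ratio).
singlets, triplets = 1, 3
fluorescent_limit = singlets / (singlets + triplets)  # singlets only -> 0.25
phosphorescent_limit = 1.0                            # harvests both states
print(fluorescent_limit)                              # 0.25
print(phosphorescent_limit / fluorescent_limit)       # 4.0x theoretical gain
```

This factor-of-four ceiling is why phosphorescent red and green pixels are already standard, and why a stable phosphorescent blue is such a sought-after target.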
Blue pixels The same efficiency gains could potentially be achieved with blue pixels, yet in practice this is not currently the case, as blue light has a significantly higher excitation energy and hence there is a higher chance that an organic material will be damaged by the excitation. One of the key challenges in the organic electronics field is providing a high efficiency of blue pixels in a display, while at the same time ensuring that the lifetime is long enough to be used in a device, a topic central to the work of the MOSTOPHOS project. “That basically is the key point. The project is about figuring out what limits the stability of blue phosphorescent materials in OLEDs and providing chemical design rules for more stable deep blue emitters,” says Dr Andrienko. This work centres on developing a simulation framework, which will allow Dr Andrienko and his colleagues to look deeper into these stability-limiting mechanisms in phosphorescent OLEDs. “We’re working on a simulation platform,
which can in principle be used not only for blue phosphorescent OLEDs, but also to simulate other organic semiconducting devices,” he continues. “We aim to provide feedback for organic chemists, helping them to synthesise structures with properties relevant to devices.” A good example could be a situation where a manufacturer wants to display a sky-blue colour in a device, or another that maybe wants their device to have a lifetime of 40,000 hours. Determining a material’s suitability in these terms can be a complex and expensive process, so Dr Andrienko and his colleagues aim to develop an efficient method of effectively pre-screening them. “The idea is basically to have a database of compounds, and to pre-screen those compounds before synthesising them. Synthesising and characterising materials for OLED applications is very time-consuming and expensive, so if we can reduce that cost using a computer simulation, then that would be a real asset to a company working on the design of this material,” he
An example of a complex blue phosphorescent OLED design targeted by the consortium.
explains. There are a number of parameters to consider in characterising a material. “There are microscopic properties such as energetic disorder, electronic coupling elements, and reorganization energies,” outlines Dr Andrienko. “We are interested in relating them to exciton diffusion, exciton tolerance, and triplet-triplet and triplet-polaron interactions.” This also provides the foundation for researchers to simulate the entire device, which in the long term could help scientists to improve reliability and efficiency, and also to extend device lifetimes. While certain simulation techniques are effective at certain time and length scales, on their own they are not sufficient to simulate the entire device. “For example, quantum mechanics can deal with wave functions and atoms up to length scales of let’s say 10 nanometres. However, a device may be micrometres thick, so you
cannot use just that single technique to obtain the properties of the entire device,” points out Dr Andrienko. A multi-scale approach is therefore required in order to build a more complete picture. “So we use more techniques to complete the description, such as the
master equation approach and continuous drift-diffusion equations. They cover different scales,” explains Dr Andrienko. “We are trying to effectively transfer model parameters from a fine-scale resolution to a larger-scale resolution, to simulate the device.”
Meeting of the consortium members in Dresden on 1st December 2017.
The project’s research in this area relies on a deep knowledge of the fundamental processes occurring in OLEDs. For example, two triplet states annihilate at a certain rate; these rates can be obtained through quantum mechanical calculations. “These rates are then used in the so-called master equation, where there are only rates and events - the overall system is simulated on the level of these rates and events. Once this has been done, we can then transfer the information we have obtained from that scale to the drift-diffusion equations. Then, at that scale we can add light outcoupling, and simulate the entire device,” explains Dr Andrienko. This multi-scale approach to modelling OLEDs will enable researchers to analyse the underlying mechanisms behind the particular characteristics of a device. “We will be able to
pinpoint the factors behind the degradation, for example an imbalance in electron and hole mobilities,” says Dr Andrienko.
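The “rates and events” level Dr Andrienko describes can be sketched as a kinetic Monte Carlo (Gillespie-type) simulation. The sketch below is purely illustrative: the lattice, the hop and decay rates, and the event set are invented placeholders, not the project’s model or its parameters.

```python
import math
import random

# Toy "rates and events" (kinetic Monte Carlo) sketch: triplet excitons hop
# on a small lattice, decay, and annihilate when two meet. All rates are
# illustrative placeholders, not MOSTOPHOS values.
HOP_RATE = 1e9     # s^-1, assumed hopping rate
DECAY_RATE = 1e6   # s^-1, assumed decay rate
N_SITES = 50

def simulate(n_excitons=10, t_max=1e-5, seed=1):
    rng = random.Random(seed)
    sites = rng.sample(range(N_SITES), n_excitons)  # occupied lattice sites
    t, annihilations = 0.0, 0
    while t < t_max and sites:
        total_rate = len(sites) * (HOP_RATE + DECAY_RATE)
        # Gillespie step: exponentially distributed waiting time to next event
        t += -math.log(rng.random()) / total_rate
        i = rng.randrange(len(sites))
        if rng.random() < DECAY_RATE / (HOP_RATE + DECAY_RATE):
            sites.pop(i)                       # radiative/non-radiative decay
        else:
            target = rng.randrange(N_SITES)
            if target == sites[i]:
                continue                       # null move, nothing changes
            if target in sites:                # site holds another triplet:
                sites.pop(i)                   # triplet-triplet annihilation
                annihilations += 1
            else:
                sites[i] = target              # hop to an empty site
    return annihilations

print(simulate())
```

In the real multi-scale workflow the event rates would come from quantum-mechanical calculations, and the statistics gathered at this level feed the drift-diffusion description of the full device.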
Efficiency roll-off The wider goal in this research is to understand the degradation process of OLEDs and the so-called efficiency roll-off, which can essentially be described as the reduction in efficiency of an OLED as the voltage is increased. This is an issue which currently limits the application of OLEDs. “The lower efficiency of this blue pixel means that either a higher current is required, or you need to have a larger area of blue pixels in the OLED. That has a negative impact on the battery lifetime,” explains Dr Andrienko. The software suite that has been developed could help companies identify the right materials for
MOSTOPHOS Modeling Stability of Organic Phosphorescent Light Emitting Diodes In-silico OLED design requires addressing all scales, from Angstroms (quantum mechanics) to micrometers (drift-diffusion and master equations).
The objectives of this project are to integrate various levels of theoretical materials characterization into a single software package, and to streamline research workflows so that the calculations are truly usable by OLED materials developers, complementing experimental measurements.
H2020-EU.220.127.116.11. - Cross-cutting and enabling materials technologies.
• Max Planck Institute for Polymer Research (Coordinator), Germany • Consiglio Nazionale delle Ricerche, Italy • Universidad del País Vasco, Donostia, Spain • University Rome “Tor Vergata”, Italy • COSMOLogic GmbH, Germany • Technische Universiteit Eindhoven, Netherlands • Technische Universität Dresden, Germany • FLUXiM AG, Switzerland • CYNORA GmbH, Germany
OLED applications. “You need to pre-screen a material before you synthesize it. That’s what this software could potentially help in doing,” outlines Dr Andrienko. “A second important point is the optimisation of the efficiency of an entire OLED, which depends on the orientation of emitter molecules, light outcoupling, and other issues.” This research is a core element of the overall agenda within MOSTOPHOS, yet Dr Andrienko says there are other strands of research in the project. Alongside investigating blue phosphorescent emitter lifetimes, researchers are also developing a package to obtain OLED characteristics from scratch, from the chemical structure, while there are also other avenues of exploration. “We are also looking at other concepts. CYNORA, a company that is joining the MOSTOPHOS project consortium, is particularly interested in a different way of harvesting triplet states in an OLED device, namely thermally
activated delayed fluorescence, or TADF,” says Dr Andrienko. This type of work could hold the key to improving the efficiency and performance of OLEDs, which holds important implications for several areas of industry; with this in mind, Dr Andrienko is keen to maintain strong links with the commercial sector, with a view to pursuing further research in future. “Discussions with company representatives are currently ongoing,” he says. The project itself has a clearly defined funding term, but Dr Andrienko says the conclusion of the project will not mark the end of this research. With global competition in the OLED field growing increasingly intense, continued research is essential if European companies are to maintain a strong presence in the market. “This project has helped us to develop a set of clearly defined goals, and to get funding for specific target applications. But the overall research will not stop with the end of the project,” stresses Dr Andrienko.
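The efficiency roll-off discussed above is often summarised with a simple analytic model. A commonly quoted form for triplet-triplet-annihilation-limited roll-off (a standard textbook model, not a MOSTOPHOS result) expresses the remaining fraction of the low-current efficiency in terms of a critical current density J0 at which the efficiency has halved:

```python
import math

def eqe_ratio(j, j0):
    """EQE(J)/EQE(0) for TTA-limited roll-off; j0 is the half-efficiency
    current density. Standard illustrative model, not project output."""
    return (j0 / (4.0 * j)) * (math.sqrt(1.0 + 8.0 * j / j0) - 1.0)

print(eqe_ratio(1.0, 1.0))    # exactly 0.5 at J = J0
print(eqe_ratio(0.01, 1.0))   # close to 1 at low drive current
```

A less efficient blue pixel must be driven harder (or made larger), pushing J up and the retained efficiency down, which is the battery-lifetime penalty Dr Andrienko describes.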
Project Coordinator, Dr Denis Andrienko Max Planck Institute for Polymer Research Ackermannweg 10 55128 Mainz Germany T: +49 (0)6131 379147 E: email@example.com W: http://www.mostophos-project.eu/wp/ W: www.mpip-mainz.mpg.de/~andrienk/ P. Kordt, J. M. van der Holst, M. Al Helwi, W. Kowalsky, F. May, A. Badinski, C. Lennartz, D. Andrienko, Modeling of organic light emitting diodes: from molecular to device properties, Adv. Funct. Mater., 25, 1955-1971, 2015 P. Kordt, P. Bobbert, R. Coehoorn, F. May, C. Lennartz, D. Andrienko, Simulations of organic light emitting diodes, In “Handbook of Optoelectronic Device Modeling and Simulation”, Vol. 1, 473-522, 2017 https://www.taylorfrancis.com/books/e/9781498749473
Dr Denis Andrienko
Dr Denis Andrienko obtained his PhD in optics/structural transitions in liquid crystals from the Institute of Physics, Ukraine in 2000. In 1999 he joined the group of Prof. M. P. Allen at Bristol, UK, where he obtained a second PhD on computer simulations of complex fluids (2001). He then moved to the Max Planck Institute for Polymer Research as a Humboldt Fellow, doing theoretical studies of the slippage effect and nematic colloids. In 2004 he joined the group of Prof. K. Kremer as a postdoctoral fellow working on multiscale simulations of polycarbonate melts. Since 2005 he has been a group leader working on the development of multiscale simulation techniques for organic semiconductors.
Taking innovative action on industrial organic synthesis Toxic materials are used in most industrial chemical processes, but researchers are looking to develop greener alternatives using biocatalysis. Researchers in the CarbaZymes project are drawing inspiration from the natural world to develop more sustainable processes, work which will have a positive impact on both society and industry, as Professor Wolf-Dieter Fessner explains
Engineering novel enzymes using advanced bioinformatics tools and screening for optimized catalyst properties.
The process of
assembling simple fragments to build large, complex products by connecting carbon-carbon bonds (C–C bonds) is an essential technology at the heart of industrial organic synthesis. The current methods for creating C–C bonds are heavily reliant on metal catalysis, particularly involving rare noble metals, such as palladium or rhodium complexes. Ores for these materials can only be mined in a few locations across the world, an activity resulting in very significant pollution. “These mining sites are among the most polluted places on earth, causing global environmental damage via release of millions of tonnes of toxic waste into the air every year,” explains Professor Wolf-Dieter Fessner. “Many traditional chemical operations not only involve the use of such toxic materials, but also are very energy-intensive, as extremely high temperatures are generally required. It’s sometimes not only necessary to operate under such harsh conditions, but also to use activated reagents - the manufacture of these compounds themselves results in the accumulation of significant amounts of environmentally unfriendly waste materials.” As the Principal Investigator of the CarbaZymes project, Professor Fessner is exploring an alternative approach to forming C–C bonds by using enzymes from nature as efficient eco-friendly catalysts. The work of Professor Fessner and his colleagues in the project is largely motivated by concerns
around the sustainability of current methods. “A general greening of industrial processes would be very much in the interests of European citizens, and those beyond of course. There’s a strong trend in the chemical and pharmaceutical industries nowadays towards developing more sustainable biocatalytic processes,” he stresses. This is a topic that lies at the heart of the CarbaZymes project’s agenda. Whereas with traditional chemical methods product selectivity is difficult to achieve, biochemical catalysts for C–C bond formation synthesize the same products chemo-, regio- and stereo-selectively with exquisite precision, leading to the formation of chiral molecules containing up to two new adjacent chiral centres of known configuration. Enzymes can also achieve unparalleled rate acceleration under mild temperatures and pressures in near-neutral aqueous reaction media,
Sensitive high-throughput screening of enzyme arrays to rapidly identify novel enzyme activities.
thereby consuming much less energy while eliminating the use of hazardous solvents and reagents, with concomitant minimisation of waste and cost. “The ultimate ideal is to utilise renewable resources as starting materials. For example, taking compounds from nature, processing them with the aid of enzymes under sustainable conditions, and producing materials that can be recycled,” says Professor Fessner. A number of hurdles remain, however, before such a circular bioeconomy can become a reality. The research required is multi-disciplinary in nature, with Professor Fessner and his colleagues aiming to develop an unprecedented platform of novel C–C bond forming enzymes, in particular a specific class called lyases. “Specifically we’re looking at aldol reactions, which are actually well known to the organic chemist. Man’s efforts, however, pale into insignificance when compared to those of nature. Over millions of years, nature has evolved lyases for the efficient synthesis of a huge variety of different target compounds, many of which are totally unknown,” he continues. “However, most such enzymes exhibit strict specificity for highly functionalised natural compounds, and are thus unlikely to be of immediate interest to industry, where low-functionalised substrates are the norm.” “It was actually with this challenge in mind that the CarbaZymes consortium
was formed. The consortium now brings together capabilities enabling not only the discovery of suitable natural enzyme starting points, but their immediate engineering into demonstrated effective industrial biocatalysts for highly desirable aldol reactions tolerant to low-functionalised substrates.”
Enzyme catalysis The CarbaZymes project is responding to the heightened level of industrial demand by investigating the use of biocatalytic synthesis as a basis for the development of widely needed chemicals and active pharmaceutical ingredients. In this work researchers are drawing inspiration from the natural world, aiming to develop robust enzymes for C–C bond forming reactions. “We work with an SME that has a lot of expertise in utilising meta-genomic information. They have the capability to produce a large variety of enzymes via gene synthesis,” explains Professor Fessner. A second element of the project’s work centres around utilising and modifying known enzymes. “We aim to broaden their catalytic capabilities. This means broadening their substrate scope and broadening their window of tolerance for high concentrations of substrates, to achieve high concentrations of products that can be more easily isolated,” outlines Professor Fessner. “We are developing reactions that are likely not found anywhere in nature. These reactions are highly relevant as a more sustainable approach to major industrial processes.” Many of the enzymes commonly found in nature can in principle be applied in carboligation reactions, yet only a few have been investigated to a level where they can be utilised for industrial processes. Widening the knowledge base on these enzymes will bring them significantly closer to - or even into - practical application. “We aim to build on existing knowledge, to develop a platform with many robust enzymes and variants that could be applied for different purposes, under different conditions and for different new reactions useful in industry,” says Professor Fessner. This research is designed to respond effectively to market needs, and help boost the competitiveness of the European chemical and pharmaceutical industries. “For example, we are looking at monomers for a polymer material, for chemicals that are utilised in bulk quantities across the world,” says Professor Fessner. “It’s clearly very important to establish more sustainable pathways to make such monomer materials as soon as we can. Even preventing a small fraction of pollution stemming from their production would still have a huge impact, as an enormous amount of these materials are produced chemically every year.” The wider, longer-term goal is to utilise sustainable resources as starting materials for next-generation synthetic biological processes, reducing our dependence on fossil fuels, which in the long run will help protect the environment and boost the competitiveness of European industry. While this is very much a long-term goal, Professor Fessner says that important progress has already been made in the course of the project. “Alongside
identifying suitable enzymes we’ve also been working to develop them to a stage where they can be directly applied in industrial settings,” he outlines. So far two patents have been filed, and now researchers are looking towards assessing the effectiveness of the reactions that have been developed. “We intend to scale-up those reactions that we have developed at lab scale, to the pilot level and maybe even larger scales. We will test the validity of our approach, and the suitability of these enzymes under applied conditions,” outlines Professor Fessner. “This work will be done in leading industrial labs. Ultimately, we hope CarbaZymes will represent a major stepping stone in paving the way towards a greener future!”
CARBAZYMES Sustainable industrial processes based on a C-C bond-forming enzyme platform Project Objectives
C-C bond forming reactions are at the heart of industrial organic synthesis, but remain largely unexplored due to the lack of broad biocatalytic reaction platforms. CARBAZYMES addresses this challenge by promoting innovation in the field of biocatalytic C-C bond formation at large scale, to strengthen the global competitiveness of the European chemical and pharmaceutical industry. Sustainable processes resulting from this research will have an environmental impact by replacing energy and resource intensive traditional processes.
Funded by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 635595
Professor Wolf-Dieter Fessner, Ph.D
TU Darmstadt
Dept of Organic Chemistry & Biochemistry
Alarich-Weiss-Str. 4
D-64287 Darmstadt, Germany
T: +49 6151 16-23640
F: +49 6151 16-23645
E: firstname.lastname@example.org
W: http://carbazymes.com
W: https://www.youtube.com/watch?v=iNlfyDa093g&list=PLvpwIjZTs-LjYqeOiYYqRWlegdihyjGgu
Wolf-Dieter Fessner, Ph.D
Wolf-Dieter Fessner obtained his Ph.D in 1986 and worked as a postdoctoral fellow with George Whitesides (Harvard University) and George Olah (USC, L.A.). Before assuming the chair in Organic Chemistry at the Technische Universität Darmstadt in 1998, he was professor at RWTH Aachen. His research interests are in the area of biocatalysis for applications in organic synthesis, with particular emphasis on the discovery and development of novel enzymes for stereoselective carbon-carbon bond forming reactions and the synthesis of complex oligosaccharides.
Analysis of structure-function principles of complex aldolase structures.
A number of different methods are available to monitor and probe the dynamics of chemical reactions; now researchers are developing a novel approach using ultrafast EUV pulses. This approach could help scientists gain new insights into structural rearrangements and build a deeper understanding of how chemical bonds are made and broken, as Professor Daniel Strasser explains.

The development of
ultrafast pulse techniques allows researchers today to monitor chemical dynamics within a molecule, which holds important implications for our understanding of the underlying quantum mechanisms behind chemical reactions. Based at the Hebrew University of Jerusalem, Professor Daniel Strasser and his colleagues are developing a novel approach for time-resolved imaging of structural dynamics at an unprecedented level of detail. “We are trying to use a new type of radiation source. In the high-order harmonic generation (HHG) source, amplified femtosecond laser pulses in the near-infrared (IR) are focused onto an atomic gas target, producing ultrafast bursts of EUV light,” he explains. This approach generates pulses at a high photon energy as well as a very high bandwidth, opening up new possibilities in research. “You can open up a new window beyond the femtosecond (10⁻¹⁵ of a second) time scale, going into the attosecond (10⁻¹⁸) regime, where we could look not just at the dynamics of atomic nuclei moving inside a molecule, but even at the dynamics of the electrons which form the chemical bond,” says Professor Strasser.
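The quoted timescales hint at why high bandwidth matters: by the energy-time uncertainty relation, a shorter pulse requires a broader spread of photon energies. The following is a rough order-of-magnitude sketch; the Δt ≈ ħ/ΔE estimate and the example bandwidths are illustrative textbook values, not figures from the project.

```python
# Order-of-magnitude link between pulse bandwidth and shortest pulse duration,
# using the simple energy-time uncertainty estimate dt ~ hbar / dE.

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def min_pulse_duration_s(bandwidth_ev):
    """Roughly the shortest transform-limited pulse a bandwidth supports."""
    return HBAR_EV_S / bandwidth_ev

# A narrowband (0.01 eV) laser line supports only ~66 fs pulses...
print(min_pulse_duration_s(0.01))   # ~6.6e-14 s
# ...while a broadband (10 eV) EUV burst reaches the attosecond regime:
print(min_pulse_duration_s(10.0))   # ~6.6e-17 s, i.e. tens of attoseconds
```

This is why broadband HHG sources, unlike narrowband lasers, can in principle resolve electron dynamics.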
Reaction dynamics

This research is built on a recognition of the difficulty of calculating the properties of complex chemical systems. While quantum mechanics can be applied to describe how atoms, molecules and electrons behave, calculating how a system will behave on the quantum level remains a very challenging task. “It is a difficult computational task to perform exact calculations, so to speed up computations we use certain approximations. In order to know which approximations are applicable, we first need to understand the
underlying chemical mechanism,” outlines Professor Strasser. While state-of-the-art quantum chemistry calculations have been successfully used to calculate and predict the molecular ground state, and provide detailed insights into molecular structure, Professor Strasser says there are limits to our existing knowledge. “Currently, ab-initio theoretical descriptions of chemical reactions have limited predictive power to describe excited-state dynamics in which one chemical bond is broken and another is formed. This is essentially what we need to better understand in order to improve the synthesis of new materials and new chemical compounds,” he explains. “Presently, this know-how is still largely the result of trial-and-error experiments, designed based on the professional intuition of very experienced individuals.” A general method for monitoring reaction dynamics over very short timescales would give researchers more information and lead to deeper insights into chemical reaction mechanisms, a goal which lies at the core of Professor Strasser’s work. Researchers in the project aim to develop a new experimental instrument, including an HHG source for ultrafast EUV pulses, so that scientists can visualise chemical reactions during the making and breaking of chemical bonds. “Our experiments can then guide theorists towards making improved approximations and developing better calculation codes that can be tested against the experimental data. This detailed dynamical understanding of how one bond breaks and another is formed could then potentially guide the synthesis of new materials,” explains Professor Strasser. For example: “Imagine that you discover that a molecule in a particular shape would be a
great drug for treating cancer,” continues Professor Strasser. “State-of-the-art quantum chemical calculations could be performed to predict the stability and structure of the potential new drug. However, the ability to synthesize the new molecule would still rely heavily on intuition and a lengthy trial-and-error approach.” While some quantum chemistry codes have been developed to treat dynamics, even the most sophisticated computers struggle when asked to predict a chemical reaction in which a chemical bond breaks, an issue Professor Strasser and his colleagues are working to address. “My idea is that if we can experimentally visualise bond breaking, and test against state-of-the-art calculation methods, at some point we’ll be able to achieve the same understanding of reaction dynamics as we currently have for ground-state structure,” he outlines. Many existing time-resolved techniques are spectroscopic methods which rely on prior theoretical understanding of the molecular system, but Professor Strasser and his colleagues are developing a different approach. “We are trying, in principle, to minimise the need to rely on prior knowledge. One way to directly visualise the structure of small molecules is called Coulomb explosion imaging,” he says.

Looking at new exciting photo-chemical processes such as multiple-detachment of molecular anions.
Photographs by Liron Vazina.
Watching how chemical bonds break
Coulomb explosion imaging

This method takes advantage of the Coulombic repulsion between nuclei that occurs when the electrons responsible for chemical bonding are removed. The positively charged nuclei are left in a repulsive state, which leads to a molecular explosion, offering scientists a window into the structure of the molecule. “The Coulomb force is inversely proportional to the square of the distance. If charges are close together, they will release more energy, while if charges are further apart, they will release less energy. Using our coincidence fragment imaging method, in which we measure the released energy and the correlations between different fragments of a single molecule, we get a direct glimpse into the structure of the molecule at the moment the electrons were removed,” explains Professor Strasser. Using this method at different stages during a chemical
reaction will allow researchers to probe the time-evolving molecular structure. “Using the ultrafast bursts of high energy photons produced by HHG will allow us to remove multiple electrons at a controlled time and initiate time-resolved Coulomb explosion,” continues Professor Strasser. “We basically want to produce a movie of the chemical reaction dynamics evolving in real time.” A good example of such a scenario is double proton transfer, which is known to be an important DNA damage mechanism. While the importance of double proton transfer has been known about for many years, debate continues about how it happens, a topic which Professor Strasser says could be investigated in greater depth using the new time-resolved imaging method. “A DNA base pair, which is connected by hydrogen bonds, can be excited into a proton transfer state, leading to a sudden change of structure, called tautomerization, where each part gives its proton to the other,” he says. Earlier time-resolved experiments left certain questions around this unanswered, in particular whether the protons are exchanged in concert or sequentially. “Some researchers believe it is a sequential proton transfer, meaning that a first proton is transferred on a time scale of a few hundred femtoseconds, and only a few picoseconds later is the second proton transferred,” explains Professor Strasser. “There’s still some disagreement over this, with different theories to support different hypotheses.” This is a topic that Professor Strasser plans to revisit once the imaging method has been refined further, while it will also be tested on other simple molecules, such as methanol and methyl iodide. A large amount of work has already been done on methanol, yet Professor Strasser says the project team has opened up interesting new questions. “For example, when you remove two electrons from the methanol system, you can drive dynamics in which a proton migrates from the methyl to the oxygen, forming a water H2O+ cation observed in coincidence with a CH2+. In other events we observe breaking of all three C-H bonds of a single methanol molecule and the formation of two new bonds producing a stable H3+ fragment,” he outlines. Once the method has been tested on these molecules it will then be applied to less well-known systems, in which researchers don’t know what is happening in certain reactions, while Professor Strasser also plans to continue his research into the development of new methods. “My main interest is in performing better experiments,” he stresses. “We’re always expanding our toolbox of methods for looking at new exciting photo-chemical processes, such as multiple-detachment of molecular anions. My main goal is to understand how chemical bonds break and how they are made.”
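The inverse relationship between charge separation and released energy that underpins Coulomb explosion imaging can be made concrete with a back-of-the-envelope calculation. The sketch below uses the textbook Coulomb energy E = e²/(4πε₀r) for two singly charged point fragments; the constant and the example distances are generic values for illustration, not measurements from this work.

```python
# Back-of-the-envelope Coulomb explosion estimate: two point charges of +1 e
# separated by r release E = e^2/(4*pi*eps0*r) when they repel to infinity.
# In convenient units, E[eV] ~ 14.40 / r[Angstrom].

COULOMB_CONST_EV_ANGSTROM = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom

def ker_from_distance(r_angstrom, q1=1, q2=1):
    """Kinetic energy release (eV) for charges q1, q2 (units of e)
    starting r_angstrom apart."""
    return COULOMB_CONST_EV_ANGSTROM * q1 * q2 / r_angstrom

def distance_from_ker(ker_ev, q1=1, q2=1):
    """Invert the relation: infer the initial separation (Angstrom)
    from a measured kinetic energy release (eV)."""
    return COULOMB_CONST_EV_ANGSTROM * q1 * q2 / ker_ev

# Two singly charged fragments starting at a typical C-O bond length:
print(ker_from_distance(1.43))   # ~10 eV
# A smaller measured release implies the charges started further apart:
print(distance_from_ker(5.0))    # ~2.9 Angstrom
```

Measuring the released energy in coincidence for each fragment pair is what lets the method map energies back to a molecular geometry at the instant of ionisation.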
ULTRAFASTEUVPROBE
Ultrafast EUV probe for Molecular Reaction Dynamics

Project Objectives
Using emerging table-top high-order harmonic generation technology, we are developing and validating a novel approach for time-resolved imaging of structural dynamics. Laser-initiated ultrafast structural rearrangement and fragmentation dynamics are probed using single-photon Coulomb explosion imaging (CEI) with ultrafast extreme UV (EUV) pulses.
ERC-SG-PE4 - ERC Starting Grant - Physical and Analytical Chemical sciences. EU contribution: EUR 1 499 000
Professor Roi Baer, Hebrew University of Jerusalem
Principal Investigator
Professor Daniel Strasser
Institute of Chemistry, The Hebrew University of Jerusalem
Building: Los Angeles, Room 38
Givat Ram Campus, 91904 Jerusalem, Israel
T: +972-(0)2-6585466
E: email@example.com
W: http://chem.ch.huji.ac.il/~strasser/
W: http://cordis.europa.eu/project/rcn/105071_en.html
1. Luzon I., Jagtap K., Livshits E., Lioubashevski O., Baer R., Strasser D. (2017) Single-photon Coulomb explosion of methanol using broad bandwidth ultrafast EUV pulses, Phys. Chem. Chem. Phys. 19, 13488 (PCCP cover)
2. Kandhasamy D.M., Albeck Y., Jagtap K., Strasser D. (2015) 3D Coincidence imaging disentangles intense field double detachment of SF6−, J. Phys. Chem. A 119, 8076
3. Shahi A., Albeck Y., Strasser D. (2017) Intense-Field Multiple-Detachment of F2−: Competition with Photodissociation, J. Phys. Chem. A 121, 3037
Professor Daniel Strasser
Professor Daniel Strasser works in the Institute of Chemistry at the Hebrew University of Jerusalem. The main focus of his research is on using intense femtosecond laser pulses to excite and probe ultrafast molecular dynamics.
The Link Between Climate Change and Extreme Weather Events

Recently, we’ve seen a high frequency of devastating natural disasters: Hurricanes Irma and Maria, the monsoon floods of Bangladesh, the Californian wildfires and mudslides, and the so-called ‘Bomb Cyclone’ over the east coast of the USA, which produced record low temperatures in the region. Whilst it is true that hurricane seasons are normal and floods and fire are part of nature’s cycle, it is also feasible that climate change is fuelling more intense weather patterns and plays a part in triggering weather-related disasters. It’s time to look at the evidence. By Richard Forsyth
Even for those who believe in it and understand its significance, climate change is sometimes misunderstood as a challenge we need to control at some point in the future. The truth is that climate change is happening – it is a process we are experiencing today. It stands to reason we should notice some changes in the weather. The Intergovernmental Panel on Climate Change (IPCC), comprising the world’s most well-regarded experts in environmental fields, released a press release in November 2014 confirming ‘that climate change is being registered around the world and warming of the climate system is unequivocal’. Thomas Stocker, Co-Chair of IPCC Working Group I, said: “Our assessment finds that the atmosphere and oceans have warmed, the amount of snow and ice has diminished, sea level has risen and the concentration of carbon dioxide has increased to a level unprecedented in at least the last 800,000 years.” Only four years on from that press release, the concentration of carbon dioxide in the atmosphere, as of January 2018, has reached an alarming 408 parts per million – the highest in around 3.6 million years, since a period known as the mid-Pliocene, which had concentrations of carbon dioxide over 400 parts per million. Obviously, in that era the problem was not man-made, but it is known that our industrial activities are accelerating warming at an alarming rate. As a result, it is estimated that around 800 million people are now vulnerable to climate change impacts. The IPCC predicted that greenhouse gases in the atmosphere will likely increase land surface temperatures, leading to droughts. Such a change would also, they speculated, fuel intense cyclones with high wind speeds, create wetter Asian monsoons and significant mid-latitude
storms. We could expect major floods, sea level rises and killer heat waves. Apart from the death, destruction and cost of extreme weather events, there would also be migration of populations escaping disaster. Of course, some of those ‘future’ weather predictions reflect the daily news reports we see today. So, is climate change already at play with weather events and can we expect more intense weather as the norm?
More rain and more heat

It’s important to realise that global warming creates very volatile weather systems. A study published in Nature Climate Change in 2015 revealed that, as a result of the global warming of the last century, extremes in heat that once occurred every 1,000 days now happen around five times more often. If we do manage to keep the global temperature rise limited to 2 degrees C, which is an aim for many governments, we can still expect 60% more extreme rain events and far more frequent extremely hot days, in any given place on the planet. If we go beyond a 2 degree C global temperature rise, we can expect sustained, widespread and very dangerous weather-related phenomena. The fact is, it’s getting hotter each year on average. According to data from NASA, 2017 was the second hottest year on record, without an El Niño – and average global temperatures show that the top 10 warmest years recorded have all occurred since 1998.
How climate change fuels extreme events

So far, the oceans have absorbed around 90% of the world’s excess heat; hence, ocean temperatures have been rising consistently for the last three decades. When oceans absorb heat, they expand, causing sea
How European Weather is affected by Climate Change

News reports on climate change effects in Europe are numerous. Here are a few examples of what is happening now and what will happen in the future because of global warming.

Wind destruction increases in UK
An analysis of data suggested that the UK will be at the mercy of fiercer storms and damaging winds due to climate change. Windstorm destruction will increase by more than a third with a mere 1.5 degree C rise in temperature.

Ski resorts without snow
In 2015 and 2016 several ski resorts were lacking the one thing they needed for visitors, as snow was sparse to non-existent. This might be a glimpse of things to come. Climate scientists have said that by 2099 there will be 70% less snow in Alpine ski resorts.

Change in timings of floods
A study published in Science described how climate change is shifting the timing of European floods. Data on the timing of floods in Europe over the last 50 years was analysed, suggesting that warmer temperatures have led to earlier spring snowmelt floods in northeastern Europe, while winter storms delayed by polar warming have caused later floods around the North Sea and the Mediterranean coast.
level rise – and this comes in conjunction with melting polar ice from global warming. Consider that many of the largest cities in the world, with millions of inhabitants and important infrastructure, are situated on coastlands or next to rivers. Cities like Shanghai, New York, Rio de Janeiro, Alexandria, Osaka and London would become submerged with a temperature rise of just 3 degrees Celsius. Look at cities such as Venice and Miami, which are losing ground to the encroaching sea, to glimpse the challenges to come. Cities will also be increasingly prone to floods from storms. Global warming intensifies storms, which drives storm surges into coastal cities.
Ferocious storms

The storms are getting worse, and that’s official. The global warming effect gives us an explanation as to why: storms feed off latent heat. The more heat that accumulates and gets sucked into a storm, the faster it churns. Research with satellite altimeter data, carried out over two decades, discovered that hurricanes become intense much faster today than they did 25 years ago – storms now reach category three around nine hours faster than in the 1980s. Storms also last longer and have higher sustained wind speeds. There is additional evidence that extra water vapour in the atmosphere makes storms wetter, which might be why the biggest storms produce 10% more precipitation; the downpours can be extreme, which in turn can cause flash floods and mudslides. It’s not just rain, either: there is an increase in snowstorms. National Oceanic and Atmospheric Administration (NOAA) scientists in the USA looked over 120 years of data and found there were twice as many extreme regional snowstorms between 1961 and 2010 as between 1900 and 1960.
The temperature rises in France
Mega heatwaves are predicted for France because of climate change. The last one, in 2003, killed 15,000 people in France and 9,000 across the rest of Europe, with temperatures over 44 degrees C. Researchers predict that future heatwaves will exceed 50 degrees C.
The south of Europe gets hit hard
According to scientists, the highest number of combined adverse impacts in Europe – floods, droughts, storms and heatwaves – will be in the south and southeast, where the high frequency and intensity of extreme weather events will cause havoc.
If we do nothing to prevent an escalation of global warming and continue on the path we are on, we could accelerate towards an 8 degree C rise in temperature by 2100. To put this in perspective, our children’s children could witness a four-foot rise in sea level, ferocious hurricanes, prolonged droughts and large regions affected by desertification, and there will be places on Earth that will, for the first time, be simply unlivable.

The latest target being touted by the UN is to aim for a 3 degree C limit by 2100 – still a figure that, despite requiring a huge commitment from industry, politicians and consumers, would lead to our maps being redrawn. With a 3 degree C rise we can expect weather to be consistently extreme. Weather aligned with this temperature increase would threaten our crops, animals and ecosystems, with devastating consequences for the environment and our food security.

Scientists are reluctant to pin a single weather event on climate change, although some of the media is less cautious. It is true that extreme weather events happen year to year as part of the Earth’s natural processes, but when looking at trends, there is evidence that weather is evolving new patterns in line with global warming. We regularly hear the words ‘new record’ when talking about weather phenomena in recent years, and that suggests a potential lack of stability in weather patterns. When you consider that between 1980 and 2013, climate-related extreme events accounted for around 400 billion Euros of economic loss in EEA countries and over 85,000 deaths, the real costs of global warming become more alarming with every added degree of heat.

What is certain is that climate affects weather, so we should be prepared for our planet and its weather to change.
The GLOBAL VALUE project has been one of the biggest research projects on Corporate Social Responsibility (CSR) ever funded by the European Commission. Its purpose was to enable multinational corporations to measure their impacts on global sustainable development, as Project Manager, Norma Schönherr explained
Tools for multinationals to measure their impacts on Global Sustainable Development

“From the outset,
the overarching objective of the GLOBAL VALUE project was to create knowledge, tools and resources that multinational corporations can use to comprehensively assess and better manage their impacts on global sustainable development,” began Norma Schönherr. The impacts of big business on people and the planet had been in the limelight for years. Trust in multinationals to discharge their corporate responsibility and drive positive social change was at a low point. New ways needed to be found to turn this around and ensure that people were satisfied these companies were accountable and aware of their impacts. In 2015, 150 world leaders agreed to adopt 17 clearly identified Sustainable Development Goals (SDGs), created by the United Nations – ambitious aims such as ending poverty and extreme hunger, as well as providing clean water, gender equality, education, and decent work and economic growth. For these aims to be in any way realistic, it was clear that businesses, especially multinationals, would have to take a leading role in achieving the desired outcomes. Two years prior to this declaration, the GLOBAL VALUE project had already kicked off and was developing resources that would ultimately play a key part in finding ways to track multinational corporations’ impacts on societies and environments across the globe. The project responded to a call in FP7 that asked for the generation of practicable knowledge on measuring the impacts of multinational corporations on global sustainable development, grounded in solid research. The call, and subsequently the GLOBAL VALUE project, came about at around the same time as the European Commission revised its definition of CSR as ‘the responsibility of enterprises for their impact on society’. “GLOBAL VALUE was an important interface in changing the discourse from focusing on
Field research in Tanzania - GLOBAL VALUE was a collaboration between partners from Europe, Africa and Asia.
CSR activities towards consideration of how business impacts sustainable development globally,” said Norma Schönherr. GLOBAL VALUE had three clear aims: improving knowledge around enhancing positive impacts, raising awareness of how multinationals affect surrounding socio-ecological systems, and developing resources to measure and manage impacts on sustainable development in the context of the SDGs. “GLOBAL VALUE made significant strides, in both academia and business practice, towards a better understanding of business responsibility and subsequently better management of business impacts and contributions to the SDGs,” added Schönherr.
The right tools

Companies have been faced with increasingly complex demands from stakeholders on one side, and ever more instruments for addressing those demands on the other. CSR standards and tools have proliferated, with new ones regularly hitting the market, developed by businesses, business associations, public and non-governmental organisations, as well as multi-stakeholder
initiatives. Amid this plethora, finding the right tools to enable corporations to measure, manage and improve their contributions to sustainable development was a challenge, even for the most experienced professionals. “It is true that there is no one-size-fits-all approach here, and we do not provide a ready-made ‘how-to’ blueprint for managing impacts on sustainable development. However, what the GLOBAL VALUE toolkit does is support managers in identifying and answering the critical questions that enable them to deal with this complexity of demands and available tools,” said Schönherr. As a result, the project developed practical resources for businesses to access and use. These resources include the GLOBAL VALUE tool navigator, a unique and free database that helps filter through hundreds of measurement tools to find the most relevant ones for a business, in just three steps. The navigator can be accessed at https://www.global-value.eu/navigator.php. Each tool in the database has been analysed in terms of which SDGs it helps address, how much of the value chain it covers, what results it provides and many more features. There are practical showcases of 15 of the most widely used tools, tested in cooperation with three multinational corporations in three different geographical contexts and industry sectors. In addition, businesses can find thematic guides to measuring and managing impacts, as well as sector profiles. Since its launch in June 2017, the GLOBAL VALUE toolkit has attracted more than 8,500 visitors, around 750 of whom have become registered toolkit users. A vast pool of experience was necessary to provide the knowledge needed to develop the tools accurately. The project created an ‘expert crowd’, a crowd intelligence platform of 262 stakeholders from 60 countries to draw expertise from. Those crowd members were involved in various activities during the process of
developing the GLOBAL VALUE toolkit – from supporting the collaborative stocktaking of existing tools and approaches in the very beginning of the project, to discussions on what defines good impact assessment tools, to testing the GLOBAL VALUE tool navigator at the end of the project. In addition to the online toolkit, the project has also produced training and teaching resources to further support managers. For example, thematic webinars and workshops were developed to help with specific questions when measuring and managing impact. GLOBAL VALUE has also worked with organisations wanting to improve their business impact assessment. For instance, project members have taken a consultative role with the Austrian Development Agency, the advisory board of the Natural Capital Protocol toolkit and co-hosted events with stakeholders, such as the Austrian Initiative for Business and Global Development, CorporAID. “We also received quite a bit of interest from consultancies and tool developers, business associations, development agencies that engage in business partnerships or public authorities that partner with business in public-private-partnerships (PPPs),” elaborated Schönherr. “I personally find that very encouraging because, essentially, no company can advance the sustainable development agenda or impact measurement practice alone. If we are to move beyond check-box exercises towards meaningful engagement with the SDGs, different actors will need to collaborate and talk to each other. We are very glad to contribute to this movement with the knowledge and insights we have built up in the past 4 years.”
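As an illustration of the kind of three-step filtering the tool navigator performs, here is a minimal sketch. The `Tool` data model, its field names and the example entries are invented for illustration only and do not reflect the toolkit’s actual schema.

```python
# Hypothetical sketch of a "three-step" tool filter: narrow a catalogue of
# impact-measurement tools by the SDGs addressed, the value-chain stages
# covered, and the kind of result produced. Data model is illustrative.

from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    sdgs: set = field(default_factory=set)          # SDG numbers the tool addresses
    value_chain: set = field(default_factory=set)   # stages covered, e.g. {"production"}
    result_type: str = ""                           # e.g. "indicator set"

def navigate(tools, sdg, stage, result_type):
    """Three successive filters, mirroring a three-step navigation."""
    step1 = [t for t in tools if sdg in t.sdgs]
    step2 = [t for t in step1 if stage in t.value_chain]
    return [t for t in step2 if t.result_type == result_type]

catalogue = [
    Tool("Tool A", {1, 8, 13}, {"production"}, "indicator set"),
    Tool("Tool B", {13}, {"sourcing", "production"}, "narrative report"),
    Tool("Tool C", {8, 13}, {"production"}, "indicator set"),
]
matches = navigate(catalogue, sdg=13, stage="production", result_type="indicator set")
print([t.name for t in matches])  # ['Tool A', 'Tool C']
```

The real navigator works over hundreds of analysed tools with many more facets, but the principle is the same: each step shrinks the candidate set until only the tools relevant to a company’s context remain.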
What’s in it for them?

Big businesses were often perceived not to prioritise their far-reaching effects on people and the environment, but Schönherr argues that sustainability improvements are increasingly being recognised among business managers as drivers of profit and growth. “I believe that it is a myth that engaging with corporate responsibility must necessarily come at the expense of business success. We have a lot of authoritative research that suggests, quite to the contrary, that CSR and business performance need not be antithetical and can even be mutually supportive. Many companies see sustainability and corporate responsibility as reputational or brand value issues. However, the benefits for societies and environments, which in turn come back as benefits to the business, go beyond that. In the end, it comes down to two major drivers that will continue to incentivise more businesses to engage with sustainable development: the necessity to maintain trust in business as an institution and ensuring the long-term viability of companies in an increasingly uncertain environment.” Many business leaders support this view. The SDGs provide an array of new business opportunities and have become an important driver of innovation. This perception is echoed by a recent report by the Business and Sustainable Development Commission, a high-level forum of business leaders from multinationals, as well as from other private sector and civil society organisations. The business case for engaging with the SDGs is strong, with rewards estimated to amount to at least US$12 trillion in new business opportunities. Schönherr concluded: “This also requires shifts in corporate cultures and the development of new skills among managers, who increasingly will be held responsible not only for market shares and shareholder profits, but for the effects their companies have on communities and the environment in the future. They will have to have the skills to notice and fully grasp the opportunities sustainable development may present to their business.”

The ongoing objective

The GLOBAL VALUE project officially ended on 30 June 2017. However, its participants are committed to maintaining and growing the toolkit over the coming years. They have recently joined a new partnership that specifically deals with advancing environmental and natural capital impact assessment in business. This is one of the arenas where it is hoped those involved will continue to advance the discussion on enabling business to measure and manage their impacts on the SDGs.
GLOBAL VALUE
Assessing the Impacts of Multinational Corporations on GLOBAL Development and VALUE Creation

Project Objectives
The GLOBAL VALUE project had a budget of 3 million Euros, with the overarching focus of helping multinational corporations assess, measure and manage the impact they have on sustainable development.
CP-FP-SICA - Small/medium-scale focused research project for specific cooperation actions dedicated to international cooperation partner countries (SICA)
The project was implemented by the Institute for Managing Sustainability, together with 11 leading research institutions, civil society organisations and companies from Europe, Asia and Africa. • www.global-value.eu/toolkit/consortium
André Martinuzzi, Norma Schönherr, & Adele Wiman Institute for Managing Sustainability Vienna University of Economics & Business Vienna, Austria T: +43-1-31336-4698 E: firstname.lastname@example.org W: www.global-value.eu
André Martinuzzi (Assoc. Prof., Dr) and Norma Schönherr (MSc)
Dr André Martinuzzi is Head of the Institute for Managing Sustainability and associate professor at the Vienna University of Economics and Business. He has more than 20 years of experience in coordinating and leading EU-wide research projects for the European Commission, as well as for international organisations and ministries. He is an expert in the fields of evaluation research, CSR, sustainable development, and knowledge brokerage. Norma Schönherr (MSc) is a research fellow and project manager at the Institute for Managing Sustainability at WU Vienna. Before joining the institute, she held positions in an international environmental NGO, a development agency, and an independent research institute. She is an expert in impact measurement and management. Her main areas of expertise are corporate social responsibility, international sustainability governance and corporate impacts on global sustainable development.
Pioneering yam systems research for improved food security
West Africa accounted for over 90 percent of global yam production in 2014, yet soil degradation and climate change are having a significant impact on yields. More effective soil and crop management in yam systems could help improve crop productivity and boost food security and income, as Professor Emmanuel Frossard, the Principal Investigator of the YAMSYS project, explains
Tuber crops grown
throughout the tropics, yams are an important staple food for millions of people around the world and particularly in West Africa, where they hold high socio-economic and cultural importance. However, soil degradation in West Africa represents a significant threat to tuber yields, an issue that researchers in the YAMSYS project are working to address. “The idea behind the project is to investigate acceptable and feasible methods of improving soil fertility, in order to improve yam productivity,” explains Professor Emmanuel Frossard. The approach has been constructed together with stakeholders, with the goal of developing soil and crop management innovations which are adapted to the various socio-economic and cultural contexts of West Africa.
Current yam cropping practices
The soil itself needs to be highly fertile in order for yams to flourish in the first place, yet current crop management techniques lead to soil degradation. Traditionally, yams are grown after relatively long fallow periods without external inputs. “This means that the soil has been cultivated but then left for many years. We have found that farmers use this previously long-term fallow land because initially they can get high yields. According to farmers, repeated yam cultivation on the same plot rapidly leads to low crop yields. Furthermore, because of the increase in population density, the remaining surfaces under long-term fallow decrease rapidly or disappear,” outlines Professor Frossard. Another major issue with current management methods is the retention of large quantities of the harvest for re-planting, which limits yields. “Traditionally, farmers typically use about a quarter of the harvest as the planting material for the next season,” says Professor Frossard.
The idea behind the project is to investigate methods of restoring soil fertility, in order to improve yam productivity
Understand the diversity of yam systems
The goal of developing acceptable and feasible methods to improve soil fertility and yam productivity starts with a proper understanding of the diversity of agroecosystems, and also of the prevailing socio-economic contexts. This is why the first step of the project, in January 2015, was to characterise the existing soil, vegetation and yam cropping systems across four sites in Burkina Faso and Ivory Coast. Alongside this work, researchers are also characterising these sites with respect to their socio-economic characteristics, investigating a number of questions.
“Which groups of people are present at a specific site? Are they native to the area or are they migrants? When did the migrants arrive? What is the relationship between the native people and the migrants? What is the most important crop? Is it yams, cocoa, or cotton?” asks Professor Frossard. The project will also characterise the agricultural extension systems in the two countries, aiming to learn more about the current strategies and operational goals of these institutions, and to design tools to be used in their national and regional outreach strategies for sustainable yam systems. This first step has already helped researchers to reach a deeper understanding of each of the sites, the yam consumption patterns and the local economic conditions more generally. For instance, in the centre and north of the Ivory Coast, yams are cultivated both for food and cash, while in the south of the country, where cocoa is the main crop, yams are grown for self-consumption. “A lot of Baoulé people migrated from the centre of Ivory Coast to the south of the country in the ‘80s to produce cocoa. This southwards movement of people is important, as Baoulé people traditionally consume a lot of yams, but they are less able to produce them now because most of the land is occupied by plantation crops such as cocoa. So they have to import tubers from areas where yams are still produced, which is expensive,” explains Professor Frossard.
These are important factors to consider in terms of developing innovations that are relevant to local needs. “Farmers in the south of the country are extremely interested in our innovations, because they could help boost production and reduce their expenses for food,” says Professor Frossard.
Innovation platforms
The innovation platforms established at each of the four sites play a key role in the development of these innovations. These platforms bring together interested parties, including yam farmers, traders and administrative figures, to consult and identify potential solutions to specific problems, which Professor Frossard believes will help the project to achieve a lasting impact. “When we design a specific type of innovation, we want to involve the people who can have an impact on the behaviour of farmers,” he explains. This could be micro-finance experts or national authorities, who may be able to heighten awareness and generate more interest. The dialogue established in these innovation platforms has allowed researchers to identify and rank the key bottlenecks in yam production: land scarcity, unpredictable rainfall, soil fertility depletion and poor-quality yam planting material (yam seed). Since land scarcity and rainfall patterns are difficult to modify, it was decided at the innovation platforms to develop innovations to improve seed quality and soil fertility. Following these discussions, plots were established in 2016 at the four sites to demonstrate the production of high-quality seeds. These plots have since been used as school fields to train farmers, and also as a source of planting material for farmers and the project.
Integrated soil fertility management
Discussions in the innovation platforms allowed researchers to co-design innovations to improve soil fertility and yields. These innovations were developed in a site-specific manner using the concept of integrated soil fertility management (ISFM). “When we speak about ISFM, we speak about using high-quality yam seed of improved cultivars, adding commercially produced mineral fertiliser with or without organic matter residues, and of growing yams in rotation,” outlines Professor Frossard. These innovations are currently being tested in field experiments conducted by the YAMSYS team. At the end of the first year, in winter 2017, farmers evaluated the results, and some have already started to implement their preferred innovations in their own fields. The YAMSYS team does not intervene in the farmers’ fields, but analyses how farmers modify their preferred innovations to suit their constraints and opportunities. The team also looks at how farmers record the tuber yields, analyses farmers’ attitudes vis-à-vis the innovations, and assesses the economic benefit resulting from the innovations more generally. This approach could potentially be applied to other crops aside from yams. Yams themselves have not historically attracted a lot of research attention; the project is making an important contribution in this respect, helping to strengthen the research base. “A very important output from the project will be the training that we provide to a generation of students. They can then take the lessons from the project and apply them in their own communities,” says Professor Frossard. Over the longer term, Professor Frossard and his colleagues are considering how they can heighten awareness of the project’s results and influence soil and crop management beyond the four sites that were the initial focus of the research. “We see both in Burkina Faso and the Ivory Coast that people working in the agricultural sector at a national level are showing an interest in our approach. We’ve even received proposals to work with them at the national level, not just at the local level,” says Professor Frossard.
“Thanks to the collaboration with these national organisations, we hope that it will be possible to upscale our approach to other situations.”
YAMSYS Biophysical, institutional and economic drivers of sustainable soil use in yam systems for improved food security in West Africa Project Objectives
To develop innovations in the soil fertility management of yam systems in selected agroecological zones of West Africa, with the aim of increasing crop productivity, food security, the income of actors involved in the yam value chain, and environmental sustainability.
The YAMSYS project (www.yamsys.org) is funded by the food security module of the Swiss Programme for Research on Global Issues for Development (www.r4d.ch) (SNF project number: 400540_152017/1).
• Emmanuel Frossard, ETH Zurich, Switzerland • Beatrice Aighewi, International Institute of Tropical Agriculture, Nigeria • Séverin Aké, Université Felix Houphouët Boigny, Côte d’Ivoire • Urs Niggli, FiBL, Switzerland • Hassan Bismarck Nacro, Université polytechnique de Bobo-Dioulasso, Burkina Faso • Daouda Dao, CSRS, Côte d’Ivoire • Lucien N. Diby, World Agroforestry Centre (ICRAF) • François Lompo, INERA, Burkina Faso • Johan Six, ETH Zurich, Switzerland
• Valérie Kouamé Hgaza, CSRS, Côte d’Ivoire • Innocent Kiba, ETH Zurich
Professor Emmanuel Frossard ETH Zürich Institute of Agricultural Sciences Research station Eschikon CH-8315 Eschikon-Lindau Switzerland T: +41 52 354 91 41 E: email@example.com W: www.yamsys.org
Professor Emmanuel Frossard
Emmanuel Frossard obtained his PhD in agricultural sciences in 1985, from the INPL, Nancy, France. He is currently full professor of plant nutrition at ETH Zurich, where he conducts process-oriented research to understand the drivers controlling nutrient fluxes, contributing to the development of ecologically efficient agricultural systems.
Stratagem to Tackle Damaging Pathogen Harming Farming
Fungal pathogens represent a significant threat to global food crops, reducing yields and adversely affecting quality. A deeper understanding of the mechanisms by which pathogens are able to infect their plant hosts will help lay the foundations for improved control strategies, as Dr Angela Feechan and Dr Anna Tiley explain
The fungal pathogen, Zymoseptoria tritici,
causes Septoria Leaf Blotch, one of the major diseases of wheat across the world, which both adversely affects crop quality and reduces yields. On recognising the presence of a pathogen, plants respond with a nitric oxide (NO) burst which has antimicrobial effects, yet the effects on the attacking pathogen are not fully understood, an area that forms a core focus for researchers in the NO Crop Pathogen project. “The NO burst is a way in which the plant can defend itself against the fungus. Our theory is that maybe the NO burst is released by the plant, and then that might interact with proteins that the fungus releases to try and infect the plant,” outlines Dr Anna Tiley, a post-doctoral researcher working on the project. These proteins effectively help the fungus to invade the host; researchers are investigating how the NO burst interacts with these proteins. “We believe the NO burst might interact with these proteins and de-activate them, but this is still debated,” explains Dr Tiley.
Nitric oxide burst
The NO burst itself has attracted a lot of research attention, and it is known to play an important role in a plant’s immune system, yet its impact on a pathogen has not been fully established. This is a topic that researchers in the project are investigating. “We’re particularly interested in fungal pathogens. How does that NO burst impact on their ability to infect? How does the NO that the plant produces impact on proteins that were secreted from the pathogen? How does the pathogen deal with that NO onslaught?” says Dr Angela Feechan, the project’s Principal Investigator. This research is largely laboratory-based, with Dr Feechan and her colleagues in the project looking to identify the proteins that are secreted during infection and understand their impact.
NO CROP PATHOGEN How does Nitric Oxide (NO) regulate crop pathogen virulence? Dr Angela Feechan School of Agriculture and Food Science Belfield, Dublin 4 T: +353 1 7167779 E: firstname.lastname@example.org W: http://www.ucd.ie/research/people/schoolofagriculturefoodscience/drangelafeechan/
Dr Angela Feechan moved to University College Dublin in 2013. She was awarded a Marie Skłodowska-Curie Career Integration Grant in 2014 and a Science Foundation Ireland Career Development Award in 2016. Her group is focussed on fungal pathogens that cause disease in cereal crops, particularly Zymoseptoria tritici, which causes Septoria tritici Blotch (STB) in wheat. Prior to a research scientist position at CSIRO Australia, she completed a post-doc at the University of Copenhagen and a PhD at The University of Edinburgh. Dr Anna Tiley is a Post-Doctoral Researcher at University College Dublin, and completed her PhD on Zymoseptoria tritici at the University of Bristol in 2016.
We’re particularly interested in fungal pathogens. How does that nitric oxide burst impact on their ability to infect? How does the NO that the plant produces impact on proteins that were secreted from the pathogen?
The pathogen may have its own enzymes to try and turn over the NO that’s being directed at it, another topic of interest to Dr Feechan. “We’re also trying to characterise one of these enzymes in the pathogen, that might play a role in regulating the NO,” she explains. This particular enzyme is S-nitrosoglutathione reductase (GSNOR), an enzyme that is highly conserved across different species and can be used to control NO. “We’re trying to see whether Zymoseptoria tritici has a specific version of the enzyme, GSNOR, and whether this might help it to defend itself from the NO burst that comes from the plant during infection,” outlines Dr Tiley. Researchers are looking to delete the GSNOR gene in this specific pathogen. “We previously did some bioinformatic analysis, looking at the genes encoding this enzyme in other species. We used other sequences of the known GSNOR gene, then aligned that with the Zymoseptoria tritici genome to identify potential candidates,” says Dr Tiley. This work holds broader relevance in terms of potentially protecting crops from infection and preventing fungal diseases. While this is not an immediate prospect, Dr Feechan says the project’s research could help contribute to the wider goal of improving food security in the future. “If you knew which proteins were important for disease, you could design strategies to try and target them,” she explains. The project’s research is largely exploratory in nature, however, and Dr Feechan plans to continue her work in this area in the future. “We’d like to find out more about these fungal proteins, particularly in terms of what proteins they interact with in the plant,” she continues.
Diagram summarising the function of GSNOR in S-nitrosylation of proteins.
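The kind of bioinformatic screen Dr Tiley describes, scoring candidate genes against known GSNOR sequences to shortlist matches in the Zymoseptoria tritici genome, can be illustrated with a toy sketch. The sequences, gene names and threshold below are invented for illustration only; real analyses use dedicated alignment tools such as BLAST rather than this simple per-position comparison.

```python
# Toy similarity screen: score candidate gene sequences against a known GSNOR
# sequence by per-position identity. Sequences and threshold are invented;
# real analyses would use alignment tools such as BLAST.

known_gsnor = "ATGGCTGACCAAGTT"

candidates = {
    "gene_A": "ATGGCTGACCAAGTT",   # identical to the reference
    "gene_B": "ATGGCAGACCAAGTA",   # a close match
    "gene_C": "TTACGGTACGGTTAC",   # unrelated sequence
}

def percent_identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Keep candidates whose identity to the reference passes the threshold.
threshold = 0.8
hits = {name: percent_identity(known_gsnor, seq)
        for name, seq in candidates.items()
        if percent_identity(known_gsnor, seq) >= threshold}

print(hits)
```

Here only the identical and closely matching sequences survive the screen, mirroring how alignment against known GSNOR genes narrows the genome down to a handful of candidates for deletion experiments.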
Wheat leaves infected with Zymoseptoria tritici (a) (left) healthy control leaf (right) infected leaf showing symptoms of necrosis (arrow); (b) magnified image of infected leaf showing Z. tritici asexual fruiting bodies (arrow); (c) magnification of curved asexual spores (pycnidiospores) stained with lactophenol cotton blue.
Wheat field trials at UCD Lyons Research Farm, Lyons Estate, Celbridge, Naas, Co. Kildare, W23 ENY.
The value of biomass
Our biomass resources have historically been under-exploited, due to both economic and logistical factors, but now researchers are taking a fresh look at the topic. We spoke to Dr Tarja Tamminen about the MOBILE FLIP project’s work in developing flexible, mobile processes to treat biomass resources, helping to gain additional value out of the raw materials
A lot of attention
in research is currently focused on biomass resources, with scientists seeking to develop and refine processes to help convert these materials into commercially valuable products and intermediates. Based at the VTT Technical Research Centre of Finland Ltd, Dr Tarja Tamminen is the coordinator of the MOBILE FLIP project, an EC-backed initiative which aims to develop and demonstrate processes to treat these materials. “We are studying raw materials from both forestry and agriculture. We are looking at a wide variety of different types of biomass,” she outlines. Researchers are investigating several different thermal processes to treat these different types of biomass. “One of them is torrefaction, which essentially means removing water from biomass and making
it more compact,” explains Dr Tamminen. “We are also investigating hydro-thermal carbonisation, which is suitable for wet biomass streams. This can be used to produce HTC char, which can then be further activated for various end-products. Higher up in temperature, we are also looking at slow pyrolysis, which produces a different type of char.” Researchers are investigating these processes and using them to treat biomass, which could then have a higher commercial value. Alongside these processes, Dr Tamminen and her colleagues in the project are also looking at pelletising, a process by which organic matter is compressed. “By pelletising general biomass you can make it more useable and reduce transportation costs. Pelletising can also be combined with torrefaction, to make torrefied
pellets, which are even more compact,” she says. By bringing together partners from across Europe, the project consortium encourages researchers to share their knowledge and expertise in particular techniques and methods. “We have partners from different parts of Europe, so we are working with very different types of biomass,” says Dr Tamminen. “We organised an open event in northern Sweden for example, which was very much focused on forestry. We also organised a similar event in an important agricultural region in France, where our French partners showed the Scandinavians how they work with agricultural biomass, and we will organise the third open event in Helsinki area and southeastern Finland from 14th to 16th of May 2018, focusing on biomass conversion into biofuels and biochar.”
This figure shows the slow pyrolysis mobile unit concept and the annual carbon balance in the forest residues case in Finland. Products (process heat, tar, distillates) will be further utilised in energy production, reducing fossil CO2 emissions to the atmosphere. One of the products, biochar, can be further applied in soil amendment, increasing soil productivity and reducing biomass decay accordingly.
Valorising agricultural biomass residues in France.
Fractionating challenging forest residues in Sweden.
Biomass resources
The individual thermal treatment processes may be better suited to specific types of biomass, a topic which researchers in the project have addressed. In the early stages of the project, Dr Tamminen and her colleagues looked at which raw materials were expected to be suitable for each process. “From there, we then coordinated the collection, pre-treatment and delivery of these raw materials to the partners in the project, who are investigating these different processes,” she outlines. Researchers are assessing the performance of these different types of biomass, with an eye on the wider commercial and socio-economic picture. “We try to find the best value for each product that results from these treatment processes. For example, a Finnish partner in the project is studying the use of biochars in agriculture, for soil amendment, which could help to increase agricultural productivity,” outlines Dr Tamminen. “Different biochars are produced from different processes, and researchers are testing how they perform. We’re also looking to evaluate the different raw materials, in terms of their potential for different end-uses.” These raw materials have historically been under-exploited in Europe, for a variety of reasons. One important issue in this respect is that they are often found in quite remote locations, so it can be relatively expensive to transfer them to major industrial sites, while Dr
Tamminen says that other factors can also limit efforts to exploit certain biomass resources. “For example, certain residues from quarry sites can be valuable. But the residue stream is sometimes still quite small, which makes it difficult to interest major companies,” she explains. A core goal in the project is to develop flexible, mobile conversion units, which will change the overall economics around utilising biomass resources, while researchers are also trying to identify other areas in which they could potentially be applied. “We are trying to find some new end-uses for biomass resources, in areas where they’re not currently used,” continues Dr Tamminen. “For example, certain residues are currently turned into animal feed. But using these innovative new processes, they could potentially be converted into something with higher economic potential.” This could mean fuels for co-combustion, biodegradable pesticides, or chemicals for the wood panel industry, along with a variety of other possibilities.
We are also investigating hydro-thermal carbonisation, which is suitable for wet biomass streams. This can be used to produce HTC char, which can then be further activated for various end-products. Higher up in temperature, we are also looking at slow pyrolysis, which produces a different type of char
The ideal approach to converting biomass may vary according to the nature of the raw materials and where they are located. “We will evaluate different cases. We are looking at issues like where the unit should be built, or if it would be easier to transport it to where the raw materials are located. What should be transported? Should the biomass be transported to the unit? We are looking at different scenarios and assessing feasibility,” outlines Dr Tamminen. The mobility of these processes is a key aspect of the project’s work, yet Dr Tamminen says there is room for flexibility in these terms, as it may not always be necessary for the processes to be mobile, or financially realistic when smaller quantities are involved. “Mobility could mean something that can easily be put on a truck or a container. But in many cases, it is not economically feasible to do these processes at very small scales,” she explains.
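The transport economics behind these mobility decisions can be made concrete with a back-of-the-envelope comparison: densified products such as torrefied pellets pack far more energy into a truckload than raw chips, so each delivered gigajoule costs less to move. All figures below (truck capacity, trip cost, bulk densities, heating values) are illustrative assumptions, not project data.

```python
# Rough sketch of transport cost per unit of delivered energy, comparing raw
# wood chips with torrefied pellets. All numbers are illustrative assumptions.

truck_volume_m3 = 90.0          # assumed truck capacity
cost_per_trip = 300.0           # assumed cost of one round trip (EUR)

# (bulk density in kg/m3, lower heating value in MJ/kg) - order-of-magnitude values
raw_chips = (250.0, 10.0)
torrefied_pellets = (700.0, 20.0)

def transport_cost_per_gj(bulk_density, lhv):
    """Transport cost per GJ of delivered energy for one truckload."""
    mass_kg = truck_volume_m3 * bulk_density
    energy_gj = mass_kg * lhv / 1000.0   # convert MJ to GJ
    return cost_per_trip / energy_gj

print(f"raw chips:         {transport_cost_per_gj(*raw_chips):.2f} EUR/GJ")
print(f"torrefied pellets: {transport_cost_per_gj(*torrefied_pellets):.2f} EUR/GJ")
```

Under these assumptions the densified product is several times cheaper to transport per unit of energy, which is why combining pelletising with torrefaction can change the economics of remote biomass streams.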
Economic feasibility
The social and environmental case for utilising and treating biomass resources more effectively may seem clear, yet these processes also need to be economically feasible if they are to be more widely adopted. This is something of which Dr Tamminen and her colleagues in the project are well aware. “We are working with experts, who are bringing together experimental data with data on logistics and other issues, to build business cases and identify what solutions would work well in practice,” she says. The selected processes are now being built into demonstration units by the project’s industrial partners, looking towards the longer-term exploitation of the research. “We have the experimental data that we require, so from this point we’re looking to bring everything together and make the overall assessment,” continues Dr Tamminen. “So we’re building the demonstration units, demonstrating how they function using the selected raw materials, and assessing their effectiveness in terms of producing the products that have been identified as feasible options.”
MOBILE FLIP Mobile and Flexible Industrial Processing of Biomass Project Objectives
Europe has a large potential of underexploited agro- and forest biomass side streams, mainly due to their diversity, seasonality and dispersion. The MOBILE FLIP project aims to enhance their usage through the demonstration of flexible and mobile conversion units based on several technologies: pelletising, torrefaction, slow pyrolysis, hydrothermal carbonisation and hydrothermal treatment for saccharification, which aim to convert the biomass raw materials into biobased products.
Horizon 2020, SPIRE-02-2014 programme Budget: €9.77 million This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 637020 – MOBILE FLIP.
VTT, Finland • CEA, France • FCBA, France • Luke, Finland • SLU, Sweden • RISE, Sweden • Biogold, Estonia • CHIMAR, Greece • Raussi, Finland • SPC, Sweden • RAGT, France • ETIA, France
Tarja Tamminen (coordinator, VTT) and Kimmo Rasa (Luke) are presenting tangible results of the MOBILE FLIP project in the exhibition of the second European Industry Day in Brussels on the 22nd of February.
This area of research is quite prominent at the moment, with many scientists investigating how biomass resources can be used more effectively. While there is a great deal still to learn in this respect, Dr Tamminen is hopeful that the project’s work will have a wider impact, both in terms of practical applications and in informing future research initiatives. “We aim to derive some very practical outcomes, in terms of developing mobile processes for biomass treatment, but in addition we have also created scientific excellence. So we hope that people will find our research, use it, and refer to our papers,” she outlines. This will help in laying the foundations for future research. “A lot of the research results that have been gained during the experimental work have been used in the project, but they can also be used more widely in future projects. So we are disseminating our results quite actively,” continues Dr Tamminen.
Project Coordinator, Dr Tarja Tamminen VTT Technical Research Centre of Finland Ltd Solutions for Natural Resources and Environment Tietotie 2, ESPOO, P.O. Box 1000, FI-02044 VTT, Finland T: +358 40 5324962 E: email@example.com W: www.mobileflip.eu Riitta Kervinen, Senior Scientist, EU-project administration T: +358 40 582 7290 E: firstname.lastname@example.org W: www.vttresearch.com Open Demonstration Event. Finland, May 14-16, 2018. MOBILE FLIP session in WasteEng, Prague, Czech Republic, July 3, 2018.
Tarja Tamminen is Principal Scientist at VTT, with expertise in biomass processing and biorefining chemistry. She previously worked at Oy Keskuslaboratorio, KCL as a Senior Scientist with expertise in several areas, including chemistry related to chemical pulping, particularly connections between residual lignin structure and its reactivity.
Group photo taken at the MOBILE FLIP consortium meeting at FCBA in Grenoble, 8th February 2018.
Evidence-based decision-making turns knowledge into power
Many professionals rely on their personal experience and knowledge when making critical decisions, yet the underlying assumptions on which these decisions are reached are not always clear. The Bayes-Knowledge project is developing new methods to support evidence-based decision-making, as Professor Norman Fenton explains
A professional called
upon to make an important decision in their field will typically draw on their own knowledge and experience, as well as the data available, before reaching a conclusion. Mathematical modelling techniques can give professionals across a wide variety of disciplines stronger foundations on which to make critical decisions, a topic that lies at the core of Professor Norman Fenton’s research in the Bayes Knowledge project. “We are building Bayesian networks which allow for the inclusion of expert judgment in identifying variables, for which data may not be available,” he outlines. A Bayesian network can be used to model probabilistic relationships; a classic example is the relationship between symptoms and disease. “With the Bayesian network approach, you can infer from symptoms back to cause, or you can predict the symptoms from the causes. You can work both ways, which is what makes it different to the statistical models normally used in medical diagnosis for example,” explains Professor Fenton.
Bayesian networks
The project uses this approach to improve evidence-based decision-making, developing novel strategies and so giving professionals a firmer basis on which to reach decisions, even where no data is available. In the absence of data, people often make decisions based on their past experience of similar situations or their gut instinct; Professor Fenton says the project brings the assumptions behind this kind of reasoning into the open. “We’re forcing people to ask: ‘what are the assumptions I’m making here? What are the causes and effects? What’s the justification for believing that if something is likely to happen, then some other thing is likely to happen?’” he outlines. This information can then be used to help improve mathematical models. “Our view is that exposing all the underlying assumptions in your decision-making process helps in improving a mathematical model,” continues Professor Fenton. This is a core part of the project’s overall agenda, with researchers bringing together data, expert opinion and risk and uncertainty information to further refine and optimise Bayesian network models.
By incorporating these causal Bayesian networks you get better decision support. It gives you better predictive capabilities and allows you to analyse a particular situation at greater depth
A key aspect of the project’s approach is that there is scope to include expert judgment in identifying the important variables in a given situation. “Where there might be limited data, or even no data, we look to augment that with expert judgment,” explains Professor Fenton. A good example is modelling the likely progression of a patient with a specific set of symptoms. “A patient may display certain symptoms, and a doctor will use that to diagnose them and predict their likely health outcomes. They aim to forecast their health prospects, given the presence of these different symptoms,” says Professor Fenton. “They’ll have data on this, which will inform statistical models describing the survival prospects of a patient displaying these specific symptoms, helping doctors assess whether they require urgent treatment.” A standard statistical model might not include data on the impact of a particular intervention, however, so now Professor Fenton and his colleagues in the project are exploring a new approach. “With our model, we can look at the relevant causal structure of a problem, and realise where certain interventions may have an impact. Even if you haven’t got specific data on those points, you can use expert judgment about what will happen in cases where you do intervene, given the condition of the patient,” he outlines. A medical expert can provide insights into what type of intervention is feasible under which circumstances, for example, information which can then be incorporated in a Bayesian network model. “It’s about eliciting genuine expert causal knowledge. An expert will know, for example, that certain things will happen before others, and the impact of certain interventions, while you’ve also got time constraints to consider,” says Professor Fenton.
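The two directions of inference Professor Fenton describes, predicting effects from causes and reasoning from observed symptoms back to causes, can be sketched with a minimal two-node network. The probabilities below are illustrative assumptions, not clinical figures.

```python
# Minimal two-node Bayesian network (disease -> symptom) in plain Python.
# All probabilities are illustrative assumptions, not clinical figures.

p_disease = 0.01                    # prior P(disease), e.g. elicited from an expert
p_symptom_given_disease = 0.90      # P(symptom | disease)
p_symptom_given_healthy = 0.05      # P(symptom | no disease)

# Predictive direction: from cause to effect.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Diagnostic direction: from observed effect back to cause (Bayes' theorem).
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(f"P(symptom) = {p_symptom:.4f}")
print(f"P(disease | symptom) = {p_disease_given_symptom:.4f}")
```

Even this toy model shows why the diagnostic direction matters: a symptom that appears in 90 per cent of disease cases still leaves the disease fairly unlikely when the prior is low, exactly the kind of hidden assumption the project aims to make explicit.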
Health economics
This research holds important implications for the healthcare sector, where staff are regularly called upon to make decisions which will not only affect the health of the patient, but will also have social and economic consequences further down the line. A more rigorous method of developing models will give staff more solid foundations on which to make these kinds of decisions. “By incorporating these causal Bayesian networks you get better decision support. It gives you better predictive capabilities and allows you to analyse a particular situation at greater depth,” explains Professor Fenton. With the costs of healthcare a major concern across several European countries, Bayesian network models could play an important role in assessing the impact of specific interventions. “We’ve achieved much better incorporation of so-called utility decision models, where effectively every intervention and decision has utility, so that you can look at trade-offs,” continues Professor Fenton. “You can effectively do a complete cost-benefit analysis of an intervention.”
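The utility-based trade-off described here can be hinted at with a tiny illustration. The utility function and all numbers below are invented assumptions, not the project's decision models:

```python
# Crude expected-utility comparison of "treat" vs "don't treat":
# utility = p(success) * benefit - cost. All values are invented.
def expected_utility(p_success, benefit, cost):
    return p_success * benefit - cost

treat    = expected_utility(p_success=0.85, benefit=100.0, cost=20.0)
no_treat = expected_utility(p_success=0.64, benefit=100.0, cost=0.0)
print("treat" if treat > no_treat else "no treat")
```

With these made-up figures the treatment's higher success probability just outweighs its cost; shifting any number can flip the decision, which is exactly the kind of sensitivity a cost-benefit analysis exposes.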
The healthcare sector is a major area of interest in terms of the application of this research, yet Professor Fenton is keen to stress that the methods developed in the project are also being used in other areas. Alongside medicine, the project’s methods have been tested on case studies in law, forensics and transport. “We had a six-month programme at the Isaac Newton Institute in Cambridge investigating Bayesian approaches to the law,” outlines Professor Fenton. These Bayesian network models are also used quite extensively in certain areas, such as in auditing the operational risk of major companies, while Professor Fenton believes there are also further potential applications in government, finance, and other areas of commerce. “Lots of people are interested in doing work using our research in cyber-security. That’s an area where I expect to see growth,” he says. “We’ve also just started a major EPSRC project called PAMBAYESIAN, which is about putting these Bayesian network-based models into small medical devices in the home.” This is part of wider efforts to reduce the burden of care on the NHS by moving the management of chronic conditions into the home. The challenge here will be to create a decision support system that ordinary people, the vast majority of whom will not be medically trained, can easily interact with. “It’s about dealing with all of that complexity and providing the simplest possible inputs and the simplest possible outputs, so that people know when they need to take their medicine, to increase the dose, or to contact their doctor,” explains Professor Fenton.
BAYES-KNOWLEDGE
Effective Bayesian Modelling with Knowledge before Data

Project Objectives
The Bayes-Knowledge research team uses Bayesian Networks to assess risk and aid decision-making in a wide range of applications. Bayesian networks are built on a structure of causes and effects, into which information of many types may be integrated. Bayes-Knowledge methods bring together data, expert opinion, risk and uncertainty into a formal statistical framework, allowing decision-makers to see their choices clearly. They may be used in the same way as classical methods, where data is the primary source of information. However, Bayesian Networks can also handle those scenarios where data are sparse, or unreliable, by allowing other sources of information to be tapped. The research team develops novel strategies for Bayesian Networks, using new algorithms to extend and refine the use of Bayesian Networks in decision-making.
ERC Advanced Grant (ERC-2013-AdG 339182 BAYES_KNOWLEDGE) - Computer science and informatics. EUR 1 572 562
Principal Investigator
Professor Norman Fenton
Director of Risk and Information Management Research Group
School of Electronic Engineering and Computer Science
Queen Mary University of London, London E1 4NS
T: +44 020 7882 7860
E: email@example.com
W: http://bayes-knowledge.com
W: www.eecs.qmul.ac.uk/~norman/
Professor Norman Fenton
Norman Fenton is Professor of Risk Information Management at Queen Mary University of London. He works on quantitative risk assessment, which typically involves analysing and predicting the probabilities of unknown events using Bayesian statistical methods. This type of reasoning enables improved assessment by taking account of both statistical data and expert judgment.
Manufacturing intelligence to support innovative services

Vast amounts of information are available today on how products and services are used, from which companies can draw valuable insights to improve design. Karl Hribernik tells us about the FALCON project’s work in developing a framework which will help to both improve product-service systems and shorten the development cycle

Many consumers today express their opinions about specific products and services via social media and other channels, while large volumes of information about how products are used are available from other sources as well. This data can offer valuable insights to the commercial sector, helping companies to improve the products and services they offer, a topic central to the work of the FALCON project. “We wanted to look at these sources of product-usage information. So this is information that is generated when a product is being used – from sensor systems for example. We also wanted to take into account social media and non-structured information that is generated by users on Facebook, internet forums and so on,” outlines Karl Hribernik, the project’s Technical Coordinator. “We investigated which of those product usage information channels could be used in product-service design, and for which type of product. How could we technically integrate that information into design and improvement processes?”
Product usage information
This is not just about improving the design of a product, but also the associated services around it. The value of a product is no longer determined purely at the manufacturing stage, but also through the provision of additional services that supplement the initial offering; the project’s work holds clear importance in these terms. “We try to provide tools to visualise, analyse and apply product usage information from different sources. We aim to help companies leverage that information in the services that they offer, beyond the actual products,” explains Hribernik. This research holds relevance to several different areas of industry; Hribernik says four business scenarios have been explored in the project, two of which are business-to-consumer sectors. “We have white and brown goods, such as washing machines, televisions and refrigerators, that are directly marketed to consumers. The second scenario in the business-to-consumer sector is clothing textiles,” he outlines.
FALCON Virtual Open Platform for product-service design and product lifecycle management
The remaining two scenarios are business-to-business cases, where researchers looked at healthcare products and high-tech products. Different channels of product usage information are more important to specific sectors than others, an issue that Hribernik and his colleagues in the project investigated. “For example, there’s a lot of information about clothes on websites like YouTube, information which is valuable to the fashion industry. Social media is less valuable in the healthcare sector, as there is no valuable information out there on healthcare products. There are very good reasons for that, in particular patient privacy,” he points out. A semantic model of the information relevant to each of the use cases has been developed, which lies at the core of the project’s platform. “For example, in the clothing use case there are product characteristics – you’ll find information on popular colours, sizes, and types of garments. Synonyms for these terms are also included in the model,” explains Hribernik. A lot of information in this particular case is by nature unstructured and subjective, with individuals expressing their own views and opinions on clothing, while there may also be differences in interpretation. Colours may be misrepresented for example, while people in certain areas of the world may prefer a looser fit to those in other locations, something which researchers in the project took into account. “We effectively pre-process all the information, then make it available to the product designers on the platform, so that they can verify it. So they can ask; ‘ok, where did this information come from?’ ‘Who actually expressed this opinion about this product?’ ‘How old are they?’ And so on,” outlines Hribernik. In the other use cases, product usage information may be more structured. “For example, with sensors on a product you might get a temperature value, which is very structured information – you know exactly what it means, and you know exactly what component of a product it relates to,” says Hribernik. This kind of information is relatively easy to interpret, while by contrast an Amazon review of a particular product would be significantly more complex. A user could use many different terms to express their opinion, and it may be difficult to understand whether the review relates to any specific component or function, so Hribernik says it’s necessary to look a little deeper. “We take that text apart, interpret it
and extract information out of it, that we can then machine-process later,” he explains. The results of structured and unstructured data can be related to each other, from which it is easier to identify the root causes behind any problems with a product. “If your washing machine isn’t working that well, sensor values about how it’s been functioning can help identify the problem,” outlines Hribernik. “When we analyse the data, we can figure out that the washing machine hasn’t been cleaning very well because it hasn’t been used properly. Maybe the machine has been over-filled for example.”
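The step Hribernik describes, taking a free-text review apart and turning it into machine-processable records, can be sketched in a toy form. The real FALCON pipeline uses a semantic model with synonyms; the component list, word lists and matching logic below are invented for illustration:

```python
# Toy aspect extraction: map a review to (component, sentiment) records.
# All keyword lists are invented; a real system would use a semantic model.
COMPONENTS = {"drum": ["drum"], "door": ["door", "hatch"], "motor": ["motor", "spin"]}
POSITIVE = {"great", "quiet", "reliable"}
NEGATIVE = {"noisy", "broken", "leaks"}

def extract(review: str):
    """Return (component, 'pos'/'neg') for each component the review mentions."""
    words = review.lower().replace(".", " ").split()
    records = []
    for component, keywords in COMPONENTS.items():
        if any(k in words for k in keywords):
            pos = sum(w in POSITIVE for w in words)
            neg = sum(w in NEGATIVE for w in words)
            records.append((component, "pos" if pos >= neg else "neg"))
    return records

print(extract("The drum is noisy and the door leaks."))
```

The structured records this produces can then be correlated with sensor readings for the same product, which is the pairing of unstructured and structured data the article goes on to describe.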
Maintenance and support
Many commercial companies are keen to understand how their products are used, so this research holds broad relevance. With large volumes of data on how a specific product is being used, it could be possible to identify patterns across a larger sample size, and then assess whether further action is required. “If a certain proportion of people are putting too much washing into the machine then it might be necessary to either revise the manual or improve the design of the washing drum,” says Hribernik. In the case of the healthcare products, the FALCON tools can be used to identify which functionalities are used more often than others, insights which can then be applied to make systems easier to use. “We’re looking at how doctors use the user interface. Which buttons are they pressing? Do they press certain combinations more than others? Are there ways in which the user interface could be improved?” outlines Hribernik. “For example, if someone presses the same combination of five buttons ten times a day, maybe it would make sense to have one button for that function.” The technology can be adapted in line with how people use it, rather than the other way round, with user feedback a significant influence on product development. The
The Product Lifecycle with MOL highlighted.
FALCON
Feedback mechanisms Across the Lifecycle for Customer-driven Optimization of iNnovative product-service design

Project Objectives
project’s work could have a significant impact on the production process in general, helping to shorten the design phase of product-services. “By understanding what customers and users actually want faster and to a greater level of precision, it will be possible to meet their demands faster and more accurately. The effect could be to accelerate cycles of product and service innovation,” explains Hribernik. This is not limited to specific sectors, while it could also help smaller companies develop and build on their ideas faster. “We’ve not worked just with larger companies - we’ve also worked with smaller businesses, who are also able to use the FALCON system without the need to invest heavily in IT equipment. You can basically pull in social media and other data, and analyse it,” continues Hribernik. “So it’s good for very small companies, and it can be adapted to different sectors.” There is also scope to include further information within the framework, something which Hribernik and his colleagues may explore in future. This could mean analysing image data from social media, giving further insights to product developers. “I think it would be quite interesting to investigate further in that direction, to include not only the semantic analysis of text, but also of images and other media available on the internet. So that’s another area we could branch out into,” says Hribernik.
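The aggregate usage analysis described in this article, flagging a design issue when many users overload a machine, might look like this in miniature. The telemetry values and the rated capacity are invented assumptions:

```python
# Invented telemetry: per-cycle load readings across a fleet of machines,
# compared against an assumed rated capacity to estimate overload frequency.
loads_kg = [6.5, 9.2, 7.0, 10.1, 8.8, 9.5, 5.0, 9.9]
RATED_KG = 8.0

overload_share = sum(load > RATED_KG for load in loads_kg) / len(loads_kg)
if overload_share > 0.3:  # threshold chosen arbitrarily for the sketch
    print(f"{overload_share:.1%} of cycles overloaded: review manual or drum design")
```

The point of the sketch is the direction of inference: individual sensor records say little, but a high overload share across the sample suggests a documentation or design change rather than user error.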
FALCON aims to optimise the (re-)design of products and associated services by exploiting and processing product usage information. The FALCON virtual open platform will acquire and process usergenerated content from the Internet and data generated by sensors embedded into products, enabling a systematic and customer-driven (re)design of products, services and product-service systems.
RIA - Research and Innovation action
BIBA - Bremer Institut für Produktion und Logistik GmbH, Germany • EPFL - Ecole Polytechnique Federale de Lausanne, Switzerland • TU Delft - Technische Universiteit Delft, Netherlands • UBITECH - Gioumpitek Meleti Schediasmos Yloppiisi Kai Polisi Ergon Pliroforikis Etaireia Periorismenis Efthynis, Greece • Holonix SRL, Italy • Softeco Sismat SRL, Italy • I-Deal S.R.L.C.R., Italy • Arcelik A.S., Turkey • PHILIPS Medical Systems Nederland BV, Netherlands • Dena Cashmere, Mandelli Laura, Italy • Datapixel SL, Spain • Mews Innovation, France • Isadeus, France
Karl Hribernik, Manager of the department of Intelligent ICT for Co-operative Production
BIBA - Bremer Institut für Produktion und Logistik GmbH
Hochschulring 20, 28359 Bremen
T: +49 421 21850108
E: firstname.lastname@example.org
W: www.falcon-h2020.eu

K. A. Hribernik, J. Lützenberger, E. Coscia, K.-D. Thoben, “Feedback Mechanisms across the Lifecycle for Customer-driven Optimisation of Innovative Product-Service Design”, Proceedings of the I-ESA Conference 2016, 29 March-1 April 2016, Guimarães, Portugal.
Dipl.-Inform. Karl A. Hribernik
Dipl.-Inform. Karl A. Hribernik is manager of the department of Intelligent ICT for Co-operative Production at BIBA - Bremen Institute for Production and Logistics. He is the technical coordinator of FALCON and has been involved for more than 15 years in many European projects related to Product Lifecycle Management.
Data driven e-services for water management

Water management is a complex area of research and crosses several disciplinary boundaries, so close collaboration is essential to the development of innovative, effective solutions that are supported by e-services. We spoke to Professor Mariana Mocanu about the Data4Water project’s work in coordinating and supporting research in an increasingly important area of scientific endeavour

A clean and
reliable supply of water is taken for granted in large parts of the world, yet water is becoming increasingly scarce in some locations and concern is rising over the continued depletion of our resources, leading scientists to investigate new methods of managing them more effectively. This is a complex area, requiring a high level of expertise in several different areas of technology, a context in which the work of the Data4Water project takes on clear importance. “The primary focus in this project is not research, but rather coordination and support that helps researchers to exchange information, to network, and to participate in various training programmes,” explains Professor Mariana Mocanu, the coordinator of the project. Data4Water is a twinning initiative funded under the Horizon 2020 programme, aiming to strengthen links between different institutions, which Professor Mocanu says will help boost research capacity in Romania, a key priority for the project. “Twinning projects like this give us the opportunity to join advanced research groups, and also to develop a more sustainable research base,” she outlines.
Sharing expertise
The project will play an important role in these terms, bringing together partners from four different European countries, with the wider goal of sharing expertise and laying the foundations for future research into services to support water management. Based herself at the University POLITEHNICA of Bucharest, Professor Mocanu says the project will make an important contribution to the development of a research base, looking at the key issues around water management. “We try to encourage young people, at an early stage in their careers, to get involved in research in this area. We aim to involve them in this project and give them the opportunity to attend conferences, workshops and hands-on training schemes,” she continues. This approach is designed to give students a solid grounding in the water management field, and also the technologies and methods that could be used to help address the wider issues around it. The project is playing a key role in training the next generation of researchers to deal with key challenges in the field. “Training is an important pillar of the project. Over the course of this project
we will hold three summer schools, each covering a specific topic. The participants will get technical support from the project partners,” outlines Professor Mocanu. The participants are also encouraged to publish papers and several are already in the pipeline. These papers will be made available on the Knowledge-Lake platform, which is designed to facilitate collaboration between researchers. “The Knowledge-Lake platform has two main functions. The first is to host our research papers, so that they’re available to our partners,” says Professor Mocanu. The second key function is to offer project partners the opportunity to collaborate at networking events, reflecting the importance of inter-disciplinary collaboration to the wider development of the water management field. Further training opportunities are also available through the project, including exchange programmes and workshops, so that researchers are aware of the wider picture in the field. “Training is also offered on artificial intelligence methods for instance, which are used in the development of water management solutions,” continues
DATA4WATER
Excellence in Smart Data and Services for Supporting Water Management

Project Objectives
The overall objective of the project is to strengthen research in the field of smart data-driven e-services in water resources management, made available to the international community and/or specific stakeholders such as companies, citizens and authorities.
This project has received funding from the EU Research and Innovation programme Horizon 2020, Call: H2020-TWINN-2015, under grant agreement No 690900.
University Politehnica of Bucharest (UPB), Romania (Coordinator); University of Milano-Bicocca (UNIMIB), Italy; Fraunhofer Institute FOKUS (FOKUS), Germany; IHE Delft Institute for Water Education (IHE), Netherlands.
Professor Mocanu. The next step is to apply this knowledge to the development of effective new e-services. “We have worked on some projects on cyber-physical systems, and certain mechanisms can be applied in various disciplines. You need to have deep knowledge, and also to understand the field in which the system will be applied,” points out Professor Mocanu. “There may be certain constraints in the field of application. So it’s not enough to just develop tools, the tools have to be effective in the field.”

Water management
The wider goal in the water management field is to develop effective, reliable e-services that enable us to make better use of the resources available, which by nature involves elements of different disciplines. While Professor Mocanu’s background is in computer science, she says collaboration with colleagues in other disciplines has given her a deeper perspective on how ICT could be applied to water management. “We had some contact with our colleagues in civil engineering, who specialise in hydrology. They gave us some case studies and explained to us where improved ICT could prove relevant in water management. So we looked at how we could combine our expertise,” she outlines. While historically knowledge of issues around water management was limited to specific sectors, researchers are now able to harness the power of technology to build a deeper overall picture. “Using ICT gives us the possibility to deal with complex processes that influence each other, so we can see the whole picture,” stresses Professor Mocanu. A great deal of information has been gathered by project partners, for example data on water levels at specific points in the river Danube, which can inform the development of e-services for water management. One service which has been developed is a monitoring system, connected to an alarm, which Professor Mocanu says can provide alerts if the water quality changes. “If there is an accident, or the hydrocarbon level goes over a certain threshold, then an alarm is raised. We also have some services that compute the water and pollutant propagation,” she explains. From this point it is possible to compute the risk that polluted water will reach a specific location downstream; Professor Mocanu and her colleagues aim to develop services for wider use. “We tried to avoid using expensive software, by instead using pre-stored cases of propagation, and picking up insights from a kind of database of propagation cases. With our service it’s cheaper and easier to issue warnings, which helps in terms of making it more widely available,” she outlines.

Contact Details
Project Manager: Professor habil. Mariana Mocanu, PhD
University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Department for Computers
Room ED402, 313 Splaiul Independentei, Sector 6, Bucharest, 060042
T: +40.21.4029646
E: email@example.com
Knowledge-Lake: http://data4water.pub.ro/
W: http://data4water.eu

Professor habil. Mariana Mocanu, PhD
Professor habil. Mariana Mocanu, PhD, head of the Computer Science Department at the University Politehnica from Bucharest coordinates the team for Interoperable products and services for decision support, based on geospatial data, and has a long experience in developing information systems for industrial and economic processes, as well as in project management. Her teaching and research is focused on software engineering, systems integration, cyberphysical systems and logic design. Prof. Mariana Mocanu was involved in several national and European research projects, being now principal coordinator of the H2020 Twinning project: Excellence in Smart Data and Services for Supporting Water Management - Data4Water.
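The “pre-stored cases of propagation” service Professor Mocanu describes can be imagined as a nearest-case lookup: rather than running an expensive hydrodynamic model, the service retrieves the closest pre-computed case. The case table, its fields and the distance metric below are all invented for illustration:

```python
# Toy case-based propagation estimate. Each stored case is
# (river flow m3/s, spill size kg, hours until pollutant reaches a downstream site).
# Values are invented, not Danube data.
CASES = [
    (3000, 100, 12.0),
    (3000, 500, 10.5),
    (6000, 100, 7.0),
    (6000, 500, 6.2),
]

def estimate_arrival(flow, spill):
    """Return the travel-time of the nearest stored case (crude scaled distance)."""
    def distance(case):
        f, s, _ = case
        return ((flow - f) / 1000) ** 2 + ((spill - s) / 100) ** 2
    return min(CASES, key=distance)[2]

print(estimate_arrival(5500, 450))
```

The appeal of this design, as the article notes, is cost: issuing a warning needs only a table lookup, not a full simulation run.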
Transforming Business Processes into the Cloud

While a number of major companies have invested in cloud technology to improve efficiency and gain flexibility, some organisations still face challenges in harnessing the benefits of new technologies. We spoke to Robert Woitsch about the CloudSocket project’s work in introducing the concept of Business Process as a Service (BPaaS), which could bring organisations closer to the cloud

The pace of IT innovation offers significant opportunities to the commercial sector, as the emergence of new technologies enables companies to re-think the way they deliver their services, improving efficiency, gaining flexibility in their business models or reducing costs. Many major companies are investing in cloud computing, for example, yet smaller companies with fewer resources typically find it more difficult to harness the benefits of new technologies, a prime motivation behind the work of the CloudSocket project. “Our common goal is to support organisations in the digitalisation of their business, based primarily on transforming their business processes,” explains Robert Woitsch, the project’s Principal Investigator. The project aims to support those organisations in their digital transformation, some of whom may not have a high level of technical expertise, by introducing a new way of using business processes. “The organisations specify their business processes, and based on that, we then propose an alignment of those business processes with cloud offerings,” continues Woitsch.
CLOUDSOCKET
Business and IT-Cloud Alignment using a Smart Socket

Dr Robert Woitsch, Project Coordinator
BOC Asset Management GmbH
Operngasse 20b, 1040 Wien, Austria
T: +43-1-905 10 56
E: firstname.lastname@example.org
W: www.cloudsocket.eu
BOC: www.boc-group.com
ADONIS: https://uk.bocgroup.com/adonis/
ADOIT: https://uk.bocgroup.com/adoit/

Dr Robert Woitsch is the Managing Director of BOC Asset Management GmbH, where he is responsible for innovation and knowledge management. He has worked in EC innovation projects across several areas over the last years, including knowledge management, information and data management, and Industry 4.0.
This work centres first on understanding how a company operates and what it wants to achieve with its business processes, such as invoicing for example. While a start-up may only send out a small number of invoices at first, this may grow as the company expands, at which point it may want to adopt a more efficient method, which entails changing its IT. “The business processes have changed, but the IT infrastructure may not have evolved in the same way,” points out Woitsch. The project is developing the concept of Business Process as a Service (BPaaS) via a marketplace, which Woitsch believes will help start-ups to use cloud technology. “A trusted third party, such as a publicly funded incubator, acts as a broker and operates the marketplace. Different providers offer solutions on that marketplace, whereas customers search for solutions,” he explains. “The marketplace offers IT solutions in the form of business process models, and hence this reduces the need for technical expertise in order to make an appropriate choice for a particular business need.” The more precisely a customer can define their requirements, the more effectively the BPaaS modelling framework based on ADONIS can help in identifying the appropriate BPaaS in the cloud. The broker is effectively a new service, potentially a new source of employment, while the customer benefits from the opportunity to match their business needs to trusted IT in a flexible way. “The challenge is to gain flexibility. An organisation uses a business process as a service for a certain time period, and if the business process has been changed, a more appropriate IT infrastructure in the cloud is aligned,” explains Woitsch. In particular, organisations elaborating new business models need a high degree of flexibility – they want to use only what they need. This also brings benefits for IT providers. “An IT provider can offer their solutions to a focused target group, which supports the development of a niche market. They can offer tools in the context of business processes through such marketplaces and interact with customers with respect to their business needs.”
This means providers can focus on their core area of expertise, while also building a better understanding of their customers’ needs. The project’s research also holds relevance to the emerging paradigm of ‘Industry 4.0’ or the ‘factory of the future’, in which companies will utilise many different IT sources, machines and providers to improve efficiency. “We are proposing this business-process-oriented framework for supporting digitalisation also for factories,” outlines Woitsch. In particular, there is the challenge of accompanying product delivery with the delivery of services like training, consulting or maintenance. This is what CloudSocket is doing with the broker: offering services in addition to the software, and it is expected that this trend will also grow more prominent in industry.
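The marketplace matchmaking idea at the heart of this article, scoring cloud offerings against the requirements a customer attaches to a business process, can be loosely illustrated. The catalogue entries, attribute names and selection rule below are invented, not CloudSocket's actual mechanism:

```python
# Toy BPaaS matchmaking: filter offerings that satisfy the stated business
# requirements, then pick the cheapest. All data below is made up.
OFFERINGS = {
    "invoice-basic":  {"max_invoices": 100,  "eu_hosting": True,  "price": 10},
    "invoice-pro":    {"max_invoices": 5000, "eu_hosting": True,  "price": 80},
    "invoice-global": {"max_invoices": 5000, "eu_hosting": False, "price": 60},
}

def match(requirements):
    """Return the cheapest offering meeting the requirements, or None."""
    candidates = [
        (name, attrs["price"]) for name, attrs in OFFERINGS.items()
        if attrs["max_invoices"] >= requirements["invoices_per_month"]
        and (not requirements["needs_eu_hosting"] or attrs["eu_hosting"])
    ]
    return min(candidates, key=lambda c: c[1])[0] if candidates else None

print(match({"invoices_per_month": 800, "needs_eu_hosting": True}))
```

As the requirements change, for example when invoice volume grows past a plan's capacity, re-running the match aligns the process with a more appropriate offering, which is the flexibility Woitsch describes.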
Helping SMEs tap into big data

Demand is growing for data-intensive applications that harness the potential of big data technologies, yet smaller companies face many challenges in accessing this market. We spoke to Dr Giuliano Casale about the DICE project’s work in developing a methodology and tools that will help accelerate software development, opening up new commercial opportunities for SMEs

A vast amount of data is available today on European consumers and the wider economy, leading to increased demand for software capable of exploiting it and delivering commercial benefits. However, smaller companies are typically relatively limited in their ability to develop this type of software, which was a prime motivation behind the work of the DICE project. “We aim to help SMEs to create new data-intensive applications to embrace the big data market,” explains Dr Giuliano Casale, the project’s Principal Investigator. This work centres around developing a framework intended to help accelerate the development of big data software. “The framework is comprised of two families of tools,” outlines Dr Casale. “One is the development environment, which is based on Eclipse. It allows us to model the requirements and architecture of the application. It’s also possible to simulate performance and reliability.”
DevOps
The tools being developed within the project cover the full cycle of design, development, deployment and ongoing iterative improvement of a data application. The project’s approach is in line with a wider movement in software engineering called DevOps, which emphasises the importance of automation. “The idea is that you create an application through a sequence of prototypes. You test it, deploy it, run it and analyse how it behaves. You keep updating your code and releasing new versions of the application with the appropriate fixes,” says Dr Casale. Updating, re-releasing and testing an application can be quite laborious, so Dr Casale says the DICE framework is designed to automate this process. “We aim to automate this as far as possible, so that the application can go through a sequence of improved prototypes over time,” he explains. “This is effectively a way of accelerating the release process of an application.”

The DICE solution supports the testing and automated configuration of an application, while it also gives developers the opportunity to dig into the monitoring data. Analysis of this data allows researchers to gain deeper insights into what happened during the test, which can help further improve the quality of an application. “We are able to correlate the monitoring data to the architecture that was designed during development. We can then make recommendations on what the user should change to improve the application,” outlines Dr Casale. This process is automated, greatly improving efficiency and allowing developers to reach the market faster. “Our framework supplies tools that enable companies to analyse the properties of an application and to improve it,” continues Dr Casale. This work holds great relevance for smaller companies looking to make use of big data technologies. Currently a company looking to develop a new data-intensive application
would have to learn how to use multiple different tools; the DICE methodology and associated tools offer an attractive alternative. “In DICE we have created a unified framework, which means companies have to learn just one approach to develop these applications. Everything is in the same place, with the same paradigm and with compatible tools,” says Dr Casale. This will help SMEs to not only develop new software on faster timescales, but also to improve the quality of their big data applications. “The DICE methods will enable companies to create big data applications with better performance and reliability than they would have been able to achieve otherwise,” continues Dr Casale.
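The monitoring-feedback step described above – correlating runtime metrics with components of the designed architecture and recommending changes – can be sketched as a simple rule over per-component measurements. The component names, sample values and latency budget below are all hypothetical:

```python
# Illustrative sketch of an automated quality-feedback step: map
# monitoring data onto architecture components and flag the ones
# that breach a response-time budget. All data is hypothetical.

from statistics import mean

# Monitoring samples (seconds) keyed by architecture component.
monitoring_data = {
    "ingestion": [0.02, 0.03, 0.02],
    "stream-processor": [0.40, 0.55, 0.48],
    "storage": [0.05, 0.06, 0.04],
}

RESPONSE_BUDGET = 0.1  # seconds; a hypothetical per-component budget

def recommend(samples_by_component, budget):
    """Return recommendations for components whose mean latency
    exceeds the budget."""
    recommendations = []
    for component, samples in samples_by_component.items():
        avg = mean(samples)
        if avg > budget:
            recommendations.append(
                f"{component}: mean latency {avg:.2f}s exceeds "
                f"{budget:.2f}s budget - consider scaling or redesign")
    return recommendations

for line in recommend(monitoring_data, RESPONSE_BUDGET):
    print(line)
```

A real framework would feed such recommendations back into the design model; here the rule simply flags the component that breaches its budget.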
DICE Developing Data-Intensive Cloud Applications with Iterative Quality Enhancements DICE aims to define a framework for quality-driven development of Big Data applications. DICE offers a novel UML profile and tools to help software designers reason about the reliability, safety and efficiency of data-intensive applications. Project Coordinator Dr Giuliano Casale Department of Computing Imperial College London T: +44 20 759 42920 E: email@example.com W: www.dice-h2020.eu
Dr Giuliano Casale is a Senior Lecturer at Imperial College London, UK. He teaches and does research in performance engineering, cloud computing and big data, topics on which he has published more than 100 papers. He has served on the program committees of over 80 research conferences and serves on the ACM SIGMETRICS Board of Directors.
Wishful Thinking: Open software architectures for wireless testbeds Wireless testbeds play a central role in the testing and validation of new solutions, yet the increasing complexity of these facilities represents a significant barrier to technical innovation. The WiSHFUL project aims to develop software architectures and open interfaces that will accelerate the experimentation process, as Spilios Giannoulis and Ingrid Moerman explain A wide range
of functions today are performed by wireless solutions in offices, factories and homes, and new systems and solutions continue to emerge. However, while wireless systems are growing increasingly ubiquitous, testing and validating these solutions in wireless testbeds is an ever more complex challenge, as Ingrid Moerman, overall coordinator of the WiSHFUL project, explains. “Doing an experiment on a wireless testbed for the first time is difficult, there is quite a steep learning curve,” she says. Researchers in the project aim to lower the barrier for wireless innovation by accelerating the experimentation process and thereby reducing the development costs. “One of the reasons that the WiSHFUL project was established was to examine ways to lower the overhead of experimenting on wireless systems. This includes lowering the overhead of setting up the experiment environment, getting to know the tools, and finally becoming familiar with the development environment of each radio hardware and software platform,” outlines Spilios Giannoulis, the technical coordinator of the project.
Software platforms This work is closely in line with the needs of those involved in development, who have to deal with ever more complex scenarios and a diverse range of technologies in the validation of wireless systems. Researchers in the project are developing open and flexible software platforms with Unified Programming Interfaces (UPI), which Moerman believes will help lower the barriers to innovation. “By providing experimenters with UPIs, they can control very heterogeneous wireless hardware, using similar commands. It’s like talking to different devices in the same language,” she says. “There are different types of UPIs with respect to certain radio and network monitoring and configuration capabilities. The number of measurements and events that we can monitor or parameters we can control is dependent on the specific
technology. Some generic capabilities, such as setting the center frequency, are shared by many technologies, while other capabilities can be very technology-specific.”
WiSHFUL SW architecture
Some examples of how the project helps to lower the barriers to wireless innovation are presented below. For instance, LTE technology has a flexible bandwidth that can be adapted at runtime. Moerman says the project’s approach allows innovators to change the bandwidth without needing to trawl through specification documents and software tools. “We have effectively made an abstraction. We hide the complexity of the hardware and just offer them the UPI interface, which is easy to use,” she explains. Another example concerns the fact that wireless innovators, while typically very technically knowledgeable, may be less familiar with hardware platforms outside their own specialist area. “They can just set up an experiment saying – ‘I’m going to have some Wi-Fi nodes, some other nodes from a different technology, and I want to see what the level of interference is.’ This offers huge opportunities,” stresses
Giannoulis. “Even if they are Wi-Fi experts, we also offer many other advanced features on top of the UPI commands to facilitate the management of full networks. For example, currently, if you want to change the configuration of a set of access points, you have to do that one by one. With the UPI functionality, you can do the configuration across the network in a centralised way with a single UPI command.”
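The UPI idea described above – one command set hiding heterogeneous radio hardware, applied across a whole network at once – can be sketched with a common interface and per-technology adapters. This is a minimal illustrative sketch in Python; the class and method names are hypothetical, not the actual WiSHFUL UPI:

```python
# Sketch of a unified programming interface (UPI) over heterogeneous
# radios: the caller uses one command set, and each technology adapter
# translates it. Names are illustrative, not the real WiSHFUL API.

from abc import ABC, abstractmethod

class RadioUPI(ABC):
    """Common control surface shared by all radio technologies."""
    @abstractmethod
    def set_center_frequency(self, mhz: float) -> str: ...

class WiFiNode(RadioUPI):
    def set_center_frequency(self, mhz: float) -> str:
        return f"wifi: tuned to {mhz} MHz"

class SensorNode(RadioUPI):
    def set_center_frequency(self, mhz: float) -> str:
        return f"sensor: channel set for {mhz} MHz"

def configure_network(nodes, mhz):
    """One 'UPI command' applied across a whole set of nodes,
    instead of configuring each access point one by one."""
    return [node.set_center_frequency(mhz) for node in nodes]

results = configure_network([WiFiNode(), SensorNode()], 2412.0)
for r in results:
    print(r)
```

In this pattern, adding support for a new technology – as external parties did when they extended the platform with LTE – means writing one new adapter, while existing experiment scripts stay unchanged.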
Experimentation testbeds The initial step for a wireless innovator is to set up the experimentation environment. First, they need to select a suitable testbed, then search and reserve the wireless nodes required to build their scenario. No matter which testbed is selected, they can log in with the same authentication process, giving them the opportunity to test their system or solution in multiple different scenarios and/or wireless environments. “Once you are in the testbed you can use the same experimentation tools and run the script for your experiment, using unified tools developed in the FED4FIRE project,” says
Moerman. “Once developers can work in one testbed, it’s very easy for them to go and work with another testbed. Unification is very important here, not only at the testbed level, but also at the platform level. By using the UPIs, a huge number of parameters can be assessed in these wireless testbeds, allowing experimenters to rigorously test their solutions.” The majority of testbeds are currently located in a fixed environment, which is a limitation in terms of testing a solution and assessing its wider effectiveness. The way wireless signals propagate in an office environment with thick walls may be very different to how they propagate in an industrial environment, for example, an issue of which Moerman is well aware. “If testing is always done in the same environment, then it’s not clear if the solution can be applied in different environments or if a solution is only being optimised for that specific wireless environment. That’s why we enable people to do tests in environments that are closer to reality. For this purpose we offer portable facilities that can be deployed at any location, where you can involve real users,” she outlines.
WiSHFUL enabled wireless solutions The project’s research provides solid foundations for developers to thoroughly assess their solutions, which is of course an important issue in terms of their wider applicability. One area which could benefit from runtime WiSHFUL control is smart cities, as Giannoulis explains. “Wireless solutions offer a lot of additional services in a smart city, such as sensor networks that can monitor air and water quality, monitor noise, even help in utilising parking spaces efficiently. It is however difficult to guarantee the stability and reliability of wireless solutions,” he points out. “A lot of the experiments we see are about offering more guarantees. A wireless network is not like a cable, which has a fixed capacity, like 100 megabits a second, which you know is there,” says Moerman. “You can never say that in a wireless network, because the capacity changes depending on the environment. A wireless link has different operating modes, each mode having a different capacity, and these operating modes need to be adapted according to the context. WiSHFUL enables large-scale runtime adaptation.”
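The point that a wireless link has several operating modes, each with its own capacity, and that the mode must be adapted to context can be illustrated with a toy selection rule. The mode names, capacities and SNR thresholds below are hypothetical, not taken from any real standard:

```python
# Toy runtime adaptation: pick the highest-capacity operating mode
# whose minimum required signal quality is still met. Hypothetical
# modes and thresholds, illustrating the idea only.

# (mode name, capacity in Mbit/s, minimum SNR in dB required)
OPERATING_MODES = [
    ("high-rate", 100.0, 25.0),
    ("mid-rate", 40.0, 15.0),
    ("robust", 10.0, 5.0),
]

def select_mode(snr_db: float):
    """Return the best mode supported at the current SNR, or None."""
    for name, capacity, min_snr in OPERATING_MODES:  # best first
        if snr_db >= min_snr:
            return name, capacity
    return None  # link unusable at this SNR

print(select_mode(30.0))  # strong link -> highest-rate mode
print(select_mode(12.0))  # degraded link -> robust fallback
```

A deployed network runs logic like this continuously, which is why the available capacity of a wireless link can never be stated as a single fixed figure the way it can for a cable.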
By providing experimenters with UPIs they can control very heterogeneous wireless hardware, using similar commands
Access to WiSHFUL The project interacts with third parties, external wireless developers and researchers, who use the testbeds together with the software platforms and UPIs. The project’s researchers are keen to continue working with external parties to improve the usability and functionality of WiSHFUL. So far five open calls have been held to explore the wider potential of the WiSHFUL platforms and investigate how they could be enhanced further. “Three out of five calls were just about doing experiments and developing solutions. In two of the calls, external parties have extended our platforms with different technologies,” continues Moerman. “For example, at the start of the project we didn’t support LTE – we only had Wi-Fi and sensor technologies. Through an extension by an external party, we added LTE functionality and hardware to our offer.” The WiSHFUL software will remain publicly available to interested parties free of charge, even after completion of the project.
Weather conditions or interference from other wireless systems using the same spectral band can also have a significant impact on the quality of wireless transmission, for example leading to delays caused by retransmissions. Wireless networks are therefore typically used in scenarios where any delay in transmission will not have too serious an impact. “If you are using wireless communication for critical functions in an industrial environment then you need 100 percent reliability, just like a wired connection. These issues have not yet been resolved, but WiSHFUL offers many enablers to increase overall reliability. We need to continue with research and innovation to increase spectrum efficiency, and minimise latency and packet loss,” says Moerman. The WiSHFUL project will have a wider impact in these terms, giving researchers the opportunity to develop and test wireless solutions for free, helping them to accelerate the process of technical innovation. “WiSHFUL aims to change the way we experiment but also the way we control a deployed wireless network at runtime in the years to come,” says Giannoulis.
Wireless Software and Hardware platforms for Flexible and Unified radio and network control Project Objectives
The WiSHFUL project aims to lower the threshold for experimentation, with a view to fostering wireless innovation, and to increase the realism of experimentation by offering open, flexible and adaptive software and hardware platforms for intelligent radio and network control, allowing rapid prototyping of innovative end-to-end wireless solutions and systems in different vertical markets. The WiSHFUL project further offers portable wireless test facilities that can be deployed at any location, allowing validation of innovative wireless solutions in the real world.
EC funding: €5,171,000, of which €1,990,000 is for Open Calls.
• Interuniversitair Micro-Electronica Centrum (IMEC), Belgium • The Provost, Fellows, Foundation Scholars & the other Members of Board of The College of The Holy & Undivided Trinity of Queen Elizabeth near Dublin (TCD), Ireland • Consorzio Nazionale Interuniversitario Per Le Telecomunicazioni (CNIT), Italy • Technische Universität Berlin (TUB) Germany • nCentric Europe BVBA (NCENTRIC), Belgium • Rutgers, The State University of New Jersey (RUTGERS), United States • Seoul National University (SNU), Korea (Republic of) • Universidade Federal Do Rio De Janeiro (UFRJ), Brazil
Professor Ingrid Moerman Interuniversitair Micro-Electronica Centrum (IMEC) Kapeldreef 75, 3001 Leuven, Belgium T: +32 9 331 49 26 E: firstname.lastname@example.org W: www.wishful-project.eu
Professor Ingrid Moerman
Ingrid Moerman is a part-time professor at the Faculty of Engineering and Architecture at the Ghent University (UGent). She is also a staff member at IDLab, a core research group of imec with research activities embedded in UGent and University of Antwerp. Ingrid Moerman is coordinating the research activities on mobile and wireless networking, and is leading a research team of about 30 members at UGent. Ingrid Moerman is coordinator of the WiSHFUL project.
21st Century Benefits. The Future of the Welfare State Many European nations established welfare states during the twentieth century to provide social protection for all citizens through social insurance and social services, yet the existing model is coming under increasing pressure. Researchers in the Welfare State Futures programme are investigating fundamental questions about the future of welfare, as Professor Ellen Immergut explains The modern welfare state has its roots in events towards the end of the nineteenth century, as many countries responded to rapid industrialisation by exploring new social insurance models. While this was an important period in terms of re-defining the role of government with respect to the social and economic health of citizens, the more dramatic changes came later on, as Professor Ellen Immergut explains. “The really big changes around the development of an expanded welfare state came after the end of the Second World War,” she outlines. The welfare state has since become a pillar of many European societies, yet the social, industrial and economic circumstances of today are very different to those of the 1940s, and established welfare models are coming under increasing strain. “Trends around demographics and migration, and changes in family structures, all pose significant challenges to the welfare state,” says Professor Immergut.
Welfare state These trends raise important questions about the future of the welfare state, a topic which is a central part of Professor Immergut’s
WSF Thematic Workshop Gothenburg.
research agenda. Based at the European University Institute in Florence and Humboldt Universität zu Berlin, Professor Immergut is the coordinator of the Welfare State Futures programme, an initiative comprised of 15 different projects engaging nearly 200 researchers to investigate various aspects of this important topic. “We’re investigating
questions around social work and families, healthcare, unemployment and many others. There are many sub-topics,” she says. Several projects are looking at healthcare provision, a core pillar of the welfare state. “Almost 100 percent of the European population has health insurance coverage. Healthcare is largely publicly provided in Europe, but there
WSF Thematic Workshop and Summer School Groningen 2017.
HEALTHDOX Project @ CES Conference Glasgow 2017.
TRANSWEL Project Lecture Series Vienna 2017.
are variations in the structure and payment arrangements,” outlines Professor Immergut. “We’re looking at questions like: What’s the impact on health inequality? Does that play a role in how people seek treatment? What about people in superdiverse neighbourhoods – how do they access care? And how adequate is this care? More generally, what does inequality look like today and what are the important dimensions?” A significant proportion of European citizens also have private healthcare insurance, allowing quicker access to high-quality care. However, while those who purchase private insurance still contribute to the public system through their taxes, this may affect public support for the welfare state, an issue of great interest to Professor Immergut, whose work ranges from studying health inequalities to studying public opinion on health. “A big question for me is: does having more private options undermine social solidarity? If you reduce your waiting time by purchasing private insurance, do you stop supporting the public system?” she asks. The future of publicly-funded healthcare systems depends to a large degree on public consent and people’s willingness to contribute through taxation, another topic the project is addressing. “We are looking at the financial sustainability of healthcare systems in terms of people’s willingness to pay the taxes that it costs to provide care,” explains Professor Immergut. There are many different strands to this work, bringing together sociological, political and economic research. One project within the wider Welfare State Futures initiative looked at the level of public acceptance of the healthcare system
in four countries: the UK, Norway, Slovenia and Germany. “A group of deliberative forums was held in the project. People discussed their level of satisfaction with their healthcare system and how much they supported it. It was quite interesting to see the differences,” outlines Professor Immergut. These discussions revealed deeper insights into public perceptions of healthcare systems, and how fair they are. “People in the UK really love the NHS for example. However, they are very worried that some people are taking advantage,” explains Professor Immergut. “They had strong feelings of solidarity towards people they considered deserving of access to the NHS – but they wanted to keep other people out.”
Research into people’s attitudes in Norway, Slovenia and Germany showed a different set of concerns, however. One major issue across these countries is people who are perceived as abusing the public system by failing to take responsibility for their own health, while there are also differing attitudes to the role of private healthcare, which in part is a legacy of communism. “In Slovenia, people are very concerned about the emergence of private medical care, because they’re used to a public system. Even though health services in Slovenia have improved, people feel upset about private profit coming into the health system,” outlines Professor Immergut. While researchers found that in general people are supportive of the principle of publicly-funded healthcare systems, there are pockets of discontent, which Professor Immergut says politicians should be aware of. “This does have the potential to be mobilised by populist parties. Even though people like the welfare state, there are things about it that people feel are very unfair,” she says. This could be around access to healthcare, for example, such as when the perception takes hold that recent arrivals to a country have gained rapid access to healthcare services without necessarily paying into the system.
A big question for me is: does having more private options undermine social solidarity? If you reduce your waiting time by purchasing private insurance, do you stop supporting the public system?
The European population is very mobile, and immigrants may have very different attitudes towards welfare, another topic researchers are exploring. “One project looks at how immigrants to different European countries perceive the welfare state. What do they know about it? What are their attitudes? Do they maintain ideas they had from their country of origin, or do they adopt attitudes that are similar to the country they migrated to?” says Professor Immergut. There is a strong comparative element to this research. “This is a broad programme, bringing together
WSF Welfare State Futures Project Objectives
• to advance excellent inter-disciplinary and comparative research on Welfare State Futures on a pan-European basis • to support capacity building for welfare state research on a cross-national basis throughout Europe • to disseminate research-based knowledge on welfare state issues of societal, practical and policy relevance in cooperation with relevant users and experts
Project Partners and Funding
The programme is funded by 15 NORFACE partners and the European Commission (ERA-Net Plus funding, grant agreement number 618106). The Swedish Research Council for Health, Working Life and Welfare (Forte) has made an additional contribution to the programme. €19 million of funding is allocated to fifteen projects that started between late 2014 and early 2015. Since the inception of the WSF Programme, NORFACE has gained four new partners, bringing the current membership to 19. More information about the NORFACE network and its partners can be found on the NORFACE website (http://www.norface.net/).
NORFACE Partners Austrian Science Fund (FWF) Czech Academy of Sciences (CAS) Independent Research Fund Denmark (IRFD) Deutsche Forschungsgemeinschaft (DFG) Estonian Research Council (ETAG) Foundation for Science and Technology (FCT) L’Agence nationale de la recherche (ANR) Luxembourg National Research Fund (FNR) Netherlands Organisation for Scientific Research (NWO) Research Council of Lithuania (RCL) Slovenian Research Agency (ARRS) Swiss National Science Foundation (SNSF) The Academy of Finland (AKA) The Economic and Social Research Council (ESRC) The Icelandic Centre for Research (RANNÍS) The Irish Research Council (IRC) The National Science Centre (NCN) The Research Council of Norway (RCN) The Swedish Research Council (VR) The Social Sciences and Humanities Research Council (SSHRC) of Canada is associate partner and participates in several NORFACE programmes.
people from many countries. This gives us the opportunity to collaborate not just within projects but across them, which allows for bespoke comparison,” stresses Professor Immergut. The underlying issue in terms of fairness across different countries is often that people feel they are putting more into the system than they are getting out, leading to calls to link access to the welfare state more closely to contributions. This raises fundamental questions about the whole concept of welfare, which Professor Immergut and her colleagues in the project are investigating. “Should we re-think personal responsibility for welfare? How can we re-conceptualise it?” she asks. Researchers are looking at people’s attitudes towards different methods of making access to welfare conditional, and their willingness to contribute to welfare provision. “What do people find fair? How do people change their behaviour in different circumstances?” continues Professor Immergut. “We’re finding that people’s individual concepts of fairness play a very important role in determining how much they want to contribute.”
Behavioural economics This is of course an important issue in terms of the financial sustainability of the welfare state, and a number of tax authorities are keen to explore methods of increasing receipts. The work of behavioural economists holds clear relevance here, with researchers finding that subtle changes in wording can help to increase receipts, which is borne out by work in the Welfare State
Futures programme. “We held an interesting experiment on taxation, together with the Norwegian tax authorities,” outlines Professor Immergut. Letters were sent out to Norwegian taxpayers with subtly different wording, which affected people’s willingness to pay. “In one letter taxpayers were informed that it was time to report their foreign income. A second letter stated that there were punishments if people did not report their foreign income, while a third variant said it was people’s moral duty to report this income,” says Professor Immergut. “Surprisingly, the third group responded very well, and the tax authorities gained a huge amount of additional revenue.” A number of national governments have picked up on these kinds of ideas, and Professor Immergut is keen to make the project’s findings more widely available, both to the wider public and as an evidence base to inform welfare policy. While providing sufficient resources is clearly central to the long-term future of the welfare state, Professor Immergut believes it’s also important to consider political factors and the level of public support. “The amount of resources you can commit to the welfare state depends on how enthusiastic people are about paying into the system,” she points out. This rests to a large degree on whether people feel a sense of social solidarity, and that everybody is making a fair contribution. “My sense is that having that feeling of a shared common good is very important to people’s sense of fairness and their willingness to support the welfare state,” continues Professor Immergut.
WSF Thematic Workshop, Gothenburg.
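An experiment of this kind – several letter variants compared by how many recipients respond – is typically analysed by comparing response proportions across the groups. The sketch below uses entirely hypothetical counts purely to illustrate the comparison; these are not the Norwegian figures:

```python
# Illustrative analysis of a three-arm letter experiment: compare
# the share of taxpayers reporting foreign income in each group.
# All counts below are hypothetical, not the actual study data.

letter_groups = {
    "neutral":    {"responded": 120, "total": 1000},
    "deterrence": {"responded": 150, "total": 1000},
    "moral-duty": {"responded": 210, "total": 1000},
}

def response_rates(groups):
    """Return the response rate for each experimental arm."""
    return {name: g["responded"] / g["total"] for name, g in groups.items()}

rates = response_rates(letter_groups)
best = max(rates, key=rates.get)
print(rates)
print("highest response:", best)
```

With real data, the comparison would also include a significance test on the difference in proportions before drawing any policy conclusion.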
Professor Ellen M. Immergut, Scientific Programme Coordinator Ziegelstraße 13c, Room 325, 10117 Berlin, Germany | Via dei Roccettini 9, San Felice 28, 50014 San Domenico di Fiesole (FI), Italy T: +49 30 2093 1456 E: email@example.com W: www.welfarestatefutures.org Professor Ellen M. Immergut Ellen M. Immergut is Professor of Political Science at the European University Institute in Florence and Humboldt Universität zu Berlin, having previously held professorships at the Universität Konstanz and the Massachusetts Institute of Technology. She has published on Health Politics, Pension Politics, and more generally on welfare state reform.
Getting to the core of social innovation Social innovation projects help to change the way we live and work, yet the field itself is relatively under-researched. Now the SI-DRIVE project is taking a fresh look at the topic. We spoke to Jürgen Howaldt, Christoph Kaletka and Antonius Schröder about their work in extending knowledge on social innovation and laying the foundations for further research The concept of social innovation has a
major role to play in addressing contemporary social and economic challenges, as Europe moves towards a more knowledge-based economy. While technological innovation has clearly been crucial in shaping modern society, social innovation is also a major factor in changing the way we live, work and travel. “Social innovation can be broadly defined as new social practices that diffuse into wider society and influence change processes,” says Jürgen Howaldt. Based at the Technical University of Dortmund, Howaldt is a key member of the SI-DRIVE project team, an EC-backed initiative investigating and analysing the concept of social innovation, together with his colleagues Antonius Schröder and Christoph Kaletka. “We are looking at social innovation in a broad sense,” says Kaletka. “One of the misunderstandings which we believe has emerged over recent decades regarding social innovation is that it is quite narrowly defined. Social entrepreneurship is a very important element of social innovation for example, but it’s certainly not the only one.”
Researchers in the project are taking a broader view, looking at more than , social innovation initiatives across the world in seven major policy areas, with the aim of building a deeper understanding of their nature, characteristics, and wider impact. These social innovations are often related to specific societal challenges, commonly on the local level. “It might be that established systems have failed in some way, or that new demands have emerged from say government or civil society, or other motivated actors and innovators. Social innovation is a way of finding new solutions and changing the social practices of the population,” outlines Schröder. This might mean a car-sharing scheme to combat traffic congestion for example, or the development of new healthcare models; one of the main goals of the project is to improve our understanding of the relationship between social innovation of this kind on the ground, and wider social change on the macro level. “While of course some social innovation initiatives influence social change, most social innovators do not
actually start out with the ambition of creating social change,” says Kaletka. The focus for social innovators is more typically on addressing specific social challenges in their local area, such as alleviating poverty, or tackling loneliness among elderly people. Technology can play an important role in addressing these types of issues, yet Howaldt says it must be embedded in social practice if it is to have a sustained impact. “Technology alone is not the solution, but in some cases it enables new social practices to develop, to cope with major societal challenges or deal with emerging demands,” he explains. The relationship between social and technological innovation is a major area of interest in the project, and researchers have been looking at case studies across each of the seven different policy areas. “For example, in the eHealth field, technology plays a very important role, but it’s less integral in fighting against poverty,” continues Howaldt. “It’s really a very interesting picture, where we can look to understand the relationship between social innovation and technological innovation,
across different case studies and social innovation initiatives.” A social innovation project may be initially rooted in a local area, but technology can help to heighten awareness of its impact, potentially then inspiring people in other areas to establish similar initiatives. This point of how a social innovation is perceived by wider society, and whether it is then taken up by other actors, is an important aspect of the project’s research. “Is that social innovation diffused into society, is it widely accepted? Does it lead to the establishment of new institutions that help us to deal with those challenges? SI-DRIVE is looking at the impact of social innovation initiatives in terms of social change,” says Howaldt. Researchers adopt an objective perspective in this regard, looking at the full impact of social innovation, not just the positive effects. “From a scientific and research perspective we always try to understand not only the positive outcomes of a social innovation, but also the possible negative repercussions of such developments. It’s very difficult to say if a specific social innovation is intrinsically ‘good’,” explains Kaletka. An example could be a social innovation organising the re-distribution of excess food from restaurants and supermarkets to the homeless. While this has positive
social effects, bringing food to people in need and helping to allocate resources more efficiently, it may also lead to some level of disruption for others. “Some actors might be negatively affected. For example, those who maybe previously found work in generating energy out of food waste,” points out Kaletka. These different perspectives need to be considered in terms of understanding the impact of social innovation, and also the potential for these types of initiatives to be replicated elsewhere and contribute to wider social change. “An important question is whether an initiative or idea is sustainable. Could it be replicated or diffused to other regions? Or at least, if it has been implemented in this specific city, has it been successful? Has it been maintained over a longer period? So the sustainability question is very important,” stresses Kaletka. This may be affected by the organisation of the specific social innovation initiative and whether it is part of a wider eco-system, with links to other stakeholders. Many NGOs and not-for-profit organisations are involved in social innovation initiatives, along with other actors. “When considering solutions to a social challenge, it is very important that there is a kind of social innovation eco-system, integrating all the relevant stakeholders from different sectors and areas. For instance, it could be relevant to integrate the church, the public administration, local businesses and other civil society actors to address a problem in a common and sustainable way,” says Schröder.
When considering solutions to a social challenge, it is very important that there is a kind of social innovation eco-system, integrating all the relevant stakeholders from different sectors and areas
Analysis of social innovations shows that research institutes and universities are major players in a relatively small proportion of cases, a finding which surprised Howaldt. “Universities and research institutes played an important role in less than percent of the initiatives that we analysed,” he says. “I think there is undeveloped potential in social innovation.”
SI-DRIVE This stands in stark contrast to technological innovation projects, in which universities typically play a far more prominent role in research and development. The scaling and diffusion of social innovation projects is another important point in this regard. “In some cases social innovation initiatives may have developed, and yet the participants are maybe not aware that something similar may already have been done in other areas,” says Schröder. This points to a need for more effective information-sharing and support for social innovation. “It’s not only about support for the development of scaling strategies, that’s only one side of the diffusion of research and innovation. It’s also about enhancing the capability of society to take up and imitate solutions that have been developed in other parts of the world,” outlines Howaldt. “We found from our mapping that a greater part of the social innovation initiatives that we analysed had utilised ideas from other social innovation initiatives. Imitation and innovation are closely connected.”
Eco-systems There are clearly a multitude of different factors to consider in analysing social innovations and understanding why some
scale successfully, while others fail to have a lasting impact. Researchers in the project aim to investigate these factors, to build a clearer picture of the innovation eco-system, with the wider aim of informing social policy development. “We are developing a policy declaration that we will present at our final conference, together with our colleagues in the project. We will describe the insights that have been drawn from the project,” says Howaldt. The project team is also closely involved in the development of a more comprehensive innovation policy in Germany. “We focus on social innovation as part of a comprehensive innovation policy, describing new ways and concepts of promoting social innovation,” outlines Howaldt. The second major outcome of the project will be to further strengthen the social innovation research community, laying the foundations for continued investigation. A European School of Social Innovation has been established, bringing together researchers from different countries, and Kaletka believes it’s important to encourage continued collaboration between researchers. “Different social innovation projects and research communities are starting to exchange their views and help one another,” says Kaletka.
Social Innovation – Driving Force of Social Change Project Objectives
The project’s research is guided by the following four objectives and expected outcomes: • To determine the nature, characteristics and impacts of social innovation as key elements of a new paradigm of innovation (strengthen the theoretical and empirical base of social innovation as part of a wider concept of innovation that thoroughly integrates social dimensions) • To map, analyse and promote social innovations in Europe and world regions to better understand and enable social innovations and their capacity for changing societies • To identify and assess success factors of social innovation in seven particular policy areas, supporting reciprocal empowerment in various countries and social groups to engage in social innovation for development, working towards Europe 2020 targets and sustainable development (e.g. Sustainable Development Goals (SDG)) • To undertake future-oriented policy-driven research, analyse barriers and drivers for social innovation; develop tools and instruments for policy interventions.
FP7 Programme for Research of the European Union – Collaborative project Socio-economic Sciences and Humanities SSH.2013.3.2-1 Social Innovation – empowering people, changing societies?
Please visit website for full details
Antonius Schröder, Member of Management Board - European Research / Infrastructure Research Area 3 “Work and Education in Europe” Sozialforschungsstelle Dortmund - sfs Technische Universität Dortmund Evinger Platz 17 D-44339 Dortmund T: +49-(0)231-8596-243 E: firstname.lastname@example.org W: www.si-drive.eu Antonius Schröder, Professor Jürgen Howaldt and Dr Christoph Kaletka
Antonius Schröder (Left) is a Senior Researcher and member of management board of the social research centre (sfs) at TU Dortmund University. Professor Jürgen Howaldt (Centre) is Director of Sozialforschungsstelle Dortmund, TU Dortmund University and professor at the Faculty of Economics and Social Sciences. Dr Christoph Kaletka (Right) is a Senior Researcher and member of the management board at Sozialforschungsstelle, central scientific unit of TU Dortmund University (TUDO).
The Negative Impacts of Social Media Social media can be our preferred method of communicating with friends and family. It’s a medium for broadcasting our thoughts, feelings and political views and, for some, it’s the very essence of daily living. Despite its usefulness for communication, there is a dark side to social media. Research indicates these online platforms make us feel inadequate and end up reflecting only our own views back at us. It’s time to ask ourselves: is social media really that social? By Richard Forsyth
You are likely to have one or more of the most popular social media platforms on your mobile phone. Despite their widespread prevalence, platforms like Facebook, Twitter, Instagram, YouTube and LinkedIn are still growing in user numbers. According to Statista, in the third quarter of 2017, Facebook had 2.07 billion active monthly users and Twitter had 330 million. Statista revealed that Instagram, popular with younger adults, reached 800 million active monthly users in September 2017. The video upload platform YouTube is watched by 95% of Generation Z, says AdWeek, and sees 300 hours of video uploaded every minute, with 5 billion videos watched daily. It’s not just fun and kitten pictures either: the business network LinkedIn had around 500 million users as of April 2017. The point is, social media is becoming the ‘go-to’ place for all our social and information needs and makes up a significant part of our everyday lives. As we carry digital devices with us everywhere, social media has become woven into the fabric of daily life, from people posting what they ate at breakfast to sharing their latest business meetings, or as a way to see the latest news within minutes of it occurring. What’s transparent about all these platforms is that they are focused on attracting and accumulating likes, views, shares and comments. The singular goal of social media is popularity: not so much the value of the information, nor its credibility, but simply the number of people engaged with what you post. This can lead to some disturbing impacts on both individuals and on whole nations.
The sadness of comparison Let’s start with a deeper look at how an individual can be negatively affected by social media. The American Academy of Pediatrics warned that ‘kids and teens’ are particularly vulnerable on social media when
it comes to cyberbullying but also with what’s termed ‘Facebook Depression’, triggered by comparison to others. This phenomenon is relevant to all age groups. People often post their best pictures, the events they go to, the warm family images, pictures of themselves looking good, and of course there will be other users seeing those posts and believing they are missing out, left behind or not as successful in life. Comparisons like this, when relentless, can affect a person’s perception of self-worth – whether that is being unhappy with body image, feeling like a failure in terms of wealth or simply feeling like everyone else is living a better life. Social media encourages you to fish for the reward of likes as verification that you have shared your feelings successfully and to put a stamp down that those feelings are acknowledged. If friends or the public don’t give you likes, or are openly critical in front of your audience of peers, it can be damaging to your sense of self and discourage you from participating. Surprisingly, hitting the like button on other people’s content has been shown to be just as harmful. A study titled Association of Facebook Use With Compromised Well-Being: A Longitudinal Study, by Holly B Shakya and Nicholas A Christakis, revealed that over the course of a year of daily Facebook use, physical health, mental health and life satisfaction were reduced when liking other people’s content. The process equates to an erosion of your own worth, triggering envy and lower self-esteem by appraising others in your peer group for their achievements in happiness. It’s worth mentioning that the research also linked negative impact with quantity of use as opposed to quality of use. The report says: ‘Exposure to the carefully curated images from others’ lives leads to negative self-comparison, and the sheer quantity of social media interaction may detract from more meaningful real-life experiences.’
Over the course of a year, when using Facebook daily – physical health, mental health and life satisfaction were reduced when liking other people’s content. The process equates to an erosion of your own worth, triggering envy and lower self-esteem by appraising others in your peer group for their achievements in happiness.
An important parallel drawn in this research is that with in-person, face-to-face social interaction, this decline in our health does not occur. When we are with people physically, there are all sorts of interpersonal cues and signs that are missing online, as well as that feeling of being with a living, breathing person, as opposed to a screen. When we are on social media we are often sitting and sedentary, with our head bowed down, oblivious to the real world around us. During interaction we can be reduced to short responses that may not adequately express ourselves, or we might be one of many responding to a post – so it’s not personal one-to-one communication – it’s like shouting into a room of people, clapping your hands in an audience or heckling in a crowd. Social media platforms, although built around sharing, can lead to feelings of isolation rather than togetherness when used frequently as a main form of communication. Research by the creators of the LooseEnds app found that people who use three or more social media accounts go a fortnight without seeing a friend, socialising about once a month on average – whereas those with just one social media account tended to go out more than six times a month to see people face to face. The evidence seems to be that we can sometimes drift away from our friends if we rely on social media. For instance, Steven Strogatz of Cornell University found that social media can diminish the connections with the people that matter to us most, as we invest more time in browsing and interacting with acquaintances – so it damages our more meaningful relationships.
Addictions and abuse It’s possible for people to form a real addiction to social media. A study published in Psychological Reports: Disability and Trauma featured research that compared Facebook addiction to cocaine addiction, finding that the same parts of the brain are affected – those which process the significance of events and emotions and the anticipation of rewards. Professor Ofir Turel of California State University explained that in true addiction: “there is very strong acceleration associated with the impulsive system often coupled with a malfunctioning inhibitory system.” The most serious impact on an individual is when depression turns to suicidal thoughts. There is increasing evidence to suggest that, in some circumstances, using social media can lead to a path toward suicide. Suicide is a leading cause of death worldwide according to the World Health Organisation. Cyberbullying, in particular, has led to higher suicide rates. In the research piece Prevalence, Psychological Impact, and Coping of Cyberbullying Victims amongst College Students, it was discovered that ‘victims of cyberbullying had significantly more suicidal ideations, planning, and attempts’. Social media is a place for bullies to thrive, as it allows for ‘ganging up’, turning people against others, and sharing inappropriate media or people’s secrets – and it is easy, because it is remote and possible from any digital device.
When nations fall With the high number of users of social media in society it’s an obvious tool for political influence and power – but it’s a power that can be
wielded by the people themselves to change political direction. It is widely accepted that the so-called Arab Spring was largely organised through social media; in 2011 it was a key instrument in rallying Egyptian citizens to action and toppling the dictatorship via a forced resignation. It also led to the fall of the Tunisian Government and sparked anti-government movements in Algeria, Syria, Morocco and Yemen. Social media is recognised by governments around the world as the media of the people, meaning it is hard to control. Precisely for this reason, it is monitored intensely or banned in some existing dictatorships. However, it has played a role in Western society too, in galvanising populations to vote in certain ways. Take recent upheavals such as the Brexit vote in the UK and the election of Donald Trump as US President. Social media is said to have played a significant part in both these outcomes, through statements that were not always based on facts and which elicited emotional responses from audiences to steer their voting decisions. Trump’s use of Twitter as an aggressive tool to demean opposition and stoke the fires of his supporters is well known.
More sinister than a public rant on Twitter is the suggestion that botnets were used to influence people in both the referendum to leave the EU and the US Presidential Election. Botnets are networks of automated accounts that can be programmed to manipulate public opinion on politics and current affairs. Artificial intelligence and algorithms can build profiles of people, target them and harvest data that can be used to influence individuals with specific types of advert. Bots, according to research by University College London, can create tweets around topics and make it look like a community is agreeing on a subject, when it is in fact a fake, sponsored campaign. According to findings from the organisation Freedom House, ‘online manipulation and disinformation tactics played an important role in the elections of 18 countries over the past year, including the United States’. This kind of deception was said to be rife in the recent Presidential election in America. After Donald Trump became President, Facebook and Twitter declared they had identified hundreds of fake accounts and thousands of ads run in secret by the Russian State – pushing out messages promoting Trump, false statements and posts designed to intensify division.
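The kind of coordinated campaign described here leaves a detectable fingerprint: many accounts posting near-identical text. The following is a minimal, hypothetical sketch of flagging such clusters (it is not the UCL researchers' method, and the account names and threshold are invented for illustration):

```python
# Illustrative sketch: flag groups of accounts posting near-duplicate text,
# one simple signal of a coordinated, sponsored "fake community" campaign.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """True if two posts are near-duplicates (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def coordinated_groups(posts, min_accounts: int = 3):
    """Cluster (account, text) pairs by near-duplicate text.

    Returns only clusters backed by several distinct accounts - the kind
    of pattern that can make one talking point look like organic agreement.
    """
    groups = []  # each group: {"text": reference text, "accounts": set of handles}
    for account, text in posts:
        for g in groups:
            if similar(text, g["text"]):
                g["accounts"].add(account)
                break
        else:
            groups.append({"text": text, "accounts": {account}})
    return [g for g in groups if len(g["accounts"]) >= min_accounts]

posts = [
    ("@botA", "Candidate X will fix the economy, vote X!"),
    ("@botB", "Candidate X will fix the economy, vote X!!"),
    ("@botC", "candidate x will fix the economy, vote x!"),
    ("@alice", "Lovely weather in Brussels today."),
]
flagged = coordinated_groups(posts)
print(len(flagged))                      # one suspicious cluster
print(sorted(flagged[0]["accounts"]))    # ['@botA', '@botB', '@botC']
```

Real detection systems combine many more signals (posting times, account age, follower graphs), but even this toy similarity check shows why identical messaging across accounts is so easy to spot once platforms look for it.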
The Bubble This leads to one of the most thought-provoking challenges of social media, one that we are party to creating: the idea that we become trapped in a social bubble where our beliefs and thoughts are reflected and fed back to us without being challenged. We attract the same kind of people, with the same kind of views, and feed off the same information – creating a little ecosystem of only the views we want to see. Curated content will
feed off and amplify these views and bolster beliefs and communities that hold these beliefs, free from counter arguments or balance. Social media has the potential to undermine democracy and more importantly, the truth itself. As much as we rely on it as a way to connect to others and our world, there is much to be wary of when you decide to be guided and engaged through a platform that tells you what it thinks you should hear and who it thinks you should listen to.
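The mechanics of that bubble can be sketched in a few lines: a hypothetical feed ranker that scores posts purely by overlap with a user's past likes will, by construction, push belief-confirming content to the top. The scoring rule and topic labels below are illustrative only, not any platform's actual algorithm:

```python
# A deliberately naive feed ranker, illustrating the "filter bubble":
# scoring posts only by similarity to past likes guarantees the user
# keeps seeing their own views reflected back at them.
def score(post_topics: set, liked_topics: set) -> float:
    """Jaccard similarity between a post's topics and the user's liked topics."""
    if not post_topics or not liked_topics:
        return 0.0
    return len(post_topics & liked_topics) / len(post_topics | liked_topics)

def rank_feed(posts, liked_topics):
    """Order posts so the most belief-confirming content comes first."""
    return sorted(posts, key=lambda p: score(p["topics"], liked_topics), reverse=True)

liked = {"brexit", "anti-eu"}
posts = [
    {"id": 1, "topics": {"brexit", "anti-eu"}},    # fully confirming
    {"id": 2, "topics": {"pro-eu", "economics"}},  # challenging
    {"id": 3, "topics": {"brexit", "economics"}},  # partly confirming
]
feed = rank_feed(posts, liked)
print([p["id"] for p in feed])  # [1, 3, 2] - challenging content sinks to the bottom
```

Nothing in this ranker is malicious; the bubble emerges purely from optimising for engagement with what the user already agrees with, which is exactly the dynamic the article describes.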
Understanding the impact of mass migration Tens of millions of people crossed the Atlantic during the age of mass migration, which had a profound social and economic impact on both Europe and America, as well as the migrants themselves. Researchers are combining different sources of data to build a deeper picture of the period, as Professor Imran Rasul explains The period between around 1850 and
the end of the First World War was marked by mass migration between Europe and the United States, as tens of millions of people crossed the Atlantic to start new lives. This had a profound impact on wider society in both Europe and America, a topic that forms the primary focus of the Migration project. “The project as a whole is about trying to understand the impacts of this mass migration on the migrants themselves, on the receiving economy and on the sending economy, in terms of both economic and social outcomes,” outlines Professor Imran Rasul, the project’s Principal Investigator. Researchers are using administrative records collected from Ellis Island in New York, the main point of entry for migrants to the US. “The records provide information on migrants coming into the US between and . In some years we find that more than a million migrants came into the US, predominantly from Europe. We used that data in a number of different projects,” explains Professor Rasul.
MIGRATION The Economics of Mass Migration: Theory and Evidence Project Coordinator, Professor Imran Rasul Centre for Research and Analysis of Migration Department of Economics, Drayton House, 30 Gordon Street, London WC1H 0AX T: +44 20 7679 5853 E: email@example.com W: https://www.imranrasul.com/
Administrative records A key part of the project centered around looking at how many of the migrants decided to stay in the US, combining the administrative records with census data to build a more complete picture, while researchers have also investigated the impact of changing rules around migration. While in the early part of the period anybody who wanted to travel to the US was able to settle permanently in the country, over time stricter rules came in. “For example, the US introduced a literacy test around the time of the First World War,” says Professor Rasul. This was introduced partly in response to concern about the social and economic impact of low-skilled migrants, while other measures were also introduced. “The US also increased the fees payable at the point of entry. Later in our sample, in the s, a number of pieces of legislation were passed, known as quota acts,” continues Professor Rasul. “These regulated the number of migrants coming from a particular country.” This effectively favoured migrants from countries which had already seen an outflow of people to the US in the early part of the sample period, in particular Germany, Britain, Scandinavia and Ireland. Later on more came from southern Europe, then towards the end of the period there was a higher level of migration from central and eastern Europe, shifts which Professor Rasul says were a factor in later legislative changes. “It was partly in response to these changes that the US introduced legislation to try and stem the flow of migrants. Some of these restrictions did have an impact in terms of changing the number of migrants entering the US and their country of origin,” he outlines. Professor Rasul and his colleagues are also investigating how US policy-makers responded to migrants once they were in the country. “There was always a concern that new migrants would somehow be difficult for the US economy and society to absorb,” he outlines. The introduction of compulsory schooling laws in some states, at a time when there were already high levels of voluntary school enrolment, is a major topic of interest. The intention behind this legislation was unlikely to be to target American-born children, given that voluntary enrolment was already high, so Professor Rasul and his colleagues are exploring a different hypothesis. “Were compulsory schooling laws introduced in order to target incoming migrants? To expose them to American values in American schools and help integrate migrants into American society?” he asks. This would be a means of instilling American values in children and aiding in cultural integration. “We are trying to understand whether the passing of compulsory schooling laws in a given state is related to the country of origin of migrants, and whether that country already had compulsory schooling laws in place or not,” continues Professor Rasul.
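Combining arrival records with census data, as the project does, is at its core a record-linkage problem: deciding which rows in two datasets refer to the same person. The sketch below is a heavily simplified, hypothetical illustration of the idea; the field names and matching rule are invented, and real historical linkage uses far more robust techniques (name standardisation, phonetic codes, probabilistic matching):

```python
# Hypothetical sketch of record linkage between an arrival manifest and a
# census: match on surname plus approximate birth year, since ages in
# historical records are often off by a year or two.
def link(arrivals, census, year_tolerance: int = 2):
    """Return (arrival_id, census_id) pairs for plausible matches."""
    matches = []
    for a in arrivals:
        for c in census:
            if (a["surname"].lower() == c["surname"].lower()
                    and abs(a["birth_year"] - c["birth_year"]) <= year_tolerance):
                matches.append((a["id"], c["id"]))
                break  # keep only the first plausible match per arrival
    return matches

arrivals = [{"id": "E1", "surname": "Rossi", "birth_year": 1881},
            {"id": "E2", "surname": "Schmidt", "birth_year": 1875}]
census = [{"id": "C9", "surname": "ROSSI", "birth_year": 1882},
          {"id": "C4", "surname": "Murphy", "birth_year": 1875}]
print(link(arrivals, census))  # [('E1', 'C9')] - E2 finds no census match,
                               # suggesting that migrant may have returned home
```

An arrival that never appears in a later census is precisely the signal researchers use when estimating how many migrants stayed versus returned to Europe.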
Imran Rasul is Professor of Economics at University College London, co-director of the Centre for the Microeconomic Analysis of Public Policy at the Institute for Fiscal Studies, and research co-director of the Entrepreneurship Research Group of the International Growth Centre. His research interests include labour, development and public economics.
Intelligent manufacturing for factories of the future Many companies today offer a combination of products and services to their customers, a trend which opens up new business opportunities. The Manutelligence project is developing a collaborative engineering platform, designed to help optimise products and improve overall efficiency, as Maurizio Petrucciani and Sergio Terzi explain Many manufacturers today
are focused not just on developing products, but also on providing additional services which will help the customer gain further value throughout the lifecycle of the product. This emerging trend towards combining products and services opens up new business opportunities, an area which forms the primary research focus for the Manutelligence project, an EC-backed initiative bringing together academic and commercial partners. “The main goal of the project is to develop a platform to support this new kind of business model around the overall design and manufacturing of products and services,” says Maurizio Petrucciani of Dassault Systèmes, one of the companies in the project consortium. Some companies in the consortium have already changed their business model towards a mixed mode combining services and products; the project now aims to help improve efficiency in the design process. “We aim to provide better, software-based tools to these industrial companies, to help them design products and services in an integrated way,” outlines Sergio Terzi of Politecnico di Milano, the project’s Scientific Counselor.
Use cases The project is working on four different use cases, in the automotive, ship-building and construction sectors, as well as in a laboratory, all of which could potentially benefit from stronger cross-disciplinary collaboration. In the automotive sector, researchers are working with Ferrari to help the company gather and use information about car usage, which can then be used to inform product design. “In this case, we developed and implemented a unique software platform, which is able to support the specific case of Ferrari,” says Petrucciani. This platform is built on existing solutions, so researchers are not reinventing the wheel, but rather adding new elements to established foundations, which could then be useful for developers in specific scenarios. “We are starting from market-validated solutions. That means this project is quite close to wider commercial relevance, in terms of exploiting the outcomes of this project,” continues Petrucciani. “We are developing this platform, and then our industrial partners test it, to give us feedback on the practical usability of the solutions. This is the way we are working across each of the different use cases.” Many companies across different areas of industry already use data gathered during development and testing to optimise the design of their products. The novel feature of the Manutelligence architecture is that these types of systems are integrated with Internet of Things (IoT) enabled systems, giving designers access to a wealth of information about the usage of the product, throughout the entire lifecycle. “This architecture is able to capture information about the practical usage of the product. For example with cars, they capture relevant information, and in the case of ship-building, they can capture information about any issue that might happen on-board during operational usage,” explains Petrucciani. Designers can then search for this information, which may be important in terms of improving the product. “The point is to provide information about the practical usage of a product in an integrated system, that can then be used as an input to introduce changes in the design and manufacture,” says Petrucciani. This ability to capture and rapidly transmit in-depth information could hold real importance in the ship-building industry for example, particularly given the complexity of a ship’s structure and the fact that workers at several different sites may be involved in development and construction. The engineering department may not be located on the same site as the shipyard itself, so Petrucciani says it’s important to share information efficiently. “The information is transmitted automatically. This means that there is no need to have operational-based data exchange,” he outlines. The architecture is designed to be used throughout the product lifecycle, from the early design stages, right through to the eventual usage. “We can capture information throughout the lifecycle of the product, and use that to improve the design,” says Petrucciani. “Clearly this depends on the type of product you are producing though, as developing white goods is very different to building a ship.” The information itself could come from a wide range of sources during the entire course of the product lifecycle, including not
only those involved in development, but also the eventual users. One area of debate during the early stages of the project centered around identifying the key stakeholders in each specific use case, who could provide information. “We found that a large number of people can be involved,” outlines Petrucciani. In the case of shipbuilding for example, designers may want to get not only the opinions of specialist engineers on design issues, but also the views of passengers on the ideal location for a restaurant or theatre on board. “We are looking at the development of a specific type of tool which can capture this kind of information,” continues Petrucciani. “There would be quite large volumes of information, so capturing it is a major challenge, and we are addressing several important points around that. Then, with this huge amount of information, there’s also the challenge of how to manage it effectively.”
Product lifecycle The volume of information available and the stage of the product lifecycle at which it is generated varies across the different use cases, while the partners in the consortium also have different priorities. For Ferrari, the services they offer are not closely integrated with the product lifecycle, which is reflected in their priorities in terms of the engineering platform. “The big interest for Ferrari in the engineering platform was how it could help them improve the design phase of the car,” explains Petrucciani. This also holds important implications in terms of the overall costs of product development, which is always a major priority for commercial companies. “A physical prototype is a huge cost for Ferrari, so reducing the need for physical testing can lead to significant financial benefits,” stresses Petrucciani. “Another major advantage of the engineering platform is the fact that you’re able to calibrate different models.”
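The feedback loop Petrucciani describes, with in-service usage data flowing back to designers, can be illustrated with a minimal sketch. The event schema below is an assumption made for illustration, not the Manutelligence data model:

```python
# Hedged sketch of usage-data feedback: IoT events from products in the
# field, aggregated per component, become a priority list for designers.
from collections import Counter

def issue_priorities(events):
    """Count reported issues per component across the whole fleet."""
    return Counter(e["component"] for e in events if e["type"] == "issue")

events = [  # hypothetical telemetry from ships and cars in operation
    {"asset": "ship-01", "type": "issue",  "component": "hvac"},
    {"asset": "ship-01", "type": "status", "component": "engine"},
    {"asset": "ship-02", "type": "issue",  "component": "hvac"},
    {"asset": "car-07",  "type": "issue",  "component": "brakes"},
]
for component, count in issue_priorities(events).most_common():
    print(component, count)  # hvac 2, then brakes 1 - hvac tops the redesign list
```

The value of the integrated platform is that this aggregation happens automatically and the result is searchable from the design environment, rather than depending on ad-hoc data exchange between the shipyard and the engineering department.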
MANUTELLIGENCE Product Service Design and Manufacturing Intelligence Engineering Platform Project Objectives
04-05 May, Consortium Meeting Turku
This allows designers to identify any issues and possible improvements at an earlier stage, which in the long run leads to cost savings. Petrucciani and his colleagues in the project are now looking to assess the capabilities of the platform; it has been tested by Ferrari, at the Fiorano circuit in Northern Italy. “We tested its ability to capture the IoT-derived information – the information produced during the time the car was running on the circuit – as well as its ability to input this data to the experience platform,” he says. Tests have also been held relating to the other use cases, including construction, as in the case of Lindbäcks, a Swedish SME producing modular wooden houses. “We looked at the real data captured by a sensor which was applied in an apartment. The same information was also sent to the engineering department, so that they could analyse it,” outlines Petrucciani. “There is also the case of FundacioCIM, which is a laboratory based in Barcelona. This is a different use case, with more of an environmental focus, where we can experiment and look to develop new ideas.” The project’s research also holds important implications in terms of sustainability, which is an increasingly
prominent issue in the manufacturing sector. One part of the project centered around developing software to evaluate the environmental impact of a product. “We did a demonstration showing that you can evaluate the impact of a product in the early stages of design. You can then choose materials for production accordingly, to help minimise the environmental impact,” says Petrucciani. This is a pressing issue across large parts of the commercial sector, so while there are four use cases within the project, Terzi believes their research holds wider relevance beyond these specific examples. “What we have done can theoretically be used in other contexts,” he outlines. “Researchers, IT developers and industrial manufacturing companies have come together in the project. We are creating prototype demonstrators, which we will then bring to a wider audience.” This includes not only major companies like Ferrari and the shipbuilders Meyer Turku, but also smaller enterprises like Lindbäcks. “A consulting company may want to look into the usage of this platform, in particular extending it to small and medium-sized enterprises that maybe cannot afford to buy the entire platform,” he says. Manutelligence platform architecture
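The early-design environmental screening described here can be sketched as a simple material comparison. The emission factors below are placeholders for illustration only, not real life-cycle data or the project's software:

```python
# Illustrative early-design screening: estimate a part's CO2 footprint per
# candidate material and pick the lowest. Factors are invented placeholders.
CO2_PER_KG = {  # kg CO2e per kg of material (hypothetical values)
    "steel": 1.9,
    "aluminium": 8.2,
    "recycled_aluminium": 0.6,
}

def footprint(material: str, mass_kg: float) -> float:
    """Rough cradle-to-gate CO2 estimate (kg CO2e) for one part."""
    return CO2_PER_KG[material] * mass_kg

def best_material(mass_kg: float) -> str:
    """Candidate material with the smallest estimated footprint."""
    return min(CO2_PER_KG, key=lambda m: footprint(m, mass_kg))

print(best_material(12.0))                 # recycled_aluminium
print(round(footprint("steel", 12.0), 1))  # 22.8
```

Even this crude arithmetic captures the point Petrucciani makes: when the comparison is available in the early design stage, the material choice can still be changed cheaply, long before tooling and suppliers are locked in.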
• Creating a cross-disciplinary collaborative management environment for Product-Service engineering, able to increase the efficiency of the design process, with a potential for wide market adoption. • Completely integrating Product Lifecycle Management and Service Lifecycle Management, using methodologies and tools to support cross development. • Involving all the key stakeholders in the value chain, including customers. • Developing a platform for Product-Service Design and Manufacturing Intelligence. • Extending and improving the use of Simulation, and optimising it through the use of field data. • Improving precise and quick measurement and simulation of cost and sustainability issues, through Life Cycle Cost (LCC), Life Cycle Analysis (LCA) and CO2 footprint.
Funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 636951.
12 partners from six European Countries.
Project Coordinator, Maurizio Petrucciani Dassault Systemes Italia Srl Via dell’ Innovazione 3 20126 Milano Bicocca, Italy T: +39 02 3343061 E: firstname.lastname@example.org W: http://www.manutelligence.eu Maurizio Petrucciani
Maurizio Petrucciani is Senior Project Manager at Dassault Systèmes Italia Srl and is responsible for the management of the MANUTELLIGENCE project. He received his PhD in Aeronautical Engineering at the University of Pisa and manages complex Product Lifecycle Management (PLM) projects in different business areas (aerospace, consumer goods, apparel, etc.) to improve product development processes by reducing time to market, better addressing target costing, enhancing collaboration, and reducing inefficiency costs. Sergio Terzi is Associate Professor of Product Lifecycle Management at Politecnico di Milano, Department of Economics, Management and Industrial Engineering. He received his degree in Industrial Engineering in 1999, his Master in Business Administration in 2001 and his PhD in 2005 (on the topic of product lifecycle management). He is a member of the Editorial Board of the International Journal on Product Lifecycle Management and a member of the IFIP WG 5.1 and 5.7. He is also one of the founders of the International Conference on Product Lifecycle Management.
A power grid that’s fit for the future The level at which energy prosumers both generate and consume energy is inherently volatile, which can affect underlying grid infrastructure. We spoke to Markus Taumberger about the Flex4Grid project’s work in developing a framework to help flexibly manage both energy demand and generation, making the power grid fit for the future The power grid
has historically been quite centralised, yet the emergence of more distributed power sources is leading to significant change, with more energy from renewable sources entering the grid. This leads to new challenges in terms of the management and operation of the grid, an issue that lies at the core of the Flex4Grid project. “We are trying to help in the transition from a centralised to a de-centralised power grid. Our role is to support flexibility management of end-users,” says Markus Taumberger, the coordinator of the project. The emerging concept of ‘prosumers’, who not only consume energy but also generate it themselves, is central to this work. “More and more people today are generating solar power from panels on their roofs which are connected to the grid for example, and this is changing how the grid works. So electricity is not only flowing from big generators to consumers, it’s also flowing in the other direction,” explains Taumberger.
Energy consumption and production A high degree of flexibility is required in the grid to accommodate these different sources of energy, which is a prime motivation behind the project’s work. Part of the wider aim in the project is to help localise energy consumption and production, which Taumberger says is quite a significant problem in some parts of Europe. “Some countries have a big imbalance between energy production and consumption,” he says. A number of components are being developed in the project to help address this, moving towards more flexible management of both energy demand and energy generation. “One component would be at the level of end-users, enabling monitoring and control for end-users in their homes. We have a device that makes it possible to connect appliances to our system via the end-users’ phone,” outlines Taumberger. “We also have a data cloud service, in which we are collecting data, aggregating it and anonymising it.” The project’s work also encompasses the development of interfaces for third parties, which opens up the possibility of the data being used to gain insights into consumer behaviour, potentially opening up a new income stream for utilities. However, while this holds important commercial implications, data management is not typically a core concern for utility companies. “It may be the case that a third party will actually run the system that we are developing,” explains Taumberger. The project’s work holds more immediate relevance for Distribution System Operators; another component under development is designed to enable utilities to manage their customers more effectively in terms of load balancing in the grid, which Taumberger says is always a priority. “The main interest for utilities is in ensuring grid stability,” he explains.
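The aggregation and anonymisation step Taumberger describes can be sketched in a few lines of Python. This is purely illustrative, not the Flex4Grid implementation; the reading format and the minimum group size are assumptions, the idea being that per-hour averages are only released when enough households contribute that no individual can be singled out.

```python
from collections import defaultdict

# Toy smart-meter readings: (household_id, hour, kWh consumed)
readings = [
    ("h1", 18, 1.2), ("h2", 18, 0.9), ("h3", 18, 1.5),
    ("h1", 19, 0.8), ("h2", 19, 1.1), ("h3", 19, 1.0),
]

def aggregate_anonymised(readings, min_group=3):
    """Return average consumption per hour, but only for hours where at
    least `min_group` distinct households contributed, so that no single
    household's usage can be inferred from the published figure."""
    by_hour = defaultdict(list)
    households = defaultdict(set)
    for hid, hour, kwh in readings:
        by_hour[hour].append(kwh)
        households[hour].add(hid)
    return {
        hour: round(sum(vals) / len(vals), 3)
        for hour, vals in by_hour.items()
        if len(households[hour]) >= min_group
    }

print(aggregate_anonymised(readings))  # {18: 1.2, 19: 0.967}
```

A utility or third party would then see only the aggregated hourly figures, never the individual household traces.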
This is an increasingly complex task, as the amount of power generated from renewable sources like photo-voltaics and wind is dependent to a large degree on weather conditions. Levels of energy demand are also subject to fluctuations, making it difficult for grid operators to plan and adapt, so innovative methods are required to maintain grid stability. “If a utility wants to maintain the stability of the power grid, they need to find ways of influencing the behaviour of prosumers,” outlines Taumberger. This could mean offering incentives to prosumers to shift energy loads to certain times of the day; Taumberger and his colleagues in the project are developing an open data service framework and certain tools to enable closer interaction between utilities and prosumers. “Utilities would need to create incentives for end-users, and then our tools would be used to automate this,” he explains. The automation of these incentives is an important aspect of the project’s work, as currently it is quite difficult to advise endusers of changing conditions in the grid and to encourage them to adapt their behaviour accordingly. An automated system by contrast will enable closer interaction, while Taumberger says there is also the possibility of providing third party services on top of existing data, so that energy demand and generation can be managed
more effectively. “We collect data from end-users, from smart meters for example. The data we collect could be analysed, and then aggregated results could be provided to the utility operator,” he says. There is room to include data from several different sources. “We use weather information, and we also have several years’ worth of data on individual households’ power consumption,” continues Taumberger. “We use artificial intelligence to find patterns in the data and to improve the predictive power of our models.” This holds important operational implications, as consumption peaks are a major headache
for utilities, putting a lot of strain on the underlying infrastructure. A more accurate method of predicting consumption patterns could allow utilities to try and shift usage peaks so that the grid could be used more effectively, while also minimising the need for costly investments in infrastructure. “This is very interesting for utilities, as they are able to reduce the peak load in their network and to accommodate more of these renewable energy sources, without needing to renew the infrastructure,” says Taumberger. Energy is often less expensive outside peak times, which acts as a financial incentive to end-users; Taumberger says the project’s research can
Flex4Grid System Architecture
FLEX4GRID Prosumer Flexibility Services for Smart Grid Management Project Objectives
Flex4Grid aims to create an open data and service framework that enables managing the flexibility of prosumer demand and generation, utilising cloud computing for power grid management and opening Distribution System Operator infrastructure for aggregator services. The Flex4Grid system will include a) a data cloud service with security and privacy mechanisms for data exchange and service management, b) prosumer generation and demand flexibility, and c) a viable business model to accelerate deployment. The major innovations are a) opening the market for new entrants through cloud data and energy management services, b) data management and analytics services for Smart Grids, and c) the use of co-creation to bring end users into the value creation process. System validation will be carried out in real-world pilots in three live electricity networks, with scenarios ranging from deployment during smart meter rollout and retrofitting to large-scale operation and federated demonstration of multi-site pilots.
Total Cost: 3.2 M€, EC Contribution: 2.7 M€
• VTT Technical Research Centre of Finland Ltd (Finland) • SAE - Automation, s.r.o. (Slovakia) • Smart Com d.o.o. (Slovenia) • Institut “Jožef Stefan” (Slovenia) • Fraunhofer Institute for Applied Information Technology (Germany) • Elektro Celje d.d. (Slovenia) • Bocholter Energie und Wasserversorgung GmbH (Germany)
Project Coordinator. Markus Taumberger (VTT) VTT Technical Research Centre of Finland Ltd Kaitoväylä 1 90571 Oulu FINLAND T: +358 50 465 2474 E: email@example.com W: www.flex4grid.eu
help heighten awareness of price differentials. “We have methods to pro-actively involve the user, and to ensure that they are aware of peak times and the availability of different tariffs,” he outlines.
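The pattern-finding that Taumberger mentions can be illustrated with a deliberately simple sketch in Python. The project’s models use artificial intelligence and weather data; the toy below merely predicts the peak hour as the hour with the highest average historical load, and all the names and figures are invented for illustration.

```python
def predict_peak_hour(history):
    """Predict the next day's peak hour as the hour with the highest
    average consumption across the historical daily load profiles.
    `history` is a list of equal-length daily profiles (kWh per hour)."""
    hours = len(history[0])
    avg = [sum(day[h] for day in history) / len(history) for h in range(hours)]
    return max(range(hours), key=lambda h: avg[h])

# Three days of a toy 4-slot profile (morning, noon, evening, night)
history = [
    [0.5, 0.8, 2.1, 0.6],
    [0.4, 0.9, 1.9, 0.7],
    [0.6, 0.7, 2.3, 0.5],
]
print(predict_peak_hour(history))  # 2 (the "evening" slot)
```

A real predictor would of course fold in weather forecasts and far richer models, but the operational goal is the same: knowing in advance when the peak will arrive, so the utility can act before it does.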
Pilot schemes The overall system will be validated in three pilots, of which two are in Germany and one in Slovenia. The specific circumstances around these pilots vary, with different types of data available. “Smart meters have already been rolled out in Slovenia, so they have quite an advanced grid infrastructure. We can use a lot of data gathered from smart meters,” explains Taumberger. The situation in Germany is different, where the introduction of smart meters has been delayed. “In Germany we don’t have smart meters, so we have to rely more on data that we’re collecting ourselves,” continues Taumberger. “The data is less detailed – for example we don’t have data on household consumption.”
We collect data from end-users, from smart meters for example. The data we collect could be analysed, and then aggregated results could be provided to the utility operator
Researchers are now looking to assess the performance of the system in these different pilots. The Flex4Grid system will be used to create a so-called ‘peak event’, then researchers will evaluate the response of end-users in terms of the project’s wider objectives. “To what extent are we able to influence the behaviour of end-users? Are we able to shift consumption peaks? And are we able to do that at the right time?” explains Taumberger. This work is ongoing, and the results so far look very promising. “We have the control group and a pilot group, and we can see that the pilot group is really shifting their consumption peaks,” outlines Taumberger. “The pilot studies so far have been undertaken with a relatively small sample size, but we can extrapolate outwards from this to assess its impact on a larger group of participants.” This brings clear benefits to consumers, particularly in the Slovenian case, where end-users are currently subject to a critical peak tariff on energy during peak times. Energy is significantly more expensive during those periods; shifting consumption to other times helps end-users save money. “End-users benefit financially from reducing consumption during peak times. They are not paying this high network fee during that period,” points out Taumberger. It’s not just Slovenian and German end-users who want to reduce their energy bills of course, and with the results of the pilot studies positive so far, Taumberger and his colleagues are looking towards the wider exploitation of their results. “One possibility is that we will continue providing this service beyond the end of the project,” he says. “We now have some more time to update the project. We’re also actively discussing in the consortium the possibility of continuing with our research.”
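The comparison between the control group and the pilot group can be illustrated with a toy calculation, assuming (purely for illustration) four-slot daily load profiles and a single declared peak slot:

```python
def peak_share(profile, peak_hours):
    """Fraction of a household's daily consumption that falls within
    the declared peak hours."""
    total = sum(profile)
    return sum(profile[h] for h in peak_hours) / total

# Hypothetical hourly profiles during a declared 'peak event' at slot 2
control = [1.0, 1.0, 3.0, 1.0]   # no incentive: pronounced evening peak
pilot   = [1.5, 1.5, 1.5, 1.5]   # incentive applied: load shifted away

shift = peak_share(control, [2]) - peak_share(pilot, [2])
print(f"Peak-time share reduced by {shift:.0%}")  # prints "Peak-time share reduced by 25%"
```

Comparing the peak-time share of the two groups over many peak events is one simple way to quantify whether the incentives are actually moving consumption, and by how much.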
Markus Taumberger is a Project Director at VTT. He leads a team of 23 in the area of Computing Platforms for Communications Systems, and alongside his work on the Flex4Grid Project, he also holds budgetary responsibility for several other initiatives.
Powering the future of renewable energy Many people around the world still live without access to electricity, while there are also challenges around meeting demand in developed countries, leading to a concerted focus on the development of renewable energy systems. The Reelcoop project aims to both develop new generation systems and enhance research cooperation around the Mediterranean, as Professor Armando Oliveira explains The power systems of the future are likely to include a greater proportion of energy from renewable sources, a transformation process which is already well in train. In Portugal for instance, around 60 percent of the electricity supply already comes from renewable sources. Now Professor Armando Oliveira and his colleagues in the Reelcoop project aim to develop new renewable electricity generation systems. “In our project, we focused on solar photo-voltaics (PV), solar thermal (ST), concentrated solar power (CSP), and biomass,” he outlines. Three different prototype systems were developed in the project, aiming to provide a reliable supply of electricity, while also taking account of concerns around sustainability. “In two of the prototypes we are combining solar (ST or CSP) with biomass. The problem with solar of course is that the sun only shines during the day, and if you want a 24-hour system, you need another source,” explains Professor Oliveira. “For the biomass, we use olive oil production waste, which is abundant in many countries around the Mediterranean, burning it in one prototype, and gasifying it in another.” A third prototype is based on solar PV with a ventilated facade, with an electrical output of 6 kW. While the prototypes are being produced at relatively small scales, Professor
View of P3 solar field, with parabolic trough collectors
Oliveira says the technologies and concepts being developed in the project are more widely applicable. “We could potentially scale up the prototypes, using the same principles. Indeed, a plant in Spain is using this idea of combining CSP with biomass,” he stresses. The next major step in the project is to transfer this knowledge and expertise around the Mediterranean, including not only
needs. While the conventional electricity supply model is highly centralised, Professor Oliveira believes this is set to change over the coming years. “I think that in future there will be more distributed generation instead of centralised generation,” he predicts. This represents a means of more closely matching supply to demand, and potentially helping extend supply to more of the 1.1 billion people
One of the prototypes is a concentrated solar power system, and we are combining it with biomass. The problem with solar of course is that the sun only shines during the day, and if you want a 24-hour system, you need another source European nations, but also countries in the Middle East and North Africa. “We have partners across the region, including from Morocco, Algeria, Tunisia and Turkey. We are looking to install and test similar systems in these countries, as well as in southern Europe and other locations,” says Professor Oliveira.
De-centralised generation The wider aim in the project is to encourage closer collaboration between research partners around the Mediterranean, as countries in the region update their electricity generation systems in line with modern
across the world who currently live without access to electricity. “A decentralised approach means distributing the generation, so generating electricity in the places where it is needed, such as the places where people live, the companies where they work, and so on. This means installing smaller power systems,” explains Professor Oliveira.
REELCOOP REnewable ELectricity COOPeration Professor Armando Oliveira (Project Coordinator) University of Porto - Dept of Mechanical Engineering Portugal T: +351 22 204 1768 E: firstname.lastname@example.org W: http://www.reelcoop.com
Armando Oliveira is Professor of Mechanical Engineering at the University of Porto, where he is Director of the Centre for Renewable Energy Research. He has participated in 19 European research and development projects related to the development of new and sustainable energy systems, managing a total budget in excess of 11 million euros. He is Executive Editor of the International Journal of Low Carbon Technologies (Oxford University Press) and a Member of the EPSRC Peer Review College (UK).
Smarter Grids for Renewable Energy Renewable energy sources account for an ever-greater proportion of our overall energy supply, and are set to rise further in future in line with EU strategy. New real-time control strategies will be required to help the EU grid adapt to changing circumstances, one of the challenges that Luciano Martini and his colleagues in the ELECTRA project are working to address The development and
deployment of renewable sources of energy is widely recognised as a major priority, as European nations look to move towards a more sustainable model of energy provision. Renewable sources of energy have already reached high grid penetration levels in some countries, notably in Denmark, Germany, Italy and Spain, and other European nations are set to follow suit, bringing new challenges in terms of the operation of the grid. “First of all, we need to take measures to enable this higher penetration. Secondly, we need to take the measures necessary to make the grid fit for receiving such a large amount of power from renewable sources, dealing with the technical issues resulting from the fact that this generation is not fully predictable. For example, levels of energy from wind and solar sources can change rapidly, however the grid should always be in balance, between generation and load levels,” explains Luciano Martini, the Principal Investigator of the ELECTRA project. “This requires potentially a new grid architecture, new control functions and new algorithms.”
ELECTRA project This forms a core part of the agenda for the ELECTRA project, an integrated research programme on smart grids bringing together partners from across Europe to both pursue further technical investigation, and to help coordinate EU research more effectively. The grid itself was planned several decades ago, at a point when the industrial and domestic load was relatively low, but the situation has of course since evolved. “Today there are hundreds of millions of generating units of different sizes, which are connected to the grid at different
voltages: low, medium and high,” outlines Martini. The way these units are controlled needs to change, in line with the changing nature of the grid. “We now have sophisticated sensors available, and the possibility for real-time monitoring, which is what we will use to balance generation and load at a local level,” continues Martini. “We have to be able to adapt the generation side, the demand side, and distributed energy storage to better manage the operation and resilience of the grid. What we propose is not tackling the grid with a
unique centralised control, but to make information and intelligent decisions available throughout the grid, closer to where increased amounts of flexibility are appearing.” A new grid control architecture has been proposed within the project, called “Web-of-Cells” (WoC), which is designed to enable the local control of the grid. The grid is effectively sub-divided into smaller cells, each connected to neighbouring cells, which Martini says can then provide cooperative support in terms of relevant grid services. “We think this collaboration between neighbouring cells is important, as it enhances the stability of the grid and means any issues can be addressed faster,” he explains. The concept of enabling local control in grids is not entirely new, as a great deal of attention has previously centred on the development of micro-grids, whereby a local grid can be operated either islanded or interconnected with the main grid. While micro-grids can help improve resilience, Martini says they are not the end of the story in terms of the stability and efficient management of the power system. “An additional feature of the WoC concept is in the cells’ ability to cooperatively support each other in the delivery of frequency and voltage control,” he explains.
We have to be able to adapt the generation side, the demand side, and distributed energy storage to better manage the operation and resilience of the grid. What we propose is not tackling the grid with a unique centralised control, but to make information and intelligent decisions available throughout the grid, closer to where increased amounts of flexibility are appearing.
The solutions being developed within the project are intended to be applied widely, yet Martini and his colleagues are mindful that the specific circumstances and technical challenges may vary in different locations. Over the course of the project, meetings and roundtables have been held with key experts from around the world to share ideas, and some clear principles have emerged. “We agreed that it is really important to push forward the idea of decentralised control,” stresses Martini. These changes to the grid could open up new commercial opportunities. “New commercial arrangements between transmission system operators, distribution system operators and
other new entrant energy actors, such as aggregators, are becoming more and more important to secure grid operation,” outlines Martini. “And we believe that what we are proposing will open up further new markets, for example by opening up the utilisation of new sources of flexibility. Moreover, the increased deployment of low-cost sensors throughout the grid will provide greater opportunities to identify critical situations developing in different parts of the grid at an earlier stage, providing greater scope for rapid intervention.” A further consideration is the behaviour of individual consumers. Cells involving lower-voltage networks will rely on industrial customers and clusters of smaller consumers playing a more active role, for example by responding to changing levels of demand and adapting their energy consumption patterns. “Individuals or organisations receiving a price signal can change their consumption plans accordingly, through, for example, their EV smart chargers responding automatically,” explains Martini. This may eventually prove to be a more cost-effective approach, which could help encourage consumers to think about their energy consumption patterns and adapt their behaviour. “This could motivate people to use energy when it’s available and to reduce their consumption when the output from local renewables is limited,” says Martini. “This is already happening to a degree. Each of us can get information about our own consumption patterns, and we can adopt what we believe is more sustainable behaviour. Consistent behaviour on this basis may limit or defer the need for further network reinforcement, and so improve the efficiency of network operations.”
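The automatic response to a price signal that Martini describes, such as an EV smart charger choosing when to draw power, can be sketched as follows. This is an illustrative toy, not an ELECTRA algorithm; the day-ahead tariff and the scheduling rule (simply picking the cheapest hours) are assumptions.

```python
def schedule_charging(prices, hours_needed):
    """Pick the cheapest hours in which to charge, given a day-ahead
    price signal (price per kWh for each hour) and the number of
    charging hours the vehicle needs. Returns the chosen hours sorted
    chronologically."""
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(cheapest[:hours_needed])

# Hypothetical day-ahead tariff: expensive in the evening peak (slots 2-3)
prices = [0.10, 0.12, 0.30, 0.28, 0.09, 0.11]
print(schedule_charging(prices, 3))  # [0, 4, 5]
```

Multiplied across many households, this simple rule is exactly the kind of automated demand shifting that lets a cell absorb renewable output when it is plentiful and back off when it is scarce.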
Coordination and support The development of smart grids is a highly active area of research, with many different European programmes dedicated to investigating innovative new solutions. Alongside the technical investigation, several work packages in ELECTRA are focused on coordination and support activities (CSA), helping to train the
next generation of researchers. “Two of the CSA activities were about using research infrastructures effectively, and developing a sustainable mobility scheme (“ELECTRA REX”), especially for researchers at an early stage of their careers,” outlines Martini. Through ELECTRA REX researchers have been exposed to different techniques and facilities, built relationships with key international collaborators, and equipped themselves with new skills to face future challenges. “We hope to have the opportunity to continue our research over the next decade or so, and this will take place in the framework of the European Energy Research Alliance (EERA),” says Martini. “All the partners in the ELECTRA consortium are key members of the EERA Joint Programme on Smart Grids, which is focussed on medium to long term research goals. We are currently in the process of approving the next iteration of our description of work, covering the coming years’ research and development activity.” The new grid architecture based upon the WoC concept and the new advanced algorithms for voltage and frequency control are all set to be included in this new description of work, with researchers collaborating to build further on earlier work. This includes collaborations at the European level, and also more widely, with Martini saying links have been established with other international initiatives. “We’ve had positive feedback about what we’re proposing from China and the US, and also from India. There is a large new programme in the US called Grid Modernisation, that involves several key national labs,” he explains. There is also the possibility of working within the framework of Mission Innovation, a global clean energy initiative involving countries from around the world, further reinforcing the importance attached to this area of research. “We have begun working together, sharing information and comparing approaches and national strategies.
The new grid architecture is one of the topics that has been identified as an important task, and one in which we would benefit from international cooperation,” says Martini.
ELECTRA IRP European Liaison on Electricity Committed Towards long-term Research Activities for Smart Grids Project Objectives
The ELECTRA Integrated Research Programme brings together the partners of the EERA Joint Programme on Smart Grids to reinforce and accelerate Europe’s medium to long term research cooperation in this area, for stable operation of the EU power system of 2030+. The high penetration of DER into the European power system requires a radically new approach to real-time grid operation. ELECTRA is active in defining the new system requirements, deploying flexibility resources that can be triggered in response to appropriate grid observables.
ELECTRA IRP receives funding from the European Union Seventh Framework Programme (FP7/2007-2013) under the grant agreement n° 609687.
RSE (Coordinator), AIT, VITO, LABORELEC, DTU, VTT, CEA, Fraunhofer IWES, CRES, ENEA, IPE, SINTEF, IEN, INESC Porto, TECNALIA, JRC, TNO, TUBITAK, University of Strathclyde, DERlab, OFFIS
Luciano Martini ELECTRA IRP Coordinator Director - T&D Technologies Dept. Ricerca sul Sistema Energetico - RSE S.p.A 54, Via R. Rubattino I-20134 Milano - Italy T: +39 02 3992 5376 E: email@example.com E: firstname.lastname@example.org W: http://www.electrairp.eu/ Luciano Martini
Luciano Martini works for RSE, a national research centre based in Milan, where he is the Director of the “Transmission and Distribution Technologies” Department. He has many years’ experience of R&D activities dealing with renewable energies, superconductivity, and smart grids. He is the Coordinator of the Smart Grids Joint Programme of the European Energy Research Alliance and of ELECTRA - the European Integrated Research Programme on smart grids. He is the vice-Chairman and Italian delegate within the Executive Committee of the IEA Technology Collaboration Programme ISGAN (International Smart Grid Action Network). Within Mission Innovation, Martini is co-leading, together with representatives from India and China, the Innovation Challenge on smart grids.
Knowledge transfer in the transport sector While millions of people use Europe’s roads, railways, airports and sea-ports on a daily basis, our transport infrastructure still needs to develop in line with modern demands. We spoke to Dr Thierry Goger of the EC Horizon 2020 FOX project, about their work in supporting research and encouraging knowledge transfer across different modes of transport A reliable and
efficient transport network is the lifeblood of the economy, providing a channel for the passage of people and products, connecting cities and countries and bringing us the goods and materials that we all rely on. Every day, thousands of tonnes of raw materials and products are transported by road, rail, air and sea across Europe, so it’s essential that our transport networks are robust, reliable and efficient. The European transport network is generally of a high standard, yet our roads and railways require regular inspection and maintenance, while there are also significant regional variations in the availability of specific modes of transport. Continued research into the construction, maintenance and inspection of transport infrastructure, as well as the efficient recycling and reuse of materials, is central to its continued evolution.
FOX project This topic lies at the core of the FOX project, an initiative bringing together partners from across Europe to collaborate and coordinate research into challenges affecting the major transport modes. The wider goal is to encourage cross-modal research into innovative techniques, and also knowledge sharing between the different modes of transport, building a network of relationships that will endure beyond the project’s funding term. The cross-modal research environment is designed to help in the development of an integrated transport infrastructure, reflecting the way we travel today. Many people start their commute in the car, for example, before taking the train and then getting a bus; while there are clear differences between these modes of transport, they also face similar challenges, such as trying to modernise infrastructure and reduce their environmental impact, all at a time of rising demand. A great deal of attention in the project has centred on coordinating research into transport infrastructure. This has been built on a foundation of detailed understanding of how companies and organisations in these four different modes of transport currently operate, the technologies they use and the challenges they face, from which FOX
Project Coordinator, Dr Thierry Goger and his colleagues aimed to identify the most effective solutions. “The project has been primarily about looking at which technical solutions, operations and regulations could play a role in the development of an integrated transport infrastructure,” he outlines.
Technical innovation While technical innovation is key to addressing many major challenges, Dr Goger says that it’s also important to share existing knowledge and identify other areas in which it could be applied. “In some cases, the technical solution already exists, or is already under development in a different mode of transport. So instead of re-inventing the wheel, why not try to transfer the technical knowledge?” he asks. There are a number of common challenges between road and rail for example, particularly around managing the network. New signalling technologies allow more trains to be run simultaneously, while road transport also faces challenges around congestion; Dr Goger says the project is also looking at other technologies. “We are working to transfer the on-time information systems which were developed for bus systems onto the rail network, and also to the water-based transport system. That’s a good example of technology transfer,” he outlines. The relevance of a technology will depend on the circumstances in which it is to be applied. The project has investigated transport infrastructure across Europe, in very different climates and economic conditions. “Norway for example has different standards, techniques and approaches to countries in southern Europe. In the majority of cases we have identified relatively generic solutions, which can then be adapted for different modes of transport,” says Dr Goger. This can then help to inform the development of a plan for the development of the transport sector, a key part of the project’s agenda. “Our main role has been to develop a roadmap, to help policy-makers and funding bodies to identify research priorities,” explains Dr Goger. With traffic
levels forecast to increase further, our ageing transport infrastructure is set to come under even greater strain, underlining the need to support research. “This project lays down the basis for future work,” continues Dr Goger. The impact of the project will be felt well beyond its funding term, as a well-established research environment, together with a network of deeply engaged technical experts, will be in place to pursue further investigation in future. One topic high on the agenda is generating renewable energy from transport infrastructure; three technologies have been identified as holding particularly rich potential, namely photovoltaics, combined renewables and regenerative braking. The cost of some of these technologies has dropped significantly over the past decade, at the same time as concern has deepened over the environmental impact of the transport sector. There have also been moves to accelerate the transition towards electric vehicles, with France announcing plans to ban the sale of petrol and diesel cars by 2040. With change in the air, continued research is essential to ensure that our transport infrastructure keeps us moving, at an environmental and financial cost that we’re all willing to pay.
Roadmap development One significant achievement of the project is the Summary report of recommendations that is available on the website. Throughout the project, the FOX consortium has worked closely with its sister project USE-iT, which focused on User information, Safety and Security and Energy & Carbon. The report incorporates USE-iT results, as well as those of the associated project REFINET. All the partners selected specific research topics, identifying actions and working out timeframes to produce a common roadmap linked to the cross-modal challenges. A significant number of technologies have been identified, which were validated and improved with stakeholder interviews and a workshop. These technologies were then prioritised, along with further interviews and a second stakeholder workshop. Each challenge needed to cover at least two
transport modes, resulting in 42 challenges. For each research challenge, a headline 'from-to' statement detailing the current state and the desired future state - should the research topics identified within each research challenge be successfully undertaken - was put forward, with an indicative timescale of 2017 to 2030 (see the example below). Each challenge was mapped as to whether it was focused on one or more domains, covering technology, infrastructure, governance or customers, in accordance with FEHRL's Forever Open Road, Rail, Runway and River (FORx4) programme, which will be the mechanism by which the outputs of the projects are taken forward. In addition to the modes and domains identified in the FORx4 programme, the challenges have also been mapped against a 'level of application' identified in the REFINET project, determining the area in which a particular research challenge is most applicable, namely urban mobility, long-distance corridors, multi-modal hubs and a system level. For simplicity of reading, icons have been developed and used to illustrate the different modes, domains and levels of application, as shown in the diagram below. Each individual research topic was thus mapped against the FORx4 modes (road, rail, air, water and multi-modal) and domains (governance, infrastructure, technology and customer), as well as the REFINET 'level of application'.
The research challenges were assigned to nine 'drivers influencing co-modal transport research', based largely on those developed for the FORx4 co-modal transport initiative 'point of view' document, as detailed below. It was recognised that a number of research challenges could be relevant to more than one driver; rather than repeating the information, they were assigned to the most appropriate driver and cross-referenced to others they could affect.
Drivers influencing co-modal transport research:
1. Change in transport demand
2. Globalisation
3. High costs of operation and use
4. Ageing infrastructure
5. Scarcity of natural resources
6. Decarbonisation of transport and environmental and social impact
7. Safety
8. Security
9. Rapid development of technology and social behaviour
The research challenges identified will serve as an investment plan for research funders and as an investment or strategy document for public and private infrastructure owners, operators and contractors. An exploitation and implementation plan has also been prepared, which outlines the business priorities and risk appetite of the stakeholders and suggests how they might use the results of the project.
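The mapping described above - each challenge tagged with FORx4 modes and domains, a REFINET level of application and a single primary driver - can be pictured as a simple data structure. The sketch below is illustrative only: the field names and the example challenge are our own assumptions for the purpose of the sketch, not taken from the FOX roadmap itself.

```python
from dataclasses import dataclass, field

# Hypothetical record for one cross-modal research challenge, following
# the mappings described in the article: FORx4 modes and domains, the
# REFINET 'level of application', and one of the nine drivers.
MODES = {"road", "rail", "air", "water", "multi-modal"}
DOMAINS = {"governance", "infrastructure", "technology", "customer"}

@dataclass
class ResearchChallenge:
    title: str
    from_state: str                              # current state ("from")
    to_state: str                                # desired state ("to"), 2017-2030
    modes: set = field(default_factory=set)
    domains: set = field(default_factory=set)
    level: str = "system level"                  # REFINET level of application
    driver: int = 9                              # primary driver (1-9)

    def is_cross_modal(self) -> bool:
        # Each challenge had to cover at least two transport modes.
        return len(self.modes & MODES) >= 2

def by_driver(challenges, driver):
    """Group challenges under their single most appropriate driver."""
    return [c for c in challenges if c.driver == driver]

# A made-up example in the spirit of the roadmap's energy topic:
example = ResearchChallenge(
    title="Energy harvesting from infrastructure",
    from_state="Pilot photovoltaic and regenerative-braking trials",
    to_state="Renewable generation integrated into new infrastructure",
    modes={"road", "rail"},
    domains={"infrastructure", "technology"},
    level="long distance corridors",
    driver=5,
)
```

A roadmap of 42 such records could then be filtered by driver, mode or level when presenting it to a particular funder or operator.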
FOX - Forever Open infrastructure across (X) all transport modes
Project Objectives
The 24-month USE-iT project's vision is to better understand the common challenges across transport modes in order to develop common research objectives and, in the longer term, to create a vibrant community of stakeholders and a European transport network that is safer, more secure, lower in carbon emissions and focused on user needs. The 30-month FOX CSA project aims to develop a highly efficient and effective cross-modal R&D environment and culture which meets the demanding requirements of transport and connectivity. FOX will identify common needs and innovative techniques in the areas of construction, maintenance, inspection, and recycling & reuse of transport infrastructure.
Project Funding: 930,000 euros
FEHRL • BASt • ZAG • IFSTTAR • TNO • EURNEX • STAC • NETIVEI • IBDiM • DNDI • CDV • VGTU • AIT • LNEC
Full details can be seen at: http://www.useitandfoxprojects.eu/contact/fox-consortium
Project Coordinator, Dr Thierry Goger FEHRL – Europe’s National Road Research Centres Boulevard de la Woluwe 42/B3 1200 Brussels | Belgium T: +32 2 775 82 45 E: email@example.com W: www.useitandfoxprojects.eu/welcome-to-fox Ongoing projects related to multimodality are already being developed within FORx4 and its sister programme Forever Open Road, FEHRL's flagship initiative.
Dr Thierry Goger
Dr Goger has been the Secretary-General of FEHRL since January 2014. He works closely with the European Commission and is a member of the European Road Transport Research Advisory Council (ERTRAC) and European Construction Technology Platform (ECTP). Dr Goger worked in the Transport and Urban Development section of EU COST.
Laser focus ignites fuel efficiency
Laser-based ignition systems could help to improve fuel efficiency in combustion engines and reduce our overall carbon footprint. Continued collaboration, knowledge-sharing and effective training are central to the ongoing development of laser spark plug technology, says Dr Nicolaie Pavel of the LASIG-TWIN project
A Dacia car run with laser spark plugs at INFLPR is shown.
A reliable, robust and effective laser ignition system could help mitigate the environmental impact of combustion engines, improving fuel efficiency and leading to greener cars in the automotive sector. This topic lies at the heart of the LASIG-TWIN project, an EC-backed initiative bringing together researchers from four different European countries. "One of the subjects we are working on is laser ignition. The project is a collaboration with a number of institutes from around Europe," outlines Dr Nicolaie Pavel, a Senior Researcher at INFLPR in Magurele, Bucharest, and the coordinator of the project. With a laser ignition system, more of the fuel is burned, improving combustion efficiency and so the performance of a vehicle. "With laser ignition, there is an increase in power, which is a key point in terms of reducing fuel consumption," continues Dr Pavel. There are some significant technical challenges to deal with before these engines can be more widely applied however, which Dr Pavel and his colleagues aim to address in the course of LASIG-TWIN. The project itself is a twinning initiative, designed to enhance research capacity in Romania by building teams of excellence to develop new ideas, as well as by strengthening links with other institutes and sharing knowledge with international partners. "Some of my colleagues from our institute travel periodically to collaborate with our partners, and they discuss specific topics related to laser ignition," says Dr Pavel. "For example, our partners at Bayreuth University have a lot of experience in characterising combustion, while we here in Romania have deep knowledge of laser spark plugs. So we have some complementary areas of expertise."
Insets present laser spark plug prototypes in comparison to standard spark plugs.
Knowledge base
This gives researchers the opportunity to gain knowledge and skills in different areas, which they in turn can pass on to students, boosting the research base and the capacity for technical innovation. While Dr Pavel and his colleagues hold extensive expertise in certain aspects of laser ignition, he says they still benefit from exposure to new concepts and ideas. "We don't know how to characterise certain things around the engine, for example how plasma behaves. So one aspect of the project is around gaining new insights into this kind of area," he outlines. There is also a strong focus on training within the project, with Dr Pavel and his colleagues aiming to widen the knowledge base, strengthening the foundations on which future research can be built. "Alongside spending time with our partners, we are also holding workshops," he says. The first workshop (September, Magurele, Romania) looked at the history, status and future of laser-ignited combustion engines, while further events are planned over the course of the project addressing a range of topics, including the measurement and characterisation of combustion, and solutions for the integration and packaging of a laser spark plug.
Laser ignition of a methane/air-mixture in an optically accessible combustion chamber (UBT-LTTT).
There is also scope for further collaboration beyond the project; the international Laser Ignition Conference (LIC) was recently held in Bucharest, in June, which Dr Pavel says proved to be a very successful event. "The LIC was held this year in Bucharest, which was the first time it had been held in Europe - it was a highly successful conference," he says. Summer schools are also part of the project, laying the foundations for continued research. "A summer school was held in July in Brasov, Romania, mainly with students from Romania, but also with some participation from other parts of Europe," outlines Dr Pavel. Further events will be held in future, while staff exchanges and expert visits are also being organised, reinforcing the project's commitment to knowledge sharing. "We invite specialists from abroad to discuss specific subjects in front of not only our partners, but also students from the institute or from various faculties," says Dr Pavel. This will heighten awareness among students of the wider possibilities in research, and also potentially encourage the development of new projects. "It's good to bring together young people with an interest in this subject, who are interested in working in this area in future," continues Dr Pavel. "With some support from the European Commission, you can then start to think about other collaborations and other research projects."
LASIG-TWIN (691688) - Laser Ignition: A Twinning Collaboration for Frontier Research in Eco-Friendly Fuel-Saving Combustion
Project Objectives
Commercial applications
This also holds importance in terms of potential commercial applications arising from the project's research. While laser spark plug technology could help improve car efficiency, Dr Pavel says the project's work is also relevant to other areas, including powertrains, the space industry, marine propulsion and large IC engines for co-generation, or even hybrid car engines. "We have already had some discussions with private companies about laser ignition, following on from the LIC conference in Bucharest. There is interest in laser ignition for certain aerospace and satellite applications," he says. There is also interest from the automotive industry in this technology; Dr Pavel believes it's important to involve industry in research if the project's work is to have a wider impact. "We would really need the help and involvement of automobile companies," he stresses. The cost of a laser spark plug is still a major
Robust packaged green laser for the Raman experiment of the EXOMARS mission (IOF).
consideration in this respect. While a classical spark plug is relatively inexpensive, at this stage of development a laser spark plug is still significantly more costly, a context in which the commercial sector can play an important role. "If laser spark plugs reach mass production they will become much cheaper, but this requires more input from the commercial sector, including both small companies and bigger companies," explains Dr Pavel. However, while the project's work holds wider relevance to industry, Dr Pavel says commercialisation is not on the immediate agenda, so his focus is more on continued research, deepening the knowledge base and strengthening links with other institutes with complementary areas of expertise. "We are discussing laser ignition technology with our partners, and we also have some further workshops and a summer school to organise," he outlines. A multi-point ignition offline test is shown. Inset presents the variable 3D spark location at the optical plug exit (UL).
From left to right: UBT: Mark Bärwinkel and Sebastian Lorenz; IOF: Erik Beckert; CNRS: Laurent Zimmer and Gabi-Daniel Stancu; INFLPR: Nicolaie Pavel; UL: Geoffrey Dearden.
The collaboration between National Institute for Laser, Plasma and Radiation Physics (INFLPR), Magurele, Romania and the four institutions from Germany, the UK and France is expected to provide an opportunity for research excellence, technological innovation and industrial exploitation in the fields of laser spark plug fundamentals and applications.
EU contribution: 1,066,112.50 euros.
Professor Dieter BRÜGGEMANN, University of Bayreuth (UBT), Department of Engineering Thermodynamics and Transport Processes (LTTT), Germany • Professor Geoffrey DEARDEN, University of Liverpool (UL), School of Engineering, United Kingdom • Dr Laurent ZIMMER, Centre National de la Recherche Scientifique (CNRS), CentraleSupélec, Université Paris-Saclay, France • Dr Erik BECKERT, Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), Germany
Dr Nicolaie Pavel Project Coordinator INFLPR Laboratory of Solid-State Quantum Electronics Atomistilor 409, Magurele 077125, Ilfov Romania T: +40 (21) 457-4550 ext. 2133 E: firstname.lastname@example.org W: https://www.lasig-twin.eu/ Acknowledgment: The LTTT thanks the Robert Bosch GmbH for the laser ignition systems granted as a loan.
Dr Nicolaie Pavel
Dr Nicolaie Pavel is a Senior Researcher at INFLPR; he conducts research on diode-pumped solid-state lasers and on some laser applications (including laser ignition). He is a recipient of two postdoctoral scholarships: JSPS (1999-2001, Okazaki, Japan) and Alexander von Humboldt Foundation (2005-2006, Hamburg, Germany); in 2013 he obtained the degree “Dr. habil.” in Physics.
A step towards the future of avionics solutions
Avionics systems are an integral part of modern aircraft and perform an ever-widening range of functions in today's planes. Thierry MARET tells us about how the ASHLEY project helped to lay the foundations for the development of the next generation of avionics systems, research which will help to boost the wider European aerospace industry
Many of today's avionics systems have their
roots as analogue control systems, yet they have since evolved through digital computing power to a point where they play an increasing number of important roles in modern commercial and military aircraft, including in communications, navigation and flight control. The early avionics systems were implemented as a federated architecture, yet as the number of avionics functions increased, that architecture grew in size and complexity, which had an overwhelming impact on cost and the efficiency of maintenance. These are issues that the ASHLEY project, a research consortium bringing together 36 organisations from 13 countries across Europe, was formed to address, helping European industry adapt to the changing commercial marketplace. The ASHLEY project built on the work of other initiatives in laying the foundations for a more efficient avionics platform solution, carrying out research across a number of different areas, including photonics, database services and remote resources solutions, with the wider goal of boosting the European aerospace industry.
Integrated Modular Avionics
The IMA concept (Integrated Modular Avionics), a term describing the architecture underpinning a distributed real-time computer network aboard an aircraft, was integral to this agenda. Developed in the early '90s as a means of simplifying avionics software development, the IMA concept first evolved into IMA1G; now IMA2G is emerging, providing the foundation for the next generation of avionics solutions. A set of IMA2G concepts had already been validated - researchers in the project worked to build on this further, consolidating and extending results from previous independent projects. A lot of attention was devoted in the project to extending the Distributed Modular Electronics (DME) concept in particular, going beyond the current state-of-the-art and opening up opportunities for technical innovation across a number of different areas. This included extending DME concepts and solutions to other aircraft domains and securing data distribution services, areas which could have a significant impact on the ongoing development of the aerospace industry as it seeks to adapt to emerging challenges. The innovations developed in the course of ASHLEY were validated at the ASHLEY Large Scale aircraft representative demonstrator, which provided a means to assess the processes, methods and tools that have resulted from the project's work. Much has already been achieved in the project, including the introduction of an avionics power line communications solution, and passive optical sensing and power-by-light technologies, which could help reduce installation and maintenance costs. New resources have been developed and validated to extend the DME with hosting capabilities for new services such as databases and communication managers, while establishing strong data security measures. In the meantime, an integrated tool framework for integration and configuration management has been defined and demonstrated. The ASHLEY results will translate into more flexible avionics architectures, lower weight and installation constraints for the overall system platform, and shorter development lead-time, turning IMA2G into a key enabler for more competitive aircraft systems in future generations. This research will help put European aviation in a stronger position, illustrating the depth of research and technical expertise on the continent and helping Europe maintain its place at the forefront of aviation innovation.
ASHLEY - Avionics Systems Hosted on a distributed modular electronics Large scale dEmonstrator for multiple tYpe of aircraft
The ASHLEY project is an EU 7th Framework Programme project under grant agreement no ACP2-GA-2013-605442.
Thierry MARET, Project Coordinator
THALES Avionics
105, avenue du Général Eisenhower BP 63647
31036 Toulouse (France)
T: +351 214 228 100
E: Thierry.Maret@fr.thalesgroup.com
W: www.ashleyproject.eu
Thierry Maret is a certified Senior Project Manager (IPMA Level B) at THALES Avionics, a world-leading provider of onboard systems for the civil aerospace market. He graduated from SUPELEC in 1991 and has an Engineering degree, majoring in electronics. He is currently the coordinator of ACROSS and ASHLEY, two research and development projects co-funded by the European Commission under the Seventh Framework Programme. © AIRBUS S.A.S. 2009 - All rights reserved - Concept by EIAI
Climate control innovation extends range of electric vehicles
Substantial amounts of energy are required to run auxiliary systems within electric vehicles, which is a major drain on the battery and so limits the range of the car. We spoke to Dr Nino Gaeta about the Xeric project's work in developing a new climate control system, which could help both maintain passenger comfort and increase the range of electric vehicles
The range of most electric vehicles today is typically quite low, at somewhere between 200-300 kilometres, which discourages many drivers from switching from conventional vehicles. Researchers in the Xeric project are developing a new system which could help to improve energy efficiency and so increase the range of electric vehicles, as project coordinator Dr Nino Gaeta explains. "We're developing a new climate control system, based on new technology, which will limit the need to use the battery to control the microclimate inside a vehicle. By using less energy from the battery to control the climate, we can increase the range of a vehicle," he outlines. The system absorbs humidity from the air, using desiccants and a membrane contactor. "The humidity in the air is captured at the membrane interface, so dry air is circulated in the car. This reduces the amount of energy required from the battery to dehumidify the air," explains Dr Gaeta. This is often quite a major burden on a car's battery, an issue which researchers in the project are working to address. The project's work is quite multi-disciplinary in scope, bringing together researchers from several different areas. "The project combines expertise on membranes, desiccants, climate control, and traditional operations systems. In the contactor, we combine a membrane, desiccant, and traditional cycles, to de-humidify and cool air," says Dr Gaeta. Whereas a traditional climate control system condenses water, in the Xeric system water is effectively taken away using the desiccant.
“The membrane is between the desiccant and the air, and acts as a contacting surface. It
doesn’t allow liquid water to pass through it, but it allows water vapour to pass through,” continues Dr Gaeta. “Air enters the contactor, flows onto the membrane surface and is then de-humidified, at the level required for the specific application.”
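The energy at stake here can be illustrated with some back-of-the-envelope psychrometrics. The figures below (airflow, humidity ratios, latent heat) are generic assumptions for illustration, not XERIC measurements; they simply show the latent load a conventional system must spend condensing water vapour out of cabin air, which a membrane/desiccant contactor removes without cooling the air below its dew point.

```python
# Illustrative latent-load arithmetic (assumed figures, not project data).

H_VAP = 2450.0   # kJ/kg, latent heat of vaporisation of water (~20 degC)
RHO_AIR = 1.2    # kg/m^3, approximate density of air

def latent_load_kw(airflow_m3_per_h, w_in, w_out):
    """Latent cooling power needed to dry an airflow from humidity
    ratio w_in to w_out (kg of water per kg of dry air)."""
    mass_flow = airflow_m3_per_h * RHO_AIR / 3600.0   # kg air per second
    water_removed = mass_flow * (w_in - w_out)        # kg water per second
    return water_removed * H_VAP                      # kJ/s = kW

# An assumed cabin ventilation of 300 m^3/h, dried from 12 g/kg to 8 g/kg,
# corresponds to roughly 1 kW of latent load - a substantial battery drain.
load = latent_load_kw(300, 0.012, 0.008)
```

Even under these rough assumptions, the latent share of the climate-control load is of the order of a kilowatt, which is why removing moisture without condensation is worth pursuing.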
The main application that has been identified is electric cars, yet Dr Gaeta says the system could potentially be applied in other situations where humidity needs to be removed from the air. This could mean hospitals or refrigeration rooms for example, yet attention in the project is currently focused mainly on cars. "This is the most demanding application, because space is limited in cars," points out Dr Gaeta. The next generation of electric cars may have more space, yet Dr Gaeta and his colleagues are still working to miniaturise the Xeric system further, while also looking to test and validate it. "At the moment we are bench-testing the system. It is being tested by Frigomar, one of our partners, who are active in climate control in big boat yards. This is another application which could be explored in future," he outlines. "Space and weight are very important in ships - every square metre and extra kilogramme costs money, so they aim to optimise the room and the weight for any equipment, including climatic control systems."
XERIC
Above: Testing 3F-CMC’s prototype at TICASS.
Exploitation workshop group
Innovative Climate-Control System to Extend Range of Electric Vehicles and Improve Comfort
XERIC is a European Research & Innovation project, which has brought together 8 partners since June 2015 (duration: 36 months). The project has received funding from the European Union’s Horizon 2020 programme under grant agreement n°653605.
Project Coordinator, Soccorso Nino Gaeta
GVS S.P.A. Italy
Via Roma 50
40069 ZOLA PREDOSA (BO) Italy
T: +39 051 6 176 321
E: email@example.com
W: http://xeric.eu
Tw: @XERICproject
Dr Nino Gaeta is responsible for International Cooperation and is a Consulting Scientist at GVS in Bologna, Italy. He holds a doctorate in Chemical Engineering and has held positions at companies in both Italy and America.
The road to improved traffic safety
Many factors may be involved in a road traffic accident, including lighting, street layout or sight obstructions. We spoke to Aliaksei Laureshyn about the InDeV project's work in looking deeper into the underlying causes behind road accidents, research which could inform the development of effective counter-measures to improve safety
A pedestrian or cyclist involved in a road traffic accident is often much more badly affected than the driver of the vehicle, who is relatively well protected by the surrounding metal framework. Deeper analysis of accidents could lead to new insights into the underlying causes behind them and so help to improve the safety of vulnerable road users (VRUs), a prime motivation behind the work of the InDeV project. "The idea was to look deeper into the causative factors behind those accidents," explains Aliaksei Laureshyn, the manager of the project. This includes not only analysis of accidents and how they develop, but also research into near misses and other, closely related areas. "We are also looking into traffic conflict studies and behavioural studies. Before an actual accident happens, there might have been several near-misses. Maybe the driver avoided a collision by half-a-second, but otherwise the circumstances around how the incident developed are often the same as with an actual accident," says Laureshyn. This area of research has a long history, dating back around half a century, when observers would be sent out to watch the roads and gather more data about the root
causes of traffic accidents. Technology has since moved on significantly of course, and now Laureshyn and his colleagues in the project are applying modern techniques to help develop more rigorous, detailed methods of analysing the circumstances that lead to traffic accidents. "We are using data from video cameras, and also we are trying to develop computer vision tools that can help in detecting certain road traffic situations," he outlines. With
modern technology, researchers are able to gather large volumes of information on traffic over extended periods. "We collect accident data, conflict data, and more," continues Laureshyn. "The project is about methodological research. We aim to build a deeper understanding of how accidents and conflicts should be analysed before we can recommend a methodology for wider use by local municipalities."
Vulnerable road users
This work is built on a recognition of the shortcomings of previous traffic conflict analysis methodologies, which were designed primarily with cars in mind and didn't always take vulnerable road users fully into account. "For example, if you imagine two cars travelling at 40 kph avoiding each other, you would say this is a serious conflict. But if instead of a second car there is a cyclist, then intuitively you would say that it's much more dangerous, as the cyclist is less protected and so more vulnerable. We're trying to include that aspect of vulnerability, so that estimates of the severity of a conflict are sensitive to who is actually involved," outlines Laureshyn. Several sources of data are combined in the project, including accident databases, behavioural data and surrogate safety indicators, with researchers looking to develop a toolbox for traffic conflict analysis.
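One common surrogate safety indicator is time-to-collision (TTC): the seconds remaining before impact if neither party brakes or swerves. The sketch below illustrates the vulnerability idea Laureshyn describes with a simple weighted score; the weights and the scoring formula are our own illustrative assumptions, not InDeV's actual metric.

```python
# Illustrative surrogate-safety sketch (not InDeV's actual indicator):
# time-to-collision for two road users closing head-on, with a severity
# weight reflecting how vulnerable the more exposed party is.

VULNERABILITY = {"car": 1.0, "cyclist": 2.0, "pedestrian": 3.0}  # assumed weights

def time_to_collision(gap_m, speed1_ms, speed2_ms):
    """Seconds until collision if both users keep their current speeds."""
    closing_speed = speed1_ms + speed2_ms
    if closing_speed <= 0:
        return float("inf")   # not on a collision course
    return gap_m / closing_speed

def conflict_severity(gap_m, user1, v1, user2, v2):
    """Higher score = more serious conflict; scaled up when a
    vulnerable road user is involved."""
    ttc = time_to_collision(gap_m, v1, v2)
    if ttc == float("inf"):
        return 0.0
    weight = max(VULNERABILITY[user1], VULNERABILITY[user2])
    return weight / ttc

# Two cars at 40 kph (~11.1 m/s) 22.2 m apart: TTC of about one second.
car_car = conflict_severity(22.2, "car", 11.1, "car", 11.1)
# Same geometry, but one party is a cyclist: scored as more severe.
car_bike = conflict_severity(22.2, "car", 11.1, "cyclist", 11.1)
```

The point of the sketch is only that the same near-miss geometry yields a higher severity score when a cyclist or pedestrian is involved, which is the sensitivity Laureshyn describes.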
a) Cameras are usually installed on existing infrastructure like lamp posts; b) special software detects the presence of the relevant road users and selects those arriving close in time; c) an expert investigates the near-misses using another tool and finally judges how close it was to a real collision.
“We’re developing the technical theory behind the toolbox,” says Laureshyn. There are several strands to this research, including the development of computer vision tools to monitor roads and extract relevant information. While a monitoring tool could be applied to gather data on a road over a relatively long period, researchers need to take the changing circumstances over that time into account. “Traffic levels change, levels of sunlight change and shadows move. Developing a tool that will provide stable levels of performance in those different conditions is a very challenging task,” acknowledges Laureshyn. This is a major challenge in the computer vision field, and while Laureshyn does not expect it to be completely resolved in the course of the project, he says some important advances have been achieved. “For example, we are developing simple tools that we can use to remove video at points where we know nothing is happening,” he says. “So if you put in 24 hours of video, it will reduce it to a more manageable amount.” A road traffic expert can then analyse the video excerpt, identify whether it can be classed as a near miss or a conflict, and gain insights into the underlying causes behind the specific incident. While there are of course aggressive drivers on the road, and some pedestrians and cyclists don’t obey the rules, ultimately nobody wants to put themselves at extreme risk. “When we see situations like that, it means there has been a breakdown – somebody has miscalculated,” says Laureshyn. There could be
a wide range of possible explanations behind an accident; Laureshyn says the project aims to put analysis on a firmer footing, giving professionals a basis on which to investigate road safety. “We are writing a handbook for practitioners, such as engineers and municipal authorities, where we describe different methods to study the safety issues facing road users,” he outlines. “It will be a step-by-step instruction on how to conduct a conflict study, including things like where to locate the cameras, and automated behaviour data collection.”
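The video pre-filtering tool Laureshyn mentions - discarding footage where nothing happens - can be sketched with simple frame differencing. This is an illustrative toy, not the project's software: frames are modelled as flat lists of grey levels, and the activity threshold is an arbitrary assumption.

```python
# Minimal sketch of motion-based pre-filtering: keep only the frames
# where something changes, so an expert reviews hours of footage as
# minutes. A real tool would read video with a library such as OpenCV.

def frame_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def keep_active_frames(frames, threshold=5.0):
    """Return indices of frames that differ noticeably from the previous
    frame, i.e. moments where something is happening on the road."""
    keep = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            keep.append(i)
    return keep

# Three identical 'empty road' frames, then one where a road user appears:
still = [10] * 16
moving = [10] * 12 + [200] * 4
active = keep_active_frames([still, still, still, moving])
```

Fed 24 hours of such frames, a filter of this kind returns only the indices worth a human expert's attention, which is the reduction described above.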
Preventative measures The wider aim in the project is to more closely link the factors that lead to accidents to the risks facing vulnerable road users. This will provide a more solid evidence base for the development and eventual implementation of preventative measures. “With our methodology, we aim to better understand how a road traffic incident develops. Is it because of light conditions or sight obstruction? Or are people driving too fast in that particular area? That kind of information is highly relevant when thinking about the implementation of countermeasures,” points out Laureshyn. The focus in the project has been on acquiring and analysing the relevant information, yet Laureshyn and his colleagues are well aware of the wider importance of their research, and are keen to have a practical impact on road safety. “We want the handbook, in a physical format, lying on the desktop of each person working in road safety,” he says.
InDeV - In-depth Understanding of Accident Causation for Vulnerable Road Users
Project Objectives
The InDeV project aims to examine how combining different methods can help researchers to build a deeper understanding of the underlying causes behind accidents involving cyclists and pedestrians. There is a particular focus in the project on methods that do not require accident data to assess the safety of a particular location. This could be behavioural or traffic conflict observations, the latter referring to a situation where an accident was avoided by only a small margin.
RIA - Research and Innovation Action
Budget (EU contribution): 4,900,000 euros
Horizon 2020, Grant agreement No. 635895
Lund University, Sweden • Ålborg University, Denmark • The Federal Highway Research Institute, Germany • Hasselt University, Belgium • Netherlands Organisation for Applied Scientific Research, Netherlands • Warsaw University of Technology, Poland • Ingeniería de Tráfico SL, Spain • Institute of Transport Economics, Norway • Polytechnique Montréal, Canada
Project Coordinator, Aliaksei Laureshyn Senior lecturer Lund University Sweden T: +46 462 229131 E: firstname.lastname@example.org W: www.indev-project.eu
Aliaksei Laureshyn is a senior researcher at Lund University in Sweden and the Institute of Transport Economics (TØI) in Norway. His main research interest is the use of traffic conflicts and other surrogate measures to study safety in traffic, particularly for unprotected road users (pedestrians, cyclists). He has been working closely with researchers in computer vision to develop tools for automated detection and classification of traffic conflicts.
Credit: R. Williams (STScI), the Hubble Deep Field Team and NASA/ESA
Shining a light on the beginning of the Universe
Cosmic microwave background data reveals insights into the period immediately after the Big Bang, while galaxy surveys can also help deepen our understanding of the primordial Universe. We spoke to Professor Hiranya Peiris about the CosmicDawn project's work in using different sources of data to investigate the origins of cosmic structure
The Planck satellite was launched by the European Space Agency in 2009 with the goal of imaging the cosmic microwave background (CMB) radiation left over following the Big Bang. Large volumes of data on the early Universe have since been collected by this mission. More data will be gathered in future by large galaxy surveys, while recent years have also seen significant theoretical advances in cosmology. Now researchers in the CosmicDawn project aim to build further on these recent developments. "The project is about trying to work out the physics of the very early Universe, and to test that against the available data," says Professor Hiranya Peiris, the project's Principal Investigator. The project combines theoretical and observational research, with Professor Peiris and her colleagues aiming to build a deeper understanding of primordial fluctuations in the early Universe. "Our understanding of the origins of fluctuations in the early Universe contains a whole bunch of assumptions, for example around the isotropy of the very early Universe," she says. "We want to test those fundamental assumptions, on top of testing specific models as well." A key element of this work centres around testing the theory of inflation, which seeks to describe the rapid expansion of space in the early Universe, at between 10⁻³⁶ and 10⁻³³ seconds after the Big Bang. The theory of inflation was developed around the late '70s, and it was proposed in order to shed light on two main areas. "The hot Big-Bang model - which is now very strongly confirmed - doesn't explain where the initial fluctuations
in the density of the Universe came from, which led to all of the structure that’s in the Universe today. Inflation is a mechanism by which you can create structure in the very early Universe,” explains Professor Peiris. Additionally, the theory of inflation can also help to explain some of the classic puzzles around the Big Bang. “Why is the Universe so big, mostly empty but with tiny fluctuations to start with? Why is it so spatially flat? Why do there seem to be fluctuations that are not causally connected if you just think about the standard hot Big Bang theory?” asks Professor Peiris. “The answers lie in the very early Universe, before the emergence of particles associated with the standard model of particle physics.”
Cosmic microwave background

Researchers in the project are using CMB data gathered by the Planck satellite, as well as data from large galaxy surveys, to investigate the origins of cosmic structure. Planck’s CMB data comprises images of the sky at microwave frequencies, gathered from a position well beyond the orbit of the Moon, from which Professor Peiris and her colleagues can draw important insights. “We can see the tiny fluctuations in the microwave background. Once you take that data, essentially you get the temperature at different locations in the sky, and from that you can make temperature maps,” she explains. By studying the properties of these maps, researchers can then test theories of the early Universe. “The data is a timestream with trillions of data points, and it gets converted into maps. These are maps of the sky at different frequencies, measured by the Planck satellite,” outlines Professor Peiris. “The reason we need different frequencies is that our Milky Way galaxy also emits at microwave frequencies. We’ve got to separate the foreground contribution from our galaxy from the background contribution from the early Universe.”

The data from galaxy surveys takes a different form. For example, the Dark Energy Survey (DES) uses an enormous camera to image the sky, giving researchers data on the large-scale structure (LSS) of the Universe, from which more can be learned about the distant Universe. “Light has a finite speed. So when we look at distant things in the sky, we’re seeing the Universe as it was when it was younger,” explains Professor Peiris. This allows researchers to effectively map out the evolutionary structure of the Universe traced by galaxies; this LSS data is complementary to the CMB data. “They sample different periods in history, in the timeline of the Universe,” says Professor Peiris. “By testing a cosmological model with the CMB data, you can make a prediction for what you should see in the late-time Universe, in a galaxy survey. Similarly, a galaxy survey allows us to make independent measurements of the same parameters which we tested in the early Universe as well. The overall picture has to fit together.” The consistency of our physical understanding of the Universe can be assessed by testing cosmological models against the data. While researchers have reached a point where the simplest models
of the early Universe seem to be compatible with the available data, Professor Peiris says there are also other issues to consider. “These very simple models are not necessarily natural from a fundamental physics point of view,” she says. This suggests that researchers are either being misled by the data, or that a core aspect of fundamental physics has somehow been overlooked. “We’ve been using the data to work out how compatible existing models are with the observations,” continues Professor Peiris. “The next step is to work out the fundamental physics behind these models. This involves theories of quantum gravity, for example, which requires the engagement of theoretical physicists. One of the ways in which we are trying to investigate these theories is by constructing analogues of models of the early Universe in an ultra-cold atom condensed matter experiment, with Bose-Einstein condensates.” This means that the predictions of these theories can be investigated in the laboratory, which will be an important element in Professor Peiris’ future research agenda, while the increasing power of galaxy surveys will also open up new avenues of exploration. One survey of particular interest to Professor Peiris is the Large Synoptic Survey Telescope, a huge project which is set to start in 2019. “The power of these surveys to test theories is going to be significantly improved. That is another avenue of data that I’m exploring,” she outlines.

“The hot Big-Bang model doesn’t explain where the initial fluctuations in the density of the Universe came from, which led to all of the structure that’s in the Universe today. Inflation is a mechanism by which you can create structure in the very early Universe.”

Another major topic of interest for Professor Peiris is gravitational-wave astronomy, a branch of observational astronomy which has developed rapidly over recent years. “The detection of gravitational waves even from a couple of objects now allows researchers to test fundamental physics, in particular the nature of gravity, and improve existing constraints by up to ten orders of magnitude,” she continues. “Gaining more insights requires an improved understanding of fundamental physics, and also improvements in both the quality and quantity of data that we are getting from all these different sources.”
Accurate cosmology

There have been significant strides forward in these research areas over the last decade or so, as cosmology has become an increasingly precise science. Uncertainties in measurements have been reduced, opening up new insights into the evolution of the Universe, and now researchers are looking further forward. “We now need to build the age of accurate cosmology. At some point, when you keep decreasing the error bars, you come up against systematic errors, which are not reducible by getting more statistics,” says Professor Peiris. The wider goal in this research is to understand the origins of cosmic structure; while this is of course a huge question, Professor Peiris says significant progress has been made in the project. “We have worked out very strong constraints, narrowing down the range of mechanisms by which cosmic structure could have been produced in the very early Universe, and have also identified where we need to improve our understanding of both theory and observation in order to understand the origins of cosmic structure.”
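The map-making step Professor Peiris describes – turning an enormous detector timestream into per-pixel sky maps – can be sketched in miniature. The following is a toy illustration of our own, not the Planck pipeline: each timestream sample is tagged with the sky pixel the telescope was pointing at, and the simplest (noise-unweighted) map is just the average of all samples that hit each pixel.

```python
import numpy as np

# Toy CMB map-making: bin a noisy timestream into a per-pixel sky map.
rng = np.random.default_rng(0)

n_pix = 12                                    # tiny "sky" with 12 pixels
true_map = rng.normal(0.0, 100e-6, n_pix)     # ~100 microkelvin fluctuations

n_samples = 100_000
pointing = rng.integers(0, n_pix, n_samples)  # pixel hit by each sample
# Each sample sees the sky temperature at its pixel, plus detector noise.
timestream = true_map[pointing] + rng.normal(0.0, 1e-3, n_samples)

# Bin the timestream: sum of samples and hit count per pixel, then divide.
sums = np.bincount(pointing, weights=timestream, minlength=n_pix)
hits = np.bincount(pointing, minlength=n_pix)
est_map = sums / hits

# With ~8000 hits per pixel, the noise averages down by ~1/sqrt(hits),
# so the recovered map is far more accurate than any single sample.
print(float(np.max(np.abs(est_map - true_map))))
```

The real pipeline must also deal with correlated noise, beam effects, and the foreground separation across frequencies mentioned above, but the core idea – trillions of samples compressed into maps by exploiting repeated hits on each pixel – is the same.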
COSMIC DAWN
Understanding the Origin of Cosmic Structure

Project Objectives
The ERC Starting Grant CosmicDawn project aims to rigorously test the theory of inflation, the dominant paradigm for the origin of cosmic structure, using the latest CMB and galaxy survey data; and to seek signatures of new physics that are likely to exist at the unexplored energies found in the very early Universe.
This is a standalone ERC PI-led project.
• The UK Science and Technology Facilities Council • The Leverhulme Trust • The Royal Society • The Foundational Questions Institute
Project Coordinator
Professor Hiranya Peiris
Astrophysics Group
Department of Physics and Astronomy
University College London
Gower Street
London WC1E 6BT
United Kingdom
T: +44 20 3549 5831
E: email@example.com
W: http://www.earlyuniverse.org
Professor Hiranya Peiris
Hiranya Peiris is Professor of Astrophysics at University College London and Director of the Oskar Klein Centre for Cosmoparticle Physics in Stockholm. Her research aims to understand the evolution of the Universe and its underlying physics using the primordial fluctuations of the cosmic microwave background radiation and the large-scale structure of the Universe traced by galaxy surveys.
CREDIT: ESA /PLANCK
Exploring plants as support systems for the space missions of tomorrow
Plant behavior, hardware and cultivation conditions are thoroughly tested at Wageningen University as input for the design and manufacture of new cultivation hardware and sensor technology suitable for space missions.
As we prepare for long-term space missions far from our own planet, we need to be capable of regenerating resources essential to human life. Øyvind Mejdell Jakobsen and Ann-Iren Kittang Jost tell us about the TIME SCALE project’s work in developing technologies and know-how to support the space exploration missions of the future.

The vast expanse of space has long fascinated scientists, and now plans are emerging to probe deeper into the solar system with a new generation of manned space missions, far away from our own planet. ESA and NASA are aiming for long-term missions to the Moon and Mars, requiring sophisticated systems. “Such missions require life-support systems. The astronauts need water, food, oxygen, and other resources,” points out Øyvind Mejdell Jakobsen. Based at CIRiS, a part of the NTNU Social Research company in the Norwegian city of Trondheim, Jakobsen is the exploitation and dissemination manager of the TIME SCALE project, an EU-backed initiative aimed at developing next-generation technology and knowledge to support future long-term space missions. “To survive in Space, we can bring resources and use physical and chemical methods to produce what we need. However, as the technology and knowledge evolve, we can use plants as regenerative life-support systems that can re-circulate and regenerate scarce resources,” he outlines.
Life-support systems

The focus in this respect is on the development of biology-based regenerative life-support systems, which utilise biological systems to regenerate resources essential to human life. A key part of this work is based on the European Modular Cultivation System (EMCS), an experimental, greenhouse-like facility on the International Space Station (ISS) which allows scientists to study plant biology under different controlled conditions. Experiments with the EMCS over the last ten years have enabled scientists to gain new insights into how plants behave under different gravitational conditions, for example; now researchers aim to enhance the system further, opening up new avenues of investigation. “TIME SCALE demonstrates how we can upgrade the EMCS or similar ISS payloads with improved concepts and technologies,” explains Jakobsen. The wider objective in this work is to help develop a closed regenerative life-support system for longer-duration, manned space missions.

“As the technology and knowledge evolve, we can use plants as regenerative life-support systems that can re-circulate and regenerate scarce resources.”
A good example is the conversion of carbon dioxide (CO2) into oxygen. “A plant-based regenerative life-support system would be able to take the carbon dioxide that humans breathe out and, using plant photosynthesis, convert it into the oxygen that we all require,” outlines Jakobsen. This could support future manned missions, and Jakobsen says it could also be possible to regenerate drinkable water from waste water. “Another example is purification of waste water,” he says. “A plant takes up a lot of water, and then it evaporates through a process called transpiration. That humid air over the plant leaves could be condensed, giving you a source of drinkable water.”

These types of systems may work well here on Earth, yet gravitational conditions in deep space are of course very different, so further investigation is essential before it can be confidently stated that they would perform effectively on a long-term space mission. Water starts behaving differently in zero gravity, as do plants, so Jakobsen says it’s important to probe deeper into this area. “A lot of fundamental biology experiments need to be done under micro-gravity – basically no gravity at all – or reduced gravity, as can be found on the Moon or on Mars. We need to understand how the plants behave under these types of gravity conditions,” he outlines. There are also a number of other factors to consider. “We need to recycle not only the water, but also certain nutrients. If they can be re-used and recycled under micro-gravity conditions, then that would be a major achievement,” says Ann-Iren Kittang Jost, the coordinator of the project.
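The mass balance behind that carbon dioxide-to-oxygen conversion is straightforward to sketch (a back-of-the-envelope illustration of our own, not a figure from the project): the overall photosynthesis reaction, 6 CO2 + 6 H2O → C6H12O6 + 6 O2, releases one mole of oxygen for every mole of carbon dioxide fixed.

```python
# Back-of-the-envelope mass balance for plant-based air revitalisation.
# Overall photosynthesis: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2,
# i.e. one mole of O2 released per mole of CO2 fixed.

M_CO2 = 44.01  # g/mol, molar mass of carbon dioxide
M_O2 = 32.00   # g/mol, molar mass of molecular oxygen

def o2_from_co2(co2_kg: float) -> float:
    """Mass of O2 (kg) released when co2_kg of CO2 is fully fixed."""
    return co2_kg * M_O2 / M_CO2

# A commonly cited figure (our assumption, not from the article) is that
# a person exhales roughly 1 kg of CO2 per day.
print(f"{o2_from_co2(1.0):.2f} kg O2 per kg CO2 fixed")  # 0.73
```

In practice only part of the crew’s oxygen demand would be met this way, since real crop growth, harvest cycles, and system losses all reduce the ideal stoichiometric yield.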
The TIME SCALE project will have a major role to play in supporting this kind of research. The aim is to develop knowledge and specific technologies to improve not only the EMCS, but also other payloads on the ISS, which can then be used in continued investigation. “These are important research platforms to help develop and demonstrate the technologies and biological knowledge that we will need for future systems,” says Kittang Jost. The technologies being developed in the project include nutrient sensor technology by CleanGrow Ltd (Ireland) and a plant health monitoring system developed by the Laboratory of Functional Plant Biology of Ghent University and the company Interscience (Belgium and the Netherlands). Wageningen University, the University of Stuttgart and the companies Prototech AS (Norway) and DTM Technologies (Italy) are the other partners in this multi-national collaboration, developing water and nutrient management systems and improved concepts and hardware for plant cultivation. “These technologies and systems can be added on to different payloads on the
ISS, and they could potentially be applied in future greenhouses on the Moon and Mars,” continues the project coordinator. Towards the end of the TIME SCALE project, life tests with plant cultivation in a so-called breadboard will be performed at CIRiS in Norway to demonstrate operational capability of the new technology.
Terrestrial plant production

The TIME SCALE research also holds important implications for plant production on our own planet, such as in the land-based greenhouse industry. Many plants today are grown using hydroponic systems, in which plants are cultivated in contact with running water but without any soil; Jakobsen says the project’s research holds clear relevance in these terms. “Water-nutrient management, sensor technology, and plant-health monitoring are all directly applicable to land-based food production,” he says. With concern deepening over food scarcity and an awareness that the Earth’s resources are not infinite, there is an increasing focus on improving resource efficiency; the
TIME SCALE develops technologies and systems for monitoring plant cultivation systems in real time. Left: The multi-ion analyser of CleanGrow Ltd offers automated monitoring of nutrients in solution. Middle and right: Gas chromatography and advanced imaging techniques developed by Interscience and Ghent University respectively, provide state-of-the-art possibilities to monitor the plant’s health.
TIME SCALE Technology and Innovation for development of Modular Equipment in SCalable Advanced Life support systems for space Explorations Project Objectives
TIME SCALE develops concepts and next-level technology for plant cultivation and monitoring, for use in space and terrestrial applications. In space, future advanced plant cultivation systems may provide astronauts with space-grown food and capabilities for recycling water, nutrients, air and waste. On Earth, nutrient and water recycling and plant health monitoring systems contribute to more efficient and sustainable production.
Funded under the European Union’s Horizon 2020 research and innovation programme (COMPET-02-2014), grant agreement No 640231.
www.timescale.eu

NTNU Social Research (Norway)
Project coordinator Dr. Ann-Iren Kittang Jost
T: +47 928 80 298
E: firstname.lastname@example.org
Exploitation and dissemination manager Dr. Øyvind Mejdell Jakobsen
E: email@example.com

CleanGrow Ltd. (Ireland)
Dr. Roy O’Mahony
E: firstname.lastname@example.org

DTM Technologies (Italy)
Davide Santachiara
E: email@example.com

Ghent University (Belgium)
Prof. Dominique Van Der Straeten
E: firstname.lastname@example.org

Interscience (Belgium and The Netherlands)
Dr. Joeri Vercammen
E: email@example.com

Prototech AS (Norway)
Dr. Bjarte G.B. Solheim
E: firstname.lastname@example.org

University of Stuttgart (Germany)
Dr. Stefan Belz
E: email@example.com

Wageningen University (The Netherlands)
Prof. Leo Marcelis
E: firstname.lastname@example.org
New concepts and systems for cultivating algae and higher plants under different gravity conditions will help to expand our knowledge of how plants behave in Space. University of Stuttgart has designed a new concept for an algae cultivation chamber for flight (left). A breadboard version of an improved Plant Cultivation Chamber has been designed and manufactured by Prototech AS (middle) and will be installed into a Modular Test Bed manufactured by DTM Technologies (right) for ground testing with living plants by NTNU Social Research.
project’s research could shed new light on this topic. “With long-term space travel on a spaceship you clearly don’t have many resources available – so you need to re-circulate and re-use as much as possible,” points out Jakobsen. “However, if we look at Earth as our own spaceship, we realize it has limited resources such as clean water and phosphorus.” Improved resource utilization and the development of more sustainable plant production methods are urgent priorities. This rests to a large extent on a deeper understanding of plant biology, underlining the wider relevance of the TIME SCALE project’s work. “How can we deepen our understanding of plant biology, to a point where we can make production systems that are as resource-efficient as possible?” asks Jakobsen. While re-circulating hydroponic systems are in use in many countries, Jakobsen believes there is still scope for improvement. “New technology and knowledge offer ways to improve on those systems,” he outlines.

“The water-nutrient management system, sensor technology, and the plant-health monitoring system are all directly applicable to land-based food production.”

Preparing for the Moon Village

The primary focus of the project, however, is on supporting space investigation, and in particular life science research in space, including research ideas which would have seemed remote just a few years ago. The concept of a Moon village has developed over recent years, with the idea of establishing a community on the Moon; while the Moon village has not yet taken a definitive form, Kittang Jost says this is an active area of research. “There’s a lot of work to do in this direction, moving towards technology demonstrators and other research possibilities. There are many opportunities,” she stresses. The TIME SCALE project partners will be in pole position to capitalise on these opportunities, utilising the expertise and knowledge gained during the initiative. “A major motivation behind this project has been to put European companies and universities in a position to be a part of the future development towards a Moon village, or long-term missions towards Mars,” says Jakobsen.
Dr Ann-Iren Kittang Jost Dr Øyvind Mejdell Jakobsen