

LEONARDO TIMES Journal of the Society of Aerospace Engineering Students ‘Leonardo da Vinci’

KITE POWER DSE Fall 2016 Page 11 Year 21 | N° 2 | April 2017

Zac Manchester Interview Page 33

VSV Symposium Challenge of Automation Page 48


Biofuel. You won't notice the difference, but nature will. KLM has proven that aviation can be more sustainable: as a pioneer, we operated the world's first commercial flight with biofuel. However, KLM will only use biofuels with no negative effects on food production and nature. Together with partners, we stimulate the development of biofuel; only when used on a large scale will biofuel make the difference. klmtakescare.com


EDITORIAL

Dear reader,

Over the past century, many technical systems have become smaller, yet more powerful and efficient. If you had asked a scientist in the 1960s whether a computer program was a viable option to optimize an aircraft structure, he would have asked you to find a large enough hall for said computer. Today, your smartphone is more powerful than any computer from those days. The miniaturization of computer parts has thus allowed us to downsize the dimensions of many systems. Satellites, however, have only seen this downsizing in the last ten years. Whereas personal computers, SD cards, and mobile phones were introduced in the 1990s and 2000s, we have only just now gotten around to putting small satellites into orbit. Of course, the smaller the satellite, the lighter the payload, which opens many doors for space enthusiasts, scientists, and really just anyone who has the time, money, and dedication to put his or her little piece of hardware up there. How does one do that? Well, go on Kickstarter and fund your project, just like Zac Manchester did. Before you ask yourself who that is: this magazine features an interview with him, a bright young man shaping the aerospace industry with his innovative ideas. Not long ago, nobody would have had the idea to connect crowdfunding with satellites, but the marvel of our new times, the internet, has made it possible.

Similarly, not long ago, nobody would have thought that the A380, the largest passenger plane ever built, would lose its drive only ten years after its first sale. With Boeing winding down the 747, it seems the jumbo jets are simply too "jumbo" for the market, and airlines are consequently shifting to their smaller wide-body siblings instead. While working on this magazine, Katharina Ertman, an editor here at the Leonardo Times, approached me with a proposal for an article title: "Am I too fat?" What was meant as a joke of a title is in fact a serious analysis of how the jumbo jets are becoming less attractive to commercial airlines. In an ever more complex world, there seems to be no place for these oversized entities.

A trend towards the small can also be seen in energy. At first, everyone tried building larger power plants: nuclear giants that could power entire countries by themselves. Now nuclear power has lost its favor, and fossil energy is facing the same fate. Wind energy seems like a promising alternative, yet the turbines come at a cost. Kitepower, a young company that designs and produces kites for energy harvesting, proposes an alternative. As already revealed on the cover, the Leonardo Times sent an editor to get the scoop on this new form of energy production.

With all this in mind, it is apparent that we are constantly striving for a more technologically advanced society. At the current rate, it is anyone's guess where we will be in fifty years. Living in an advanced society does have its perks, but it seems to be accompanied by a diminution in social skills and an instilled sense of ungratefulness. "Manners Maketh Man" appears to be a dying proverb in the age of Snapchat-conducted interviews. I therefore urge you to take a break every blue moon to appreciate how far we've come and to recalibrate your 21st-century expectations of Mankind. As for the Leonardo Times, rest assured that we will remain mavericks of the media world and will not deviate from our A4 format. Wishing you a pleasant read and an enjoyable season. Victor Gutgesell

Last edition ...

Active Flow Control Keeping the flow attached Page 12

ExoMars What happened? Page 38

Citizen Science You can help science! Page 48

Year 21 | N°1 | January 2017

If you have remarks or opinions on this issue, let us know by dropping an email at: LeoTimes-VSV@student.tudelft.nl

LT www.leonardotimes.com


Like us on Facebook /leonardotimesjournal

LEONARDO TIMES N°2 2017

03


CONTENTS FRONT FEATURES 03 Editorial 08 In the News 10 Leonardo's Desk

Big Planes, Big Problems

18

With new, more efficient planes like the 787 and A350 coming onto the market, is there still a place for their jumbo-sized cousins?

CONTROL & OPERATIONS (C&O) 15 Airbus Hydraulics 18 Big Planes, Big Problems 30 Ecological Automation

DESIGN SYNTHESIS EXERCISE 12 LUMID 12 BEEBLEBROX 13 IFSIX 14 DREAM

AEROSPACE STRUCTURES AND MATERIALS (ASM) 22 Aeroelastic Tailoring

24 Kite Power An interview with Kitepower: a Delft-based start-up and one of the leaders in the field of airborne wind energy.

WIND ENERGY 24 Kite Power

INTERVIEW 33 Zac Manchester

FLIGHT PERFORMANCE AND PROPULSION (FPP) 37 Flying All-Electric

AERODYNAMICS 40 Bubble Velocimetry

STUDENT PROJECT 43 Delft Hyperloop

NICK'S CORNER 46 Ungrateful Humans

HYPERLOOP

AVIATION DEPARTMENT 48 Challenge of Automation

After winning the SpaceX Hyperloop Pod Competition, the Delft Hyperloop Team finds itself at the frontier of innovation. An interview with Victor Sonneveld, a member of Delft Hyperloop, provides more insight into the team’s triumph.

ADVERTISEMENTS 02 KLM 06 ASML 28 Huisman 51 NLR 52 Fokker




COLOPHON

Ecological Automation

30

Future air traffic control needs to rely on more automation to safely handle increased traffic volumes. Ecological Interface Design (EID) can help to make automation more understandable for human operators.

Year 21, Number 2, April 2017. The 'Leonardo Times' is issued by the Society of Aerospace Engineering Students, the VSV 'Leonardo da Vinci', at the Delft University of Technology. The magazine is published four times a year, with a circulation of around 5,000 copies per issue.

EDITOR-IN-CHIEF: Victor Gutgesell FINAL EDITOR: Nicolas Ruitenbeek

KITEPOWER

EDITORIAL STAFF: Martina Stavreva, Nora Sulaika, Katharina Ertman, Marloes Eijkman, Mannat Kaur, Maria Mathews, Eleonoor van Beers and Nithin Kodali Rao. FINAL WEB EDITOR: Rosalie van Casteren THE FOLLOWING PEOPLE CONTRIBUTED: Apeksha Amarnath, Krishna Kant Ratan Parkhe, Clark Borst, Rakesh Yuvaraj and Roger Hak. DESIGN, LAYOUT: SmallDesign, Delft PRINT: Quantes Grafimedia, Rijswijk

Ungrateful Humans


Articles sent for publishing become property of ‘Leonardo Times’. No part of this publication may be reproduced by any means without written permission of the publisher. ‘Leonardo Times’ disclaims all responsibilities to return articles and pictures. Articles endorsed by name are not necessarily endorsed editorially. By sending in an article and/or photograph, the author is assured of being the owner of the copyright. ‘Leonardo Times’ disclaims all responsibility. The ‘Leonardo Times’ is distributed among all students, alumni and employees of the Aerospace Engineering faculty. The views expressed do not necessarily represent the views of the Leonardo Times or the VSV 'Leonardo da Vinci'.

Is technology a boon or a curse of the post-modern society we live in?

VSV 'Leonardo da Vinci' Kluyverweg 1, 2629HS Delft Phone: 015-278 32 22 Email: VSV@tudelft.nl ISSN (PRINT): 2352-7021 ISSN (ONLINE): 2352-703X Visit our website www.leonardotimes.com for more content. Remarks, questions and/or suggestions can be emailed to the Editor-in-Chief at the following address: LeoTimes-VSV@student.tudelft.nl



"A TOP HIGH-TECH COMPANY WITH A SOCIAL TOUCH" ASML

Mechanical engineering student Marcel Goldschmeding at work as campus promoter for ASML

At the time we speak with Marcel, he is focusing on his master's in Systems & Control (TU Delft), a highly technical program known for being 'tough'. That is no problem for him; quite the opposite. Marcel: "I was up for the challenge; that is exactly what motivates me!" He combines his studies with working as a campus promoter for ASML.

Marcel: "Through my board year at the mechanical engineering study association, I came into contact with a former board member who was a campus promoter for ASML. I was interested in the possibilities of a scholarship at ASML, so we spoke about it several times. When he stopped working as a campus promoter, he put me forward as his successor. That is how the ball got rolling."

MAKING THE IMPOSSIBLE POSSIBLE Most engineering students are well acquainted with the Veldhoven-based manufacturer of the lithography machines with which global players such as Samsung and Intel make their chips. After all, ASML is the reason that Moore's Law still holds. Marcel: "When Moore stated fifty years ago that chip capacity would double every two years, everyone declared him crazy. Thanks to ASML, that law still holds. They make the impossible possible. Pushing boundaries, carrying on when everyone says something cannot be done: that is something I love as well."

REACH As a campus promoter, it is important to have a good network and to be able to reach many people, Marcel believes. His board work and other activities come in very handy here. Marcel: "Through my treasurership and committee work at the study association, I know many people from other associations as well, which is very useful in this role. My participation in the International Research Project, in which we carry out projects for companies in Indonesia, also extends my reach. In addition, you can find me at various activities that ASML organizes, such as lunch lectures and the Delft company days. I really get energy from talking to very different types of people."

AN OPEN AND RELAXED ATMOSPHERE What can people expect when they knock on ASML's door? According to Marcel, ASML is a 'high-tech company with a social touch'. Marcel: "It is very open; you feel a bit of that Brabant conviviality, an open and relaxed atmosphere! At the same time, it is of course one of the top companies in the Netherlands. Something else that strikes me is the speed at which everything advances and grows. For mechanical engineers, what happens there is naturally super interesting."

WANT TO KNOW MORE? Curious about Marcel and ASML? Approach him or send an email. Marcel: "If you are enthusiastic about a job, an internship, or perhaps another project at ASML, do get in touch. Because if you find something cool, that spark carries over and a great deal becomes possible!" Marcel can be reached via mgoldschmeding@gmail.com www.workingatasml.com/students


We push technology further
to print microchip features that are finer
to accelerate artificial intelligence
to make robots understand humans
to let robots help in healthcare

Do you dream of changing the world of innovation? Do complex technological challenges appeal to your imagination? We are looking for you. ASML always wants to get in touch with eager and curious students. Join us at workingatasml.com/students


QUARTERLY HIGHLIGHTS Race to reusability

Since the retirement of the Space Shuttle Program in 2011, mankind has lost the ability to reuse spacecraft after launch. However, as the commercialization of spaceflight becomes ever more evident, entrepreneurs like Elon Musk and Richard Branson are looking for innovative solutions to the problems that major space programs now face. It therefore comes as no surprise that two recently founded space companies are trying to make spacecraft reusable. These companies are, as one might have guessed, Jeff Bezos' Blue Origin, founded in 2000, and Elon Musk's SpaceX, founded in 2002. Although they are currently trying to achieve the same feat, namely reusable spacecraft, their approaches and spacecraft are quite different. Whereas Blue Origin's New Shepard is quite small (only 16m long), SpaceX's Falcon-9 rocket has a towering height of 48m. Furthermore, the Falcon-9 is much more slender than the New Shepard, leading to more control and stability issues. Finally, the New Shepard only achieves suborbital flight, whereas the Falcon-9 has already delivered satellites into orbit. This makes landing and reusing part of the spacecraft more difficult for SpaceX than for Blue Origin. Despite this, New Shepard's first successful landing after launch took place only one month before the Falcon-9 achieved the very same goal. Therefore, although Blue Origin was first, one could argue that SpaceX won this race.

However, landing a rocket is one thing; reusing (part of) a rocket is another. The New Shepard that landed after launch in November 2015 was successfully relaunched in January 2016, proving the system is reusable. It would take more than a year, however, for a Falcon-9 first stage to be reused. Finally, last month, on March 30 at 18:27 local time, SpaceX launched a Falcon-9 rocket from Cape Canaveral of which the first stage had already been launched a few months earlier. The rocket delivered a communications satellite into orbit, after which the first stage landed again on a barge in the Atlantic Ocean. Now that both companies have proven that their concepts for rocket reusability work, they have to increase the reliability of those concepts to make full-scale use possible. It has yet to be proven that reusable rockets are here to stay. However, the first step towards more sustainable spaceflight is well underway.

Transatlantic price war For years, the market for transatlantic flights has been the playground of big airlines such as KLM, British Airways, and Lufthansa. Meanwhile, low-cost budget airlines like Ryanair have gained quite a significant share of continental flights; Ryanair is now even considered to be the biggest airline in Europe. However, while the big airlines also offer regional flights, budget airlines have stayed far away from the costly venture of transatlantic flights, mainly because of the limited capabilities of their aircraft.


Over the past few years, the range of smaller aircraft has increased with, amongst others, the introduction of the A320neo and the Boeing 737 MAX. These aircraft are perfect for budget airlines, as they are much cheaper to maintain and therefore make crossing the Atlantic economically viable again. Because of these advancements, low-cost airline Norwegian Airlines announced last February that it would be the first airline to offer transatlantic flights for less than one hundred GBP. It will fly from Edinburgh, Belfast, and Cork to smaller airports in New York, Boston, and New England. By avoiding the bigger U.S. airports, the airline is able to keep landing charges to a minimum. Just like shorter flights with budget airlines, the flights will be very basic: legroom is limited and luggage is not included in the price. Despite this, Norwegian Airlines' announcement will undoubtedly lead to a transatlantic price war.


ExoMars after Schiaparelli failure In the search for extraterrestrial life on Mars, the ExoMars program, a collaboration between the European Space Agency and Roscosmos, is trying to build upon the work carried out by previous Mars landers. It consists of two missions. The first was launched in 2016 and consisted of the Trace Gas Orbiter (TGO) and the Schiaparelli lander, named after the Italian astronomer Giovanni Schiaparelli. Last October, the Schiaparelli lander crashed on the surface of Mars due to a failure during its descent phase. The orbiter, however, is still in orbit. Its main mission is to map the quantities of methane in the Martian atmosphere. This is an important aspect of the mission, as the information will be used in the selection of the landing site for the ExoMars rover, which is part of the second mission and will be launched in July 2020, accompanied by a surface platform that will gather data at the rover's landing location. Furthermore, the orbiter will supply the second ExoMars mission with a telecommunications relay.

In order to execute these missions, the orbiter's orbit has to be changed from its highly elliptical initial orbit, in which its altitude varies between around 250km and 98,000km, to a circular orbit at 400km. In January, a series of crucial maneuvers was executed to increase the inclination of the orbit with respect to the Martian equator, bringing the TGO into the orientation required for its observations. The following two months were spent calibrating the scientific instruments on board the TGO in order to ensure high-quality measurements. Once the calibration tests were completed last month, the orbiter began the descent to its final orbit. To do so, the Martian atmosphere is used to slow down the spacecraft. Because of the low atmospheric density, the velocity change per orbit is very small; over the course of several months, however, a significant cumulative altitude change will be achieved. During the following months, the TGO's orbit will gradually be brought to the required circular, properly inclined orbit, which will take a couple of extra burns.

Meanwhile, on Earth, preparations have begun for the second launch of the ExoMars program. In March 2014, seven possible landing sites were selected for the ExoMars rover; on March 17, these were narrowed down to the final two. The selected landing sites, Oxia Planum and Mawrth Vallis, are located just north of the Martian equator. In this region, many extensively layered, clay-rich sedimentary deposits can be found, as well as a diversity of minerals suggesting that water may have been present in the past. They hence prove to be very interesting sites for a mission such as ExoMars. In the coming years, these two sites will be analyzed extensively, and a final decision on the actual landing site will be made about a year before the launch of the second mission in 2020. Although the Schiaparelli landing failed, ESA and Roscosmos will learn from it and, hopefully, make the ExoMars program a success.
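The aerobraking maneuver described above can be made concrete with the vis-viva equation, v = sqrt(GM(2/r - 1/a)). The sketch below compares the periapsis speed of the initial 250 x 98,000km orbit with the circular speed at 400km; the Mars constants are standard published values, and the snippet is only an illustrative back-of-the-envelope check, not actual mission analysis.

```python
import math

# Mars constants (approximate standard values)
GM_MARS = 4.2828e13   # gravitational parameter, m^3/s^2
R_MARS = 3389.5e3     # mean radius, m

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(GM_MARS * (2.0 / r - 1.0 / a))

# Initial highly elliptical orbit: ~250km x ~98,000km altitude (from the article)
r_peri = R_MARS + 250e3
r_apo = R_MARS + 98000e3
a_ell = 0.5 * (r_peri + r_apo)

# Target: ~400km circular orbit
r_circ = R_MARS + 400e3

v_peri = vis_viva(r_peri, a_ell)    # speed at periapsis of the initial orbit
v_circ = vis_viva(r_circ, r_circ)   # circular orbital speed at 400km

print(f"periapsis speed (elliptical): {v_peri / 1000:.2f} km/s")
print(f"circular speed at 400km:      {v_circ / 1000:.2f} km/s")
# The gap between the two is what aerobraking removes, a tiny amount
# per periapsis pass through the upper atmosphere, over many months.
```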

This development is reminiscent of the rise of the low-budget airlines at the end of the last century. Back then, the big airlines lacked a good response, which caused them to lose market share to their cheaper counterparts. This time, they want to respond more adequately to the intruders on their market, and it will prove interesting to see whether the heavyweights will be able to thwart the rise of low-cost transatlantic flights.



LEONARDO'S DESK

A MESSAGE FROM THE BOARD

Dear reader,

As announced in the previous issue of the Leonardo Times, the annual symposium of the VSV 'Leonardo da Vinci' took place last month at the Aula of the TU Delft. With the theme 'Challenge of Automation', several different topics were addressed. First off, Albert van Veen, Chief Digital Officer of Schiphol Group, elaborated on the challenges that come with automation in airports, followed by Winfried Lohmiller (Airbus D&S), who presented his thoughts on automation in aircraft. In the second block of the day, Frederik Mohrman (NLR), Professor Max Mulder, and Bart de Vries (KLM) talked about the effects of automation in their respective fields of work and whether we could envision a future with fewer pilots. The last block, on automation in air traffic control, was addressed by Professor Jacco Hoekstra, Sepehr Behrooz (Indra), and Frank Hommes (LVNL). The chairman of the day was Lt. Gen. (ret.) Alexander Schnitger, who summarized the presentations using sketches made during the symposium.

Apart from our annual symposium, in the last three months three departments of the VSV organized trips abroad to visit various companies. First, the Space Department


visited the Airbus Defence & Space and Safran Launchers sites located in Les Mureaux and Vernon, France. There they were given the opportunity to see, amongst others, the test sites for launchers, as well as the assembly line of the main stage of the Ariane 5. Secondly, the Master Department 'Apollo' departed for Stuttgart, Germany, in the beginning of March for an 'automotive trip', during which companies like Audi, Bosch, Daimler, and Porsche were visited to gain insight into the automotive industry and to explore career opportunities. Lastly, the Women's Department 'Amelia' visited Brussels Airlines and ASCO with female students from our faculty.

In February, the presentation days of the 'Delftse Bedrijvendagen' took place, during which over a hundred companies had the chance to meet TU Delft students and present themselves in the best possible manner. With a focus on start-ups and over 3,000 visiting students, the event was considered a success. Last but not least, another interview in the series of CEO interviews was held. This time we invited Warren East, CEO of Rolls-Royce, for a live connection interview with our students in the main lecture hall of the faculty. The topics that were tackled ranged from his career development to how he sees the future of Rolls-Royce in light of the recent Brexit.

Several interesting events have been planned for the coming months as well, starting with an interview with Jorge Ramos, CEO of Embraer Europe, Africa and the Middle-East, which will be conducted in April. This will be followed by the Aerospace Women’s Day, which will be held in the beginning of May and will provide female Aerospace students the opportunity to meet engineers and managers from the industry, explore career opportunities and develop leadership skills. Another event we are really looking forward to is the presentation on ‘The Future of Space’ by Professor Stoewer, former Managing Director of the German Space Agency and former Head of ESA’s System Engineering Department, to be held later on in May. In June, the Aviation Department of the VSV will organize an evening lecture on Electric Propulsion in Aviation in cooperation with the NVvL, to see what the future will bring for aircraft, followed by a special lecture to celebrate the Space Department’s 6th Lustrum. With winged regards, Casper Dek President of the 72nd board of the VSV ‘Leonardo da Vinci’


DSE

DESIGN SYNTHESIS EXERCISE

Prior to receiving a Bachelor's diploma in Aerospace Engineering, each student must complete the Design Synthesis Exercise (DSE). The DSE is a two-month group project during which students combine creativity and the knowledge obtained throughout their studies to formulate a complete aerospace design. Aircraft, spacecraft, drones, and turbines are the usual repertoire, and this year's designs were as versatile as always.

Fall 2016


LUMID Group 1 Dreams of a lunar colony have always featured high in our interplanetary ambitions. Still, our Moon has many hidden secrets and multiple space missions are needed before our knowledge allows us to take the next step. The LUMID mission adds yet another piece to the puzzle.

IT’S RAINING ROCKS Our celestial neighbor continues to feature prominently as a test platform for developing the technologies to take humans further than ever before. While ambitious plans for a colony on the Moon are already being whispered, the harshness of its environment is yet to be fully understood. The LUMID mission aims to stretch the limits of the newly emerging CubeSat technologies to characterize one of the lunar environment’s most worrisome hazards: the flux of micrometeoroids (MM) on its surface and vicinity. Weighing less than a gram and barely visible, these minuscule fragments of rock can release the equivalent kinetic energy of a .22 caliber bullet upon impact [1]. But do not let their dimensions deceive you; over time, they can degrade or destroy assets in orbit or on the ground. It therefore becomes important to know the severity of their presence on the lunar surface.
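The bullet comparison can be sanity-checked with the kinetic energy formula E = ½mv². The numbers below (a 1mg particle at 20km/s, a 2.6g .22 bullet at 330m/s) are illustrative assumptions for the check, not figures from the LUMID team.

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

# A ~1 mg micrometeoroid at an assumed impact speed of ~20 km/s
e_mm = kinetic_energy(1e-6, 20_000)

# A .22 bullet: roughly 2.6 g at ~330 m/s (typical published figures)
e_bullet = kinetic_energy(2.6e-3, 330)

print(f"micrometeoroid: {e_mm:.0f} J, .22 bullet: {e_bullet:.0f} J")
# A sub-milligram rock at orbital speeds already rivals a bullet,
# which is why the flux of these particles matters for lunar assets.
```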

THE "WHAT" AND THE "HOW" The LUMID mission – a piggyback on a larger, undefined Lunar Orbiter – consists of three identical, independently functioning CubeSats in a 15x300km elliptical orbit around the Moon, evaluating the MM hazards within the ±50° latitudes. This is where the majority of geological elements prime for supporting a human colony, such as lunar pits, lava tubes, and mineral resources, are located. Each spacecraft will carry an infrared thermal imaging camera, with the purpose of capturing the heat radiated by a MM upon impact on the lunar surface. To distinguish these thermal signatures from other sources of thermal radiation on the Moon, the instrument must operate in the shadow and thus functions only on the night side. This would account for only a fraction of the mission's duration; therefore, three satellites are deployed in different orbital configurations to ensure at least one can perform measurements at each instant. In addition, a group of impact detectors is placed on the satellite itself. Composed of two instrumented thin foils, the instrument is penetrated by an impacting MM and uses the marks on the foils to determine the MM's velocity and trajectory. The Moon's highly irregular gravity field necessitates frequent station keeping, which LUMID achieves with its green-propulsion ion thruster. The Lunar Orbiter will be used as a relay for communications with Earth. Deployable and rotatable solar arrays ensure undisputed power provision under the Sun, while a passive heat unit ensures its systems are neither frozen nor fried in the Moon's extreme thermal environment. A quad-core onboard computer simultaneously analyses/transmits measurement data and maintains spacecraft functionality. All subsystems combined compose a state-of-the-art piece of hardware that stretches the limits of what is currently thought possible for nanosatellites. Its success will establish the potential of microsatellites for interplanetary missions, opening new frontiers in the CubeSat market.

ALL GOOD THINGS COME TO AN END The mission will conclude with a controlled impact on a selected lunar region, to protect existing features of interest. Options for scientifically exploiting this final phase have not been explored, but are plenty, as NASA's recent Lunar Prospector, LCROSS, and GRAIL missions have demonstrated. At the end of its two-year mission, the LUMID system will have orbited the Moon more than 8,500 times, scanning an area 3.5 times the surface of the Earth. Its measurements are expected to improve existing statistical models, casting light on the MM phenomenon and settling decades of uncertainty over its severity. All of that, contained in an 18kg, 34x22x22cm package at a price tag of less than €20m.

References [1] D.K. Lynch, "The Leonid meteoroid shower: Lessons learned from 1997 and plans for 1998-2001", Acta Astronautica, vol. 47, no. 11, pp. 831-838, June 2000. DOI: 10.1016/S0094-5765(00)00134-X.

BEEBLEBROX Group 2 Airbus presented us with an interesting challenge: what if cargo space demands change due to an increase in carry-on baggage? Circular fuselages have been dominant, but new materials are on the horizon. Could the optimal cross-section of the fuselage shape change?

The project objective was to design a 200-passenger aircraft with a non-circular fuselage, with the goal of making it a viable design for the next five decades. With these requirements, we took off. Our client, Airbus, required that the research be mostly focused on the fuselage. However, the DSE topic implied that we were also required to design a complete aircraft. We decided to focus on the fuselage shape and material combination, while keeping a conventional tail and wing configuration.

For the fuselage, the design process went from the inside out, so to speak. This means that the process begins with the aircraft interior. The number of passengers, their carry-on luggage, and the required width of the aisle are the main contributing factors in finding the most efficient configuration. In our design, the six-abreast configuration with one aisle was chosen. Next, taking the interior into consideration, potential cross-sectional shapes were analyzed from an aerodynamic and structural perspective, both analytically and numerically. Some shapes were eliminated immediately; for example, the square shape was not feasible from a structural perspective. Because the fuselage must be pressurized in flight, the stresses become so high at the corners that the weight of the square fuselage increases out of proportion. Therefore, the final choice for the fuselage had to be made between a circular and an elliptical cross-section. Adding unconventional features, such as a Prandtl wing, or designing a blended wing body, was avoided at this stage. However, they could still be design options in the future.

Additionally, the performance, operations, and stability characteristics were calculated for the aircraft. Changing the fuselage mostly affects the weight and the aircraft operations, while performance and stability are not affected to a large extent. This led to the conclusion that the new design was feasible and even better than the circular one. With an elliptical fuselage shape, improved turbofan engines, and a conventional wing and empennage, the A342 Beeblebrox, named after Zaphod Beeblebrox, the Universe's most eccentric leader, was born.

The elliptical fuselage was still calculated to be roughly 1,500kg heavier. Despite this, it has a 1.9% increase in lift-to-drag ratio. It also has direct operating costs comparable to those of the Airbus A320neo. With current technology and materials, a circular fuselage is still preferable. Adding bionic material to the elliptical fuselage at certain points might relieve stress and improve the structural performance of the ellipse. Another improvement could arise from future lightweight materials, in which case the gain in lift-to-drag ratio would exceed the increase in weight.
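The corner-stress argument comes down to thin-walled pressure-vessel theory: a circular shell carries cabin pressure as pure membrane hoop stress, σ = p·r/t, while flat sides or sharp corners must carry it in bending, which drives skin thickness and weight up sharply. A rough sketch with assumed numbers (not the Beeblebrox team's figures):

```python
def hoop_stress(p_pa, radius_m, thickness_m):
    """Membrane hoop stress in a thin-walled circular pressure shell: sigma = p*r/t."""
    return p_pa * radius_m / thickness_m

# Illustrative, assumed values: ~60 kPa cabin pressure differential,
# 2 m fuselage radius, 2 mm aluminium skin.
sigma = hoop_stress(60e3, 2.0, 0.002)
print(f"hoop stress: {sigma / 1e6:.0f} MPa")  # carried purely in membrane tension
# A square corner cannot resist pressure in membrane tension alone; the
# resulting bending stresses are what make the square shape infeasible.
```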

IFSIX Group 3 Pushing the limits in aircraft design is a dangerous business. At a certain moment a test pilot has to fly a freshly built prototype for the first time. The IFSix is a flying laboratory capable of accurately re-creating flying experiences without exposing the pilot and materials to unnecessary risks.

for a maximum speed of Mach 0.65. A forward-swept upper-wing is added to generate extra lift forces, while extra vertical wings provide lateral forces. They are joined together and connected to the engine nacelles and to the main wing. A drag chute is used when needed to dramatically increase drag and more closely imitate space vehicles re-entering Earth’s atmosphere. At the heart of the IFSix is the new Flight Control System (FCS), installed between the cockpit and the control surfaces. The FCS continuously calculates how the simulated craft would behave under actual flight conditions and pilot inputs, creating a so-called ‘target behavior’. It then uses the original and new control surfaces to obtain the same behavior from the ‘physical’ IFSix. This way the test pilot will physically sense the accelerations, while force-feedback joystick and pedals will give back the feeling of fully reversible flight controls.

E

very land-based flight simulator is limited by its fixed position in space. It can reproduce with fidelity an aircraft cockpit, generate visual and aural cues, and even tilt around to simulate longitudinal and lateral accelerations, though only to a certain degree. For most aeronautical uses this is considered satisfactory. When simulation requires higher-fidelity

DESIGN

New computers and sensors are installed, taking the place of original luxury seats and interiors. A flight engineer coordinates the test profiles on-board and gathers reliable and consistent data for post-flight analysis. A set of safety trips, either automatic or manually activated, guarantees that augmented flight can be terminated at any moment and control can be safely brought back to the pilot, seated on the left side. A second test pilot sits in the back, taking over control halfway through the flight.

IFSix is based on the successful Embraer Phenom 300 business jet, redesigned with an innovative box wing that is optimized

six degrees of freedom, a new tool is needed: an in-flight simulator, a highly modified airplane capable of giving the pilot the real feel of a completely different aircraft, even one as big as the Antonov An-225 Mriya.

LEONARDO TIMES N°2 2017

13

MISSIONS

IFSix can simulate not only existing aircraft, but also those still on the drawing board. Flight control laws of new airplanes can be tested with a real pilot in the loop, possibly uncovering problems that would cause a prototype to fail, as happened with the Northrop YF-17 in 1974. IFSix can simulate damaged aircraft to check the pilot's ability to maintain control, for example by introducing asymmetric drag or lift loss due to ice accumulation, or by disabling specific control surfaces. Very rare and dangerous scenarios can be recreated safely, like the total loss of hydraulic power suffered by the DHL Airbus A300 over Iraq in 2003. Space vehicles such as the XCOR Lynx and the Scaled Composites SpaceShipTwo (Virgin Galactic) glide back to their home airport following a steep descent. With the aid of a drag chute, the IFSix can simulate the handling characteristics of such vehicles, accessing yet another share of the simulation market, related to space tourism.

COMPETITION

The European-based IFSix would be the most versatile in-flight simulator on the market, with fully developed six degrees of freedom and fuel-efficient turbofan engines. It has a range of 3,500km and an endurance of five hours, allowing both test pilots to train in typical two-hour sessions. The IFSix can train and evaluate test pilots at prices lower than those of our main competitor, Calspan. IFSix will have a purchase cost of 8.8 million euros and all-inclusive operating costs of 3,800 euros per flight hour, based on a projected use of 750 hours per year.

DREAM Group 4 What is the unique feature of helicopters that makes them so useful compared to airplanes? Their ability to hover! Current high-tech rotorcraft research and development programs focus primarily on noise reduction and increasing flight speed. However, with our design, we push hover endurance to the limits.

MISSION

Every year the American Helicopter Society (AHS) runs a student design competition that poses various challenges in rotorcraft design. This year the goal is to design a 24-hour hovering machine that can be developed and built within three to five years. The project objective statement can be formulated as: "Design within ten weeks with eleven students a rotorcraft that can hover for at least 24 hours while carrying a payload of 80kg and can be built within five years." One might think: what are the challenges of hanging in one place for a long time? For comparison, the longest flight time of a rotorcraft is 18 hours and 41 minutes, achieved in forward flight, which is more efficient than hovering [1]. The competition goal thus demands a great improvement over current technology. Additionally, the hover should be carried out at three different stations separated by at least 1km, which requires the rotorcraft to have forward-flight capability as well.

CONCEPTUAL DESIGN

In order to approach this task, a design option tree was created listing all the possible solutions for the subsystems. Some of them seemed either infeasible or irrelevant for the mission. The remaining ones were used to come up with four concepts that were further developed, analyzed, and compared. In order to get a good overview of the subsystems' performance with respect to endurance, as many variations as possible were used. The first concept featured a single-rotor configuration, a turboshaft engine, and anti-torque fins for control. Dual rotors were also studied, represented by a tandem and a coaxial rotor configuration. The tandem made use of a Wankel engine and electric swashplates for control, while the coaxial one was powered by a hybrid engine and controlled by flaps on the trailing edges of the blades. Last but not least, a quadcopter with a hydrogen fuel-cell power system, electric motors, and mechanical swashplates was also considered. The performance of these concepts was estimated against a set of criteria and traded off, and the coaxial configuration emerged as the optimal solution, especially due to its high power efficiency and low structural weight. For the powerplant, the turboshaft engine of the single-rotor concept was found to be the optimal choice, and the trailing-edge flaps were chosen as the control system.

FINAL DESIGN

Even though the best subsystem solutions were chosen during the conceptual design, some optimization and improvement was needed for the mission to be a success. Firstly, the rotor blades were designed with a hyperbolic twist and chord distribution, which resulted in a 20% reduction of profile drag in hover. The chosen engine, in this case a Rolls-Royce turboshaft, was improved by implementing a wave rotor, which led to a maximum output power of 746kW and a 12% reduction in fuel consumption. Additionally, an innovative transmission system, the Pericyclic Continuously Variable Transmission (PCVT), ensures that the rotors operate at the optimal rotational speed at any moment. Furthermore, the control system consists of Continuous Trailing Edge Flaps (CTEF), which make use of piezoelectric bimorph materials to change the shape of the trailing edge of the blades. They provide full control of the rotorcraft while simultaneously reducing vibrations and noise. Last but not least, in order to keep the structural weight to a minimum, the fuel tank was designed to cope with the landing loads, so no dedicated landing gear was required.

CONCLUSION

Both the hover and the forward-flight performance of DREAM were analyzed. For optimal performance, the required power was found to range between 125kW and 525kW; this variation is accommodated by the PCVT discussed above. All the mentioned analyses, improvements, and optimizations led to a genuine advance over current technology: a total hover endurance of 25.3 hours.
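The power levels quoted above can be loosely cross-checked with momentum theory, which gives the ideal induced power to hover as P = T^1.5 / sqrt(2·rho·A). The numbers below (mass, rotor radius, figure of merit) are illustrative assumptions, not the DREAM design values:

```python
import math

# Rough momentum-theory estimate of hover power (illustrative numbers).
rho = 1.225             # air density at sea level, kg/m^3
mass_kg = 1_200.0       # assumed all-up mass
radius_m = 6.0          # assumed rotor radius
figure_of_merit = 0.75  # typical hover efficiency of a good rotor

thrust = mass_kg * 9.81
disk_area = math.pi * radius_m**2
p_ideal = thrust**1.5 / math.sqrt(2 * rho * disk_area)  # ideal induced power
p_hover = p_ideal / figure_of_merit                     # estimated shaft power

print(f"ideal {p_ideal / 1e3:.0f} kW, with figure of merit {p_hover / 1e3:.0f} kW")
```

The strong dependence on disk loading (thrust over area) is why a large, slow rotor, kept at its optimal speed by a variable transmission, is the natural route to long hover endurance.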


AIRBUS HYDRAULICS Modeling and simulation of hydraulics in Airbus aircraft C&O

Apeksha Amarnath, Engineer- Simulation & Physical Systems, Airbus India



AIRBUS

Figure 1 - Ironbird setup in Toulouse.

A hydraulic system is used in aircraft to enable the movement of control surfaces, brakes, doors, landing gear, and a variety of other components, using pressurized fluid pumped through the system. Airbus is at the cutting edge of developing tools to ensure it is safe and reliable.

Hydraulic systems make up a large part of an aircraft's systems. Without them, it would be very difficult for a plane to get off the ground. This vast and complex set of systems can generally be compartmentalized into four areas: fluid storage, hydraulic power source, transmission, and monitoring and control. Each of these is critical to the total hydraulic system and plays a specific role in making sure everything runs smoothly. First, the fluid storage subsystem: hydraulic fluid is stored in a reservoir located on board, and a network of pipes circulates this fluid to where it needs to go over the course of a flight. Next, the fluid is pumped into the hydraulic system, specifically the actuators, at high pressures ranging from 200 to 350 bar. The primary pump used to accomplish this is the Engine-Driven Pump. As the name suggests, it gets its power from the engine shaft, to which it is coupled via gearing. It is designed in such a way that it can be depressurized in flight in case of failures or other emergency situations. This comprises the hydraulic power source component of the system.

The hydraulic actuators make use of the pressurized fluid by converting the pressure energy into the desired movement, making up the transmission portion of the operation. These actuators then return low-pressure fluid back to the reservoir, completing the cycle. For redundancy, aircraft have multiple hydraulic systems that supply the same parts. In the event that one fails, another can continue supplying hydraulic power to the actuators, allowing the aircraft to fly normally. For instance, the A330 has three hydraulic systems: green, blue, and yellow, while the A350 has only two, namely green and yellow. In general, the green system is the main hydraulic power supplier in flight, while the rest act as redundancies in case of faults and emergencies or are used for ground operations. In some aircraft, an Electric Motor Pump (EMP) adds one more layer of redundancy. In the A350, the EMP can only be used on the ground, to maintain the hydraulics of the aircraft while the engines are not running.

HYDRAULIC SIMULATION

To aid in testing, Airbus HQ in Toulouse houses a variety of hydraulic set-ups for specific families of aircraft. This is called the Ironbird and encompasses a system of actual pipes, pumps, and fluid used as a hardware test bed, shown in Figure 1. At Airbus India, the hydraulics team, part of the simulation department, works on the Virtual Ironbird. This involves modeling and simulation of the hydraulics of various aircraft, such as the A330, A340, A350-900, and A350-1000. The main aim is to develop hydraulic models with functionalities and behaviors corresponding to the Ironbird, thereby representing the hydraulics of an actual aircraft. These models are then used for research, development, testing, and pilot training. Figure 2 shows a schematic diagram of the Ironbird. Hydraulic modeling is done mainly using a tool called AMESim, a multi-physical modeling environment. It is physics-based, comparable to, for instance, Simulink, which is signal-based. Modeling is done using scripts that are run on an in-house tool along with the model, which is converted to C-code. AMESim has components such as motors, pumps, and valves, predefined with their respective equations. Some behaviors specific to Airbus aircraft, such as the engines or pumps, are implemented with the use of numerical data or C-code. This means test data from running the actual hardware is used to form an optimized look-up table for numerical methods.

Figure 2 - Schematic diagram of the Ironbird.
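The look-up-table idea mentioned above, tabulating rig measurements once and interpolating them at simulation time instead of re-deriving physics, can be sketched as follows. The pump curve here is invented for illustration, not Airbus test data:

```python
import numpy as np

# Hypothetical rig measurements of a pump: delivered flow vs. shaft speed.
speed_rpm = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0])
flow_lpm = np.array([0.0, 12.0, 26.0, 41.0, 50.0])  # 'measured' on the rig

def pump_flow(rpm):
    """Linear interpolation in the test-data table, clipped at the table ends."""
    return float(np.interp(rpm, speed_rpm, flow_lpm))

print(pump_flow(2500.0))  # query between the 2000 and 3000 rpm test points
```

At run time this is a handful of arithmetic operations per query, which is what makes table-driven component models cheap enough for real-time use.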

MODEL DEVELOPMENT

To standardize the development process and documentation, a set of requirements and guidelines called AP2633 has been created. This provides standards for the process, documentation, and software in terms of encapsulation and model portability. Under the Airbus standard for simulation, this leads to a few general requirements: a normalization process for documents and interfaces, aircraft modularity, and the requirement that test cases are generalized. The modeling process starts with defining which functionalities are required, implementing them separately, and running sub-systems within the complete model. This process of development and testing goes on for about two years, until the subsystem simulation behaviors are satisfactory. The next stage is to integrate the subsystems and deliver a package that can be run on aircraft simulators, called Aircraft-1. These simulators do not have the actual hardware that would be present in a cockpit,

The hydraulic system represents a significant part of an aircraft's overall operation. It is crucial that the hydraulic systems are functional and can withstand almost any malfunction that comes their way. Airbus, with its teams in both Toulouse and India, ensures that its hydraulic systems are always ready for flight and perform to the highest of standards.


The system is modeled the same way as the system in the aircraft: a closed loop with high-pressure fluid flowing from the pumps and low-pressure fluid flowing back to the reservoir. Along with the normal behavior, malfunctions are also implemented within these models to test the behavior of the various parts of the system in case of a fault. Examples of such malfunctions are external leakages or valve jamming. There are two sets of models created for each system: a real-time model and a detailed model. The real-time model is not as high-fidelity as the detailed model, as that would require longer computing times; as the name suggests, it runs in real time. The detailed models, on the other hand, are used mainly for research and detailed analysis, as they have a higher fidelity and run on a variable-step solver. These models include the inertial behavior of the hydraulic fluid and detailed models of the pipes. The most difficult work in real-time model development is striking a balance between model fidelity, numerical stability, and execution time. This often requires many iterations and the exploration of alternative modeling techniques.
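The fidelity-versus-execution-time trade can be illustrated with a toy stand-in for the two model classes: the same first-order pressure dynamics integrated with a coarse fixed step (real-time style) and with a much finer step (playing the role of the detailed model). The hydraulic constants are invented:

```python
# Toy illustration of the fixed-step / fine-step trade (invented constants).

def simulate(dt, t_end=1.0, p0=0.0, p_supply=300.0, tau=0.15):
    """Explicit-Euler integration of tau * dp/dt = p_supply - p (pressure in bar)."""
    p, t = p0, 0.0
    while t < t_end - 1e-12:
        p += dt * (p_supply - p) / tau
        t += dt
    return p

coarse = simulate(dt=0.02)    # 'real-time' model: 50 cheap steps
fine = simulate(dt=0.0005)    # 'detailed' stand-in: 2,000 steps
print(f"coarse {coarse:.2f} bar, fine {fine:.2f} bar")
```

The coarse run is forty times cheaper and close enough here, but a step too large for the system's time constant would go unstable, which is exactly the stability-versus-cost balance the text describes.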

they only have a cockpit environment which is simulated with displays. The model is then run to ensure everything is satisfactory when cockpit controls are incorporated. After passing this stage, the next task is to run the model integrated with other systems such as the electrical and fuel systems, which are simulated in real time using the Ironbird. This simulator, called Aircraft-0, connects the cockpit with the Ironbird. Because of this, the interaction between the simulator and the aircraft hardware can be directly observed. Once this stage is cleared, the model can be supplied to companies and research institutions for pilot training purposes or further research. The model is encapsulated in a package, which contains the C-code of the AP2633 compliant model.

Figure 3 - Model development process.


BIG PLANES, BIG PROBLEMS Why are jumbo jets falling out of favor? C&O

Katharina Ertman, Editor Leonardo Times

The future does not look great for jumbo jets. United announced the retirement of their 747 fleet, Qantas recently stated it was cancelling its remaining eight orders for the A380, and Singapore Airlines is not renewing the lease on its first A380. What caused the market to turn against these planes?

Nearly fifty years ago, the world was introduced to the future of air travel: the Boeing 747. The largest passenger aircraft of its time, its iconic double-decker design and high capacity allowed more people to experience flight. It was developed in response to overcrowded airports, which were servicing large fleets of smaller aircraft at a time when commercial aviation was beginning to boom. The development of the 747 also brought on the advent of a variety of technologies still used on aircraft today; for instance, the high-bypass turbofan, which allowed for more powerful engines with lower fuel consumption. The result was an airplane that dominated

the long-distance flight market for years. Freighter versions of the aircraft were also developed in order to better meet the demands of increasing globalization. It would not be until 2005 that Boeing's main competitor, Airbus, would debut its response to the ever-popular 747: the A380. The project, called A3XX in its infancy, began its early research phase in the late 80s, and, as is typical for the aviation industry, things moved quite slowly. What resulted from this long endeavour, however, was the world's first double-decker plane with an upper deck stretching the full length of the fuselage. It boasted the title of "largest passenger aircraft", with the ability to carry up to 868 passengers in an all-economy layout. It was designed to serve a market of heavily-traveled long-haul routes, and to give airlines the option of operating fewer flights per day while meeting passenger demand. Just as the 747 was developed to meet increasing demand for long-distance air travel, the A380 was conceived on the prediction that air travel volume would increase significantly in the coming years, and that carriers would be keen to meet that demand on long-haul routes. With the exception of the 2008 recession and its aftermath, air traffic has indeed increased steadily in recent years, though not as rapidly as Airbus originally predicted; we'll come back to that. Poised to become the dominating forces



in commercial aviation, the future of the 747 and A380 seemed rosy. The 747 was an incredibly successful aircraft, both in the passenger and cargo sectors, and the A380 looked to be a worthy competitor. However, this outlook was soon clouded and the era of jumbo jets seems to have come to a halt. So what happened? The short answer is money, but the long answer encompasses a wide variety of factors.

ALL GOOD THINGS MUST COME TO AN END

When the 747 came around, despite the fact that it was filling a major gap in the aviation market, its introduction was plagued by economic downturn and the 1973 oil crisis. It wasn't until Boeing developed the 747-400 that the jumbo jet saw its day in the sun. The numerous modifications, including the installation of a two-crew glass cockpit, winglets, and improved engines and fairings, made the aircraft more appealing to carriers. This new variant arrived at a time when the global economy was strong and air travel was becoming more accessible to the masses. Cargo variants of the 747 also became popular, with exceptional performance for large payloads. This combination of a good market environment and a new-and-improved version of a plane already on airlines' radar led to booming success for Boeing, which delivered 442 747-400 and 126 747-400F aircraft. However, as global markets shifted and geopolitics began to take its toll, the economic outlook began to sour. Since 1998, the price of crude oil has risen steadily. In 2008, the US stock markets crashed, sending out economic shocks that were

felt across the world for several years. With rising fuel costs and potential travelers looking to stay home instead of taking extravagant trips, airlines faced a decline in passenger volume and looked for ways to stay afloat. To travelers, the most obvious changes were checked-bag fees or charges for choosing a seat. But behind the scenes, airlines also began to look at their fleets and the demand on their routes. The 747 and A380 were designed on the assumption that carriers would want planes that could transport large volumes of passengers over long distances. But, and this is particularly true of the A380, the demand was simply not there. Jumbo jets are extraordinarily expensive, not only to purchase but to maintain as well. As of 2017, an A380 rolling off the factory floor in Toulouse costs $436.9 million, nearly


[Timeline graphic: 1966 – Boeing publicly announces the 747 project with Pan Am as the launch customer; 1969 – the 747 makes its first flight; first Boeing 747-100 rolls off the factory floor; first commercial flight from New York to London; the 500th 747 is manufactured; debut of the 747-300 and 747-400 variants and introduction of the 747 Freighter; 1988 – Airbus engineers begin preliminary research into an ultra-high-capacity airliner to compete with the Boeing 747.]

twice the price of its wide-bodied cousin, the A330. Similarly, Boeing's newest variant of the 747, the 747-8i, is priced at $352 million, while the 787 Dreamliner comes in at $224.6 million and the 777 starts at $261.5 million. This is, admittedly, a bit of an apples-to-oranges comparison, but the reality is that a larger plane is going to cost the operator more upfront. These figures do not even include maintenance costs over the lifetime of an aircraft, which amount to millions. This, combined with decreased demand for expanded long-haul service, is a major deterrent to airlines ordering jumbo jets these days.

The operating cost of these jumbo jets is one aspect of the problem, but it goes hand-in-hand with concerns over not being able to fill planes with enough passengers to offset those costs. An airline, at least in today's market, would rather fly several smaller planes on a particular route each day with consistently full flights than take the risk of not selling out a route serviced once a day by an A380. Back when the 747 was introduced, it was one of the only planes that could complete long-haul flights, as it had four engines; most other planes at the time had just two, and the Extended Operations (ETOPS) criteria did not allow most twin-engined aircraft to cross major oceans without stopping. These days, however, twin-engined aircraft have become more reliable, and four-engined aircraft have simply become more of a liability in terms of maintenance cost and downtime. The result is decreased demand for aircraft that are a financial gamble when there are several less expensive alternatives that are efficient and fit the current trends in air traffic passenger volume.

[Timeline graphic: 1993 – Boeing and Airbus begin a joint study into a Very Large Commercial Transport (VLCT).]

THE A380: DREAMS OF A BYGONE ERA

It is the most ambitious passenger jet project the world has ever seen, but no one wants it. One might look at the Boeing 747 and the A380 and think that, as the 747 gets older, a formidable competitor that could challenge the market for years to come would play well with the airline industry. But the exact opposite has happened. It is understandable, from an aircraft lifespan perspective, that the 747 is reaching the end of its program life and being replaced by its twin-engined, more efficient siblings. However, the A380 is practically new to the market. So what gives? One can speak generally about the decline in jumbo jet orders, but a few factors are specific to the A380, and these are worth mentioning, if only for insight into the future of the aviation manufacturing marketplace.

A major contributor to the lack of enthusiasm over the A380 is the fact that it arrived fairly late to the game. By the time the A380 made its debut, the 747 had already been dominating the market for decades. At the same time, both companies were rapidly developing the very aircraft that would seal their jumbo jets' demise: the 787 Dreamliner and the A350. These aircraft, positioned to take on the long-haul sector and succeed older types such as the 767 and A330 respectively, would end up being more attractive to airlines than their bloated counterparts. The A380 also made its debut just as the global recession was taking hold. This led to a sharp decrease in air travel, and combined with rising fuel costs, this larger, more expensive aircraft, which required filling nearly every seat to break even and had significantly higher maintenance costs, became extraordinarily undesirable to airlines.
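The break-even pressure described above comes down to simple arithmetic: the load factor at which ticket revenue covers the cost of the trip. All figures below are invented for illustration; real airline cost structures are far more involved:

```python
# Back-of-the-envelope break-even load factor (all numbers invented).

def breakeven_load_factor(trip_cost, seats, avg_fare):
    """Fraction of seats that must be sold for fares to cover the trip cost."""
    return trip_cost / (seats * avg_fare)

# A big four-engined jet vs. a smaller twin on the same hypothetical route.
jumbo = breakeven_load_factor(trip_cost=1_100_000, seats=525, avg_fare=2_400)
twin = breakeven_load_factor(trip_cost=600_000, seats=300, avg_fare=2_400)
print(f"jumbo must fill {jumbo:.0%} of seats, twin {twin:.0%}")
```

Even a modest gap in break-even load factor matters enormously when demand on a route fluctuates, which is why airlines prefer the aircraft that sells out reliably.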


[Timeline graphic: 1994 – Airbus announces its independent development of a VLCT, called the A3XX; 2005 – first of five test A380s unveiled at Toulouse and first flight of the A380 conducted; delays due to wiring issues set delivery dates back; the newly restructured Airbus approves €8.8 billion to continue the A380 project; first A380 delivered to Singapore Airlines.]

The A380, with its preliminary design studies beginning in the early 90s, was also built for a market that never materialized. In the late 80s and early 90s, air travel was booming, and the A380 would have fit nicely into a market where air travel continued to grow exponentially, but that simply has not happened. That is not to say it won't happen in the future, but it seems unlikely at this point. Airbus also failed to launch a freighter version of the aircraft, so its reach was limited to the handful of large carriers who regularly require ultra-high-capacity aircraft.

HOPE FOR A FUTURE?

Despite the sluggish pace of new orders for these jumbo jets, there is a silver lining for both aircraft.

[Timeline graphic: 747-8 Intercontinental and Freighter program is launched; 1,500th Boeing 747 delivered to Lufthansa.]

The A380 program, for instance, is essentially being propped up by Emirates, which has firm orders for 142 of the type. The company has stated that it aims to provide the ultimate luxury in air travel (to those who can pay, of course), at a time when most airlines are pinching pennies and cutting back on services to their frequent and infrequent fliers alike. Features such as the Skycruiser cabins in first class, with onboard shower and spa facilities and an in-flight bar service, are perfect for an aircraft with enough real estate to make such provisions possible. Whether this is sustainable remains an open question, and the oil market will probably have the final say; for the moment at least, the A380 has some supporters. The Boeing 747, meanwhile, seems to have found its niche in the market, mainly in the cargo sector. 747-8 Freighter aircraft are selling

[Timeline graphic: Airbus suspends work on the A380 Freighter; orders from FedEx and UPS are cancelled.]

quite well, with 88 orders for the variant, compared to 48 for its passenger-carrying sibling. The 747 remains one of the most popular aircraft for cargo-hauling, with the family carrying about half of the world's air cargo. A more efficient 747 is an attractive option for cargo airlines, as is the retrofitting of passenger 747s retiring from passenger service. Because of the more relaxed restrictions on noise and safety requirements for freighters, many 747s find new life in the cargo sector. So even if the 747 no longer curries favor with human air travelers, it will probably find a continued home with many a tulip being shipped to the far reaches of the globe every spring. There is also a chance that the market will change in the future. The A380 had the unfortunate timing of starting deliveries right as the economy was beginning to sour. This is not unlike the 747, which was not very popular until the -400 variant was released twenty years after the 747-100 came into the world. The 747 faced major problems in the 70s due to an economic downturn and the effects of the Cold War, but Boeing managed to turn the program around and develop a highly successful aircraft. This leaves some glimmer of hope for the A380: there is nothing to say that the market won't change and these large-capacity planes won't become fashionable again, to the benefit of both the A380 and the Boeing 747. But only time will tell.



AEROELASTIC TAILORING Multi-Fidelity Composite Structure Optimization

ASM

Krishna Kant Ratan Parkhe, MSc Student Aerospace Engineering, TU Delft

The mesh showing the discretization of the wing for the high-fidelity CFD.

Optimization of structures has been, and will continue to be, a very important area of research. With increasing computational resources, optimizations are being performed even in the early stages of the design process. In an early stage, though, the focus is mostly on getting quick results. In this regard, a multi-fidelity optimization model has been developed which delivers higher accuracy at a lower computational cost.

Fiber-steering has been an area that has caught the fancy of many researchers in recent times. Unlike traditional composites, which are reinforced using straight fibers, new kinds of composites are being developed that exploit the directional nature of the stiffness of the composite material. These advances have been possible mainly due to improvements in the manufacturing of composite materials: with the development of advanced tape-laying and fiber-placement machines, complex fiber paths can be realized. Aeroelastic tailoring is the process of directionalizing the stiffness in a way that affects the aeroelastic deformations favorably, usually by means of load alleviation, which can potentially result in a

lighter design. The importance of aeroelastic tailoring is continuously growing as aircraft wings become highly flexible (and hence more aeroelastic). Incorporating aeroelastic considerations early in the design process has many benefits, including a reduction in the number of iterations later in the optimization process. Accurate predictions, however, require significant computational resources, and the resources needed naturally depend on the complexity of the analysis. With most commercial aircraft operating in the transonic regime, the associated computations are complex: in order to predict the flow phenomena that occur at such conditions, accurate CFD results are needed, which can be computationally expensive in the early stages of design. To overcome these restrictions, various multi-fidelity approaches are gaining wide acceptance. By definition, a multi-fidelity method is one in which multiple methodologies are adopted, with favorable features from each carried into the chosen approach. One such method is implemented in this thesis. It comprises a low-fidelity model that does the bulk of the calculations and a high-fidelity model that provides corrections to fill in the deficiencies of the low-fidelity model. The aim is to achieve a high-fidelity result without performing extensive computations. In the context of aeroelastic tailoring, which is an optimization routine, performing such high-accuracy analysis at every step is prohibitively expensive owing to the numerous analysis runs needed for every single set of design variables. Hence, a multi-fidelity approach was selected with low- and high-fidelity aerodynamic models, both of which make certain assumptions on the flow, as discussed below.
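The low-fidelity/high-fidelity split described above can be sketched in its simplest additive form: evaluate the cheap model everywhere, evaluate the expensive model once at a reference point, and carry the difference along as a correction. The two "models" below are arbitrary stand-in functions, not aerodynamic codes:

```python
import math

# Toy additive-correction sketch of a multi-fidelity method.

def low_fi(x):
    """Cheap, slightly biased stand-in model."""
    return math.sin(x)

def high_fi(x):
    """Expensive, accurate stand-in model."""
    return math.sin(x) + 0.1 * x

x_ref = 1.0
correction = high_fi(x_ref) - low_fi(x_ref)  # one high-fidelity evaluation

def corrected(x):
    """Low-fidelity prediction shifted by the high-fidelity correction."""
    return low_fi(x) + correction

for x in (0.8, 1.0, 1.2):
    print(f"x={x}: corrected {corrected(x):.3f} vs high-fi {high_fi(x):.3f}")
```

The corrected model is exact at the reference point and stays close nearby, while costing essentially one cheap evaluation per query; practical schemes refresh the correction as the optimizer moves away from the reference.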

LOW-FIDELITY MODEL

The low-fidelity model is, as the name suggests, a model of relatively lower accuracy. What a loss of accuracy implies depends on what is being studied. Here, the model in question is an aerodynamic model, so its job is to provide the aerodynamic load distribution over the wing (forces and moments). The low-fidelity model represents the aerodynamics by the vortex lattice method (VLM), a potential-flow method. Put concisely, this model assumes an incompressible, inviscid, and irrotational flow. Compressibility is accounted for by the Prandtl-Glauert correction. This correction, though, cannot account for associated effects such as the presence of a shockwave.
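The Prandtl-Glauert correction mentioned above scales an incompressible pressure coefficient by 1/sqrt(1 − M²). The sketch below (with an illustrative Cp value) also shows why it breaks down near Mach 1, which is precisely why it cannot capture shockwaves:

```python
import math

def prandtl_glauert(cp_incompressible, mach):
    """Compressible Cp estimate: Cp = Cp_inc / sqrt(1 - M^2), subsonic only."""
    if not 0.0 <= mach < 1.0:
        raise ValueError("correction is only valid for subsonic Mach numbers")
    return cp_incompressible / math.sqrt(1.0 - mach**2)

cp0 = -0.45  # illustrative incompressible pressure coefficient
for mach in (0.3, 0.6, 0.8):
    print(f"M={mach}: Cp = {prandtl_glauert(cp0, mach):.3f}")
```

The magnitude of Cp grows without bound as M approaches 1, whereas a real transonic flow instead develops a shock, information only a model like the Euler solver can supply.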

HIGH-FIDELITY MODEL
The high-fidelity model in this case is the Euler model, which treats the fluid as inviscid while placing no assumptions on the rotational and compressible nature of the flow. It is used as a corrector that removes the deficiencies of the low-fidelity model by also accounting for the presence of shock waves.

AEROELASTIC MODEL
The aeroelastic model has a partitioned nature: the aerodynamic and structural models are separate, and information on forces and displacements is exchanged between them through suitable interpolation schemes. In this case, two interpolation schemes are needed: one for the interpolation of data between the structural model and the aerodynamic surface, and a second for the transfer of data from the aerodynamic surface to the whole volume (of the CFD analysis). For the first interpolation the nearest-neighbour scheme is used, and for the second a spline-based scheme. In the nearest-neighbour scheme, for each node at which data is needed, the algorithm searches for the nearest node that has the data and assigns that value to the requesting node. In the spline-based scheme, interpolation matrices are formed from the spatial configurations of the structural and aerodynamic nodes. As the wing structure deforms continuously through the calculation, these matrices change as well. However, updating them after every iteration is computationally expensive, as their size is on the order of the total number of degrees of freedom, which can easily be in the hundreds of thousands. A choice has to be made on when to update them; in the current research it was decided to update them every fourth iteration.
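The nearest-neighbour transfer is simple enough to sketch in a few lines. This is an illustrative brute-force version with made-up node coordinates, not the thesis implementation:

```python
import numpy as np

def nearest_neighbour_transfer(donor_xyz, donor_values, target_xyz):
    """Give every target node the value of its geometrically closest donor
    node. Brute-force O(n*m) search; large meshes would use a k-d tree."""
    d2 = ((target_xyz[:, None, :] - donor_xyz[None, :, :]) ** 2).sum(axis=2)
    return donor_values[np.argmin(d2, axis=1)]

# Two structural nodes carrying displacements, queried from nearby
# aerodynamic surface nodes (coordinates are hypothetical).
donors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
values = np.array([0.01, 0.02])          # e.g. vertical displacements in m
targets = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0]])
transferred = nearest_neighbour_transfer(donors, values, targets)
```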

AEROELASTIC TAILORING
In the aeroelastic tailoring, a gradient-based optimization is performed. The low-fidelity model is used to calculate the gradients, as evaluating high-fidelity gradients is computationally very expensive. The objective function of the optimization routine is the weight of the wing. The design variables are the lamination parameters and the ply thicknesses. Lamination parameters are derived quantities that describe the laminate layup using a few independent parameters (a maximum of twelve). They are introduced because they greatly reduce the total number of design variables, which makes them attractive in an optimization sense; if the ply angles are used directly instead, the number of design variables grows rapidly with the number of plies. The constraints of the optimization are on the ply strains, on aeroelastic stability, and on the lamination parameters (which must lie between -1 and 1). The optimizer used is the Globally Convergent Method of Moving Asymptotes (GCMMA). This method guarantees that a solution will be found regardless of the initial design point. It does not, however, solve the issue faced by any gradient-based optimizer: the tendency to get stuck in a local optimum. The initial design was therefore chosen to be quasi-isotropic, giving the initial laminate some strength in all directions so that the starting point is not infeasible.

The optimizations are performed using two approaches: first with a purely low-fidelity solution and then with a multi-fidelity solution. To see how the deficiencies of the low-fidelity solver affect the optimization results, they are carried out at subsonic and transonic speeds. The reasoning is that the low-fidelity and high-fidelity solvers should agree at subsonic speeds, with the errors of the low-fidelity model appearing mostly in the transonic regime. The results of the optimization show interesting behavior. The designs at subsonic speeds are the same for both the low- and multi-fidelity approaches, as expected. In the transonic case, however, one would expect that, with the presence of a shock wave in the multi-fidelity model, the lift on the wing would be lower and hence the design lighter, as a lighter load needs to be sustained. The results show instead that the multi-fidelity design is slightly heavier. This is because in the multi-fidelity design, where a shock wave is present, the lift shifts towards the rear of the wing and hence also produces a torsional moment. To bear these loads, the optimizer adds +/- 45 degree plies, which add some weight to the wing.

In conclusion, it was observed that, in transonic flight regimes, a multi-fidelity approach becomes important, as the low-fidelity (potential flow) model is lacking in its predictions and capabilities. The resulting design is not necessarily lighter, which is an even more important deduction: the design obtained from the low-fidelity model was actually under-designed, which can be potentially catastrophic. A strong correlation between the need for a multi-fidelity approach and the torsional moment was observed. To decisively confirm or refute this, the analysis should be repeated with a supersonic airfoil instead of the current one (NACA 0012). This would delay the onset of the shock wave as well as the resulting rearward shift of lift. This in turn would result in a lower torsional moment, as the lift is almost uniform in the chord-wise direction. Such a study would enable stronger conclusions to be drawn. Regardless, it can be said that there are cases (like this one) where the low-fidelity model is inadequate and a multi-fidelity approach is needed.
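The lamination parameters used as design variables in the tailoring can be made concrete. For the in-plane (membrane) behaviour of a laminate with equal-thickness plies, four such parameters summarize any number of ply angles; this is an illustrative sketch, not the thesis code:

```python
import math

def inplane_lamination_parameters(ply_angles_deg):
    """In-plane lamination parameters V1..V4: trigonometric averages of
    the ply angles. Each lies in [-1, 1], so a stack of any size is
    summarized by a handful of bounded design variables."""
    th = [math.radians(a) for a in ply_angles_deg]
    n = len(th)
    return tuple(sum(f(k * t) for t in th) / n
                 for f, k in ((math.cos, 2), (math.sin, 2),
                              (math.cos, 4), (math.sin, 4)))

# A quasi-isotropic stack, like the initial design used in the thesis,
# has all four in-plane parameters equal to zero.
v = inplane_lamination_parameters([0, 45, -45, 90])
```

A ten-ply laminate optimized by ply angle would need ten design variables per panel; the lamination-parameter description stays fixed in size no matter how many plies are added.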

LEONARDO TIMES N°2 2017



KITE POWER
The world of airborne wind energy
WIND ENERGY

Maria Mathew, Editor Leonardo Times

As concerns grow over the use of coal and natural gas for our energy needs, we are increasingly turning to renewable forms of energy. Conventional wind turbines are one of these. Now, however, groups such as Kitepower are expanding the scope of wind energy to include airborne wind energy. Though in its infancy, this new subset of wind energy has already achieved quite some success.

Kitepower, a start-up with its roots right here at TU Delft, is on the frontlines of innovation in the airborne wind energy sector. This sector develops cost-effective alternatives to the conventional wind turbines that usually come to mind when we think of wind energy. Kitepower works closely with both TU Delft and its industrial partners to continue advancing research in this field and make the technology a viable option for the future. Wubbo Ockels, a Dutch physicist, was most famous for being the first Dutch astronaut in space. After the end of his Earth-orbiting career, he came to TU Delft to work as a professor in the aerospace faculty. It was here that he first had the idea of what eventually developed into what we know today as Kitepower. Joep Breuer and Pietro Faggiani, engineers from the current Kitepower team, sat down with the Leonardo Times to tell us more about the company, its origins, and the progress they have achieved.

Could you tell us how and when Kitepower originated?
Joep: The idea of producing energy with kites is quite old. It started with the paper by M.L. Loyd in 1980. At the end of '96 or so, Wubbo Ockels was at the aerospace faculty of TU Delft and he started the group to research this topic. He initially had a different concept in mind, though. I think he started with the Altaeros idea, which is a stationary blimp with a propeller inside. Consequently, he thought it would be better if it moved, and that led to the laddermill concept, which is a chain of kites that would go up at the front and down at the back. This eventually evolved into what we now call the "yo-yo" principle. Those were the early beginnings of the concept. As the potential of the idea was spotted, the group grew bigger, so it is hard to pinpoint the exact origin of Kitepower.

So it was initially a research group within the university?
Joep: Right. And it still is. Although we have the company "Kitepower", there is also the website kitepower.eu, which is hosted by Dr. Roland Schmehl. He is the professor who took over what Wubbo Ockels had started. The university also owns a part of the company and we work very closely together. We have a lot of students who do their thesis or research with us.


But it is now a start-up?
Joep: I think there was already a first push, called Kitepower 2.0, with an external investor to take this to the next level. That was the old team. Our current CEO, Johannes Peschel, was part of that.
Pietro: That was two years ago. Then this investor didn't want to go further and the project was almost shut down. In order to find funding, the team applied for a European grant. The chances of getting it were really low and the team dissolved a bit. But eventually they got it, the part of the team that was still around started over with the grant, and that is how we are here.
Joep: The company started last year with Johannes, then it slowly grew and now we have five employees, a lot of students, and some freelancers. I think the good thing about the European project is that we are not doing it alone, but in cooperation with other partners. One of them is TU Delft, because we have a good relationship with the university and they partly own the company. We also have some industrial partners (Genetrix for kite development, Dromec for drives and winches, Maxon Motor GmbH for motors, gears, and electronics) who work on manufacturing the kites and the various other parts.

(To Joep) I heard from Pietro that you have worked with Wubbo Ockels. Could you tell me what that was like?
Joep: Well, I did not work with him per se. Rather, he was my professor. Also, he's one of the only Dutch astronauts. He was not an easy man to work with, but he was a very inspiring man and he had ideas that were ahead of his time. One of these ideas was Kitepower, and it took another ten or more years to make it a reality.

Are there any other companies in the EU trying to develop the same thing? Do you have any competition in the field?
Joep: There are quite some people working on this. Ten years ago, when I did my Master's thesis and when Wubbo was still here, there was actually nobody, but it has slowly started growing. Now there are around forty institutes, companies, and universities worldwide working on this. (They refer to the image "Groups participating in airborne wind energy".) Some have disappeared, but some new ones have sprung up as well. I think we can say that in total we have fewer than ten serious competitors, though. They all focus on different markets. We have a couple of groups that focus directly on megawatt-scale applications. Then we have companies that focus on smaller-scale generation for off-grid applications, remote islands, and emergency situations, mostly as an add-on to existing alternatives. That is also the market we're focusing on, and there we probably have only a handful of competitors. I think we really have an edge to get our product to be the first on the market. I also believe we have a very strong team, with the industrial partners and TU Delft being the group that has been working on this the longest.

Groups participating in airborne wind energy research and development activities as of 2013. (Image: U. Ahrens, M. Diehl, R. Schmehl)

Do you have any plans of collaborating with the other groups working in the field?
Joep: In Dutch we have the word "concullega's", which means competitors, but also colleagues. I think we're all in the same boat. The wind energy market is huge. Even if we get a small percentage of that in the airborne wind industry, there's enough to share, so we can easily collaborate. However, it is natural that everybody wants his product to be first on the market. It also depends a bit on who you are talking to, but generally we are quite friendly with all the other people. I have known some of them for more than ten years, from the time we were still a research group. My old boss also has a company in this field. There is also a conference every year, the AWEC (Airborne Wind Energy Conference).
Pietro: It (AWEC) is organized every two years. The last one was in 2015, here at TU Delft. The next one is going to be in October of this year. In the world of airborne wind energy there are different concepts, so we are not all doing the same thing. The big distinction in the field is between the people who want to produce electricity on the ground and the ones who want to produce it on board a wing or kite. There is also the distinction between those who use flexible kites, as we do, and those who use rigid ones. Flexible kites are cheaper, easier to manufacture, and can be seen as a consumable of the system. Other companies may use rigid structures, which have higher aerodynamic performance, and there is more knowledge available on how to control them, because it is based on principles used by the aviation industry. So, though we're a lot of people aiming for airborne wind energy, we follow slightly different approaches and concepts. Thus, while there is competition, often there doesn't even have to be.

How long do you think it will be before it becomes commercially available?
Joep: We hope that this will happen by the end of this year. We have already secured a contract with the Ministry of Defense to test the system at their site. There is still quite some work to be done, but I think we will be able to sell systems by the end of the year. The technology is starting to get noticed and I really think it is a game-changer in wind energy.

Can you explain the basic working principle of the system?
Joep: We have a winch with a drum on the ground, which is connected to the line and the kite. The kite pulls on the line and flies in figures of eight as it goes farther away.
It has a small box underneath which pulls the line, just like a kite surfer would do, and with this we can steer the kite. Our kite is a multi-line kite, which can steer and gain its own speed. There are motors on the ground that can pull the lines and steer it. We use steering principles that are already used in kitesurfing or parachutes, so it is relatively simple. If the kite moves twice as fast, we get four times the force, which is used to pull the drum and rotate it. The drum is coupled with a generator and produces the electricity. When we reach the end of the line, we stop flying patterns and decrease the angle of attack, so that we have less force on the kite as we glide, and pull it back in. The process is then repeated.
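Joep's "twice as fast, four times the force" follows from the quadratic dependence of aerodynamic force on airspeed. A back-of-the-envelope sketch, where the force coefficient and speeds are illustrative rather than Kitepower's actual figures:

```python
def aerodynamic_force(airspeed, rho=1.225, area=25.0, c_force=1.0):
    """Resultant aerodynamic force F = 0.5 * rho * A * C * v^2 in newtons,
    using the 25 m^2 kite area quoted in the interview."""
    return 0.5 * rho * area * c_force * airspeed ** 2

f_slow = aerodynamic_force(10.0)   # kite flying at 10 m/s
f_fast = aerodynamic_force(20.0)   # same kite at twice the speed
ratio = f_fast / f_slow            # quadratic scaling: ratio is 4.0
```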


How efficient is the system?
Pietro: Steering takes a very small part of the energy. Theoretically, and from simulations, we can reach a cycle efficiency of up to 90%. Depending on the wind conditions and several other factors, the energy used for pulling the kite back in can be as little as 10% of the energy produced. While flying crosswind maneuvers, the kite can produce a lot of force on the line; this high force at a low reeling speed results in high power that is sustained for a long time. As we pull the kite back very fast and there is little aerodynamic force acting on the tether during gliding, the energy required to reel it back in is relatively low.
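The pumping-cycle bookkeeping Pietro describes is simple: whatever is spent reeling in is subtracted from what the traction phase generated. The numbers below are illustrative, not measured Kitepower data:

```python
def cycle_efficiency(reel_out_energy, reel_in_energy):
    """Net fraction of the generated energy that is kept after paying
    the reel-in cost of one pumping ('yo-yo') cycle."""
    return (reel_out_energy - reel_in_energy) / reel_out_energy

# If reeling the kite back in costs 10% of what the traction phase
# produced, the cycle keeps 90% of the generated energy.
eta = cycle_efficiency(reel_out_energy=100.0, reel_in_energy=10.0)
```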

Isn’t there a risk of extreme bending and consequently damaging the kite? Joep: Well, the material is standard kite material, so it is very flexible. The kite and the line do need replacement every once in a while, but it is the cheapest part in the system. It also gives us the opportunity to improve the design. For example, if you now buy a wind turbine and use it for fifteen or twenty years, at the end you would have a design that is twenty years old and you won’t not able to improve it. However, as the kite needs to be replaced once in a while, we can continuously upgrade it. We are aiming to replace it around once every year.


KITE POWER

Will your system be cheaper as well?
Joep: In the end, yes, it will be cheaper, but right now it is an emerging technology. We are at the stage where wind turbines were maybe twenty years ago. So we still have to learn and improve, but it has very big potential and we can already compete with diesel generators, which are mostly used in off-grid sectors. This is exactly the market we're currently aiming for.

Have you faced any design difficulties, and what have been the main problems you have encountered?
Joep: Yes, a lot! I think, for the technology in general, we have to be able to replace mass with intelligence. This intelligence means that the kite must be able to react to every wind condition that is out there. We can do this right now, but it still involves some manual work, and it has to be completely automated so that people do not need to be there to control the system. Developing the software that will make the system fully autonomous is one of the key things we are currently working on.

What is the expected output of your system?
Joep and Pietro: The current system, which was also developed at the university, has a nominal generator size of 20kW, but our new system will have roughly 100kW nominal capacity. The kite we use now has an area of 25m² and a span of around 7m. It can produce up to 600kg of force, and there have been times when it has even exceeded this and broken the link in the system. We are now also upscaling the design of our kites to make them suitable for a larger system. The positive thing is that we can adjust the size of the kite depending on the location where it will be deployed. So, if we go to a location with a lot of wind, we would probably take a smaller kite, and where there is not a lot of wind we would take a bigger kite. We can also think of having more kites for the same system: during the time of the year when we have a lot of wind we can use a smaller kite, and then replace it with a larger kite during times when there is a bit less wind.

How high does your kite go? What are the advantages of your system?
Joep: It goes several hundred meters high. For the final product we have 600 meters of line on the winch, but we fly at an angle of around thirty degrees. Using a relatively long line is also a big advantage in comparison with standard wind turbines, as we can go much higher. Thus, we can fly at much lower wind speeds as well, and can therefore produce much more energy in a year. Wind turbine performance is normally expressed with a capacity factor, based on the fraction of the year for which they effectively produce their full rated power. For wind turbines on land, the capacity factor is 30-40%. For off-shore wind turbines it is a bit higher, but we believe that we can do better than that by accessing wind resources at higher altitudes, as these are stronger and steadier. Another advantage is that we use a lot less material. We don't need the big foundation, we don't need the mass, and we don't need the big blades required by conventional wind turbines. Our kite is very lightweight. We do need the generator, of course, and that is about the same weight and size as used by conventional turbines, but the overall weight of the system is reduced.
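The capacity factor Joep mentions relates annual energy yield to what a machine would deliver running at rated power around the clock. A hypothetical 20kW system is used here purely for illustration:

```python
def capacity_factor(annual_energy_kwh, rated_power_kw, hours_per_year=8760):
    """Annual energy delivered as a fraction of rated power run all year."""
    return annual_energy_kwh / (rated_power_kw * hours_per_year)

# A 20kW system delivering 61,320 kWh over a year runs at a 35%
# capacity factor, the upper end of the onshore range quoted above.
cf = capacity_factor(annual_energy_kwh=61320.0, rated_power_kw=20.0)
```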

CONCLUSION
Will this new technology be able to compete with and even replace the existing wind energy sources? With such a flexible design, possible variations in configuration, and constant upgrading of the system, it seems that even further improvements and higher goals can be achieved. So we can easily say that the team is truly "Challenging the Future"!

References
[1] kitepower.net



HUISMAN INNOVATION
New drillship design: safer, faster and greener
HUISMAN

Huisman

Less than six years after the first ultra-deepwater drillship featuring Huisman's distinctive multi-purpose tower hit the waves, the company has unveiled plans for the next-generation vessel and has built a full-scale test tower outside its corporate headquarters to prove the efficiency claims made.

In 2011, the first drillships equipped with Huisman's Dual Multipurpose Tower (DMPT), Noble's Bully I and Bully II, went into service for Shell. The newbuilds were soon followed by Globetrotter I and Globetrotter II, also Noble-operated vessels under long-term contracts with Shell. These vessels incorporated the distinctive enclosed DMPT into a Huisman drillship design, the Huisdrill 10000, which packed 3000 square metres of mostly open deck space into a more compact overall footprint.

Huisman has worked closely with Shell and Noble since the launch of the Bully and Globetrotter drillships, analysing operations and talking to drilling engineers about ways to fine-tune processes and develop new tools. The first drillships to include the DMPT have performed well: Bully I and Bully II each earned Shell's Global Floating Rig of the Year award, in 2013 and 2014 respectively.

Drawing on lessons gleaned from the drillship operations, Huisman has developed a design for the next-generation unit, named Huisdrill 12000. The new vessel measures 216 by 38 metres and increases the open deck space to 4500 square metres. The Huisdrill 12000 features a flush drill floor and work deck designed to reduce the vertical movement of people and materials, improve visibility, and provide easier access to the well centre and construction side of the DMPT. The bulk of the machinery is mounted below deck or inside the enclosed drilling tower, which helps keep down maintenance costs and time lost to inclement weather.

The DMPT has no V-door limitation for pipe handling and features dual drawworks technology to eliminate time-consuming slip-and-cut procedures. Huisman fashioned a "splittable" block, designed to increase tripping speeds. The recently updated version of the DMPT was engineered to handle a hook load of up to 1630 tonnes and was extended in height to accommodate 180-foot-long stands. The new tower also includes robotic pipe handlers mounted on each of the four corners of the tower, enabling automation of tripping and pipe-handling functions and thereby increasing the speed of tripping and casing running. The DMPT accomplishes this while remaining shorter than the typical deepwater drilling tower because, when installed on the vessel, it omits the substructure and is mounted on the deck. That in turn lowers the centre of gravity, giving the vessel a smaller footprint, and a lower fuel demand, than the average drillship.

The robotic pipe handlers are perhaps the most significant result of the effort to increase efficiency and safety. The robots, or multi-functional manipulators, are mounted on rails built into each of the tower's four corners, up to four units per rail. The system enables drillers to run 180-foot stands of pipe rather than the standard 90 or 135-foot stands, thereby reducing tripping times, the number of connections during drilling, and the number of mud pump starts and stops. The manipulators are relatively small and, because they are of a single design, easy to maintain and change out. They are outfitted with quick-connect heads that permit different tools to be exchanged without human interaction. Their movements (rotate and extend) are simple by design, resulting in a tripping speed of 5000 feet per hour with 180-foot stands.

TEST TOWER
To make its case to potential clients, Huisman has built a full-scale test tower at its Schiedam facility. The quayside MPT is similar to its proposed vessel-mounted counterparts, including the capacity to handle 180-foot stands of pipe and 150-foot risers. The tower includes a strut that can simulate dynamic vessel motion during tests.

At a time of declining oil prices and continually rising operational costs, the oil and gas industry needs to find new ways to shape its own future. Innovation is needed more than ever before. Not only is innovation crucial because new finds of oil and gas are often in difficult locations such as the deep offshore; bringing down the ever-growing costs of exploration and production is also an important issue, together with improving and increasing production from existing fields. Huisman, certainly a pioneer in this respect, is constantly developing new technology, knowing that increased cost efficiency is the only way to make operations profitable. The new design of the Huisdrill 12000 and the new test tower are excellent examples of this.



ECOLOGICAL AUTOMATION
Air Traffic Control: Making the Invisible Visible
C&O

Clark Borst, Assistant Professor, Faculty of Aerospace Engineering, TU Delft

Automation often leads to a loss of situational awareness, mainly because it is designed as a 'black box' that poorly communicates its intentions and constraints. The Ecological Interface Design framework can be used to make automation more transparent to human operators.

CHALLENGES FOR FUTURE AIR TRAFFIC SYSTEMS
Predicted air traffic growth and the associated economic and environmental concerns are forcing a fundamental redesign of the air traffic management (ATM) system. In Europe and in the United States, similar efforts are being undertaken to modernize the current ATM system. This redesign will focus largely on new forms of automation, requiring humans to supervise more complex and more intelligent automated systems to ensure a high level of performance and safety. This, however, has also given rise to a growing concern within the Air Traffic Control (ATC) community: will controllers remain competent and skilled enough to safely assume control should the automation fail? Similar to how flight deck automation and autopilots have been reported to play a role in the skill erosion of commercial airline pilots [1], the fear is that smarter automation will dumb down air traffic controllers. Note that the ATC community is not the only one moving towards increased automation. This trend is visible in almost all transportation domains, most recently in the automotive industry with the undertaking of the "self-driving car". As such, these domains all (eventually) struggle with the same question: is it possible to exploit the advantages of automation whilst maintaining a competent and skilled workforce?

THE AUTOMATION PARADOX
Traditionally, automation is considered something that replaces human activities. In the process of pushing the human out of the control loop, engineers often pay little attention to properly informing the human about the rationale guiding the automation. Operators thus eventually lose their understanding of why, when, and how to intervene when the machine reaches its boundaries. We are now reaching the limit of what can be automated with today's technology. Though we can make an autopilot follow a predefined flight trajectory automatically, a human still outperforms a computer in adaptive decision making and creative problem solving. This type of behaviour is of paramount importance in handling unexpected events and in dealing with uncertainties. Such abilities are occasionally seen when a human "saves the day", as in the Apollo 13 moon mission or the Hudson River water landing. Thus, it seems that the more we automate, the more critical the role of the human becomes, not less. This is formally known as the 'automation paradox' and teaches us that the ultimate responsibility for the safety of operations still lies with humans. To fulfil such a critical role, people need a deeper understanding of the problem at hand. More automation also implies the need for more communication, not less [2]. But what type of information supports adaptability and creativity?

Eurocontrol's Maastricht Upper Area Control Center manages traffic above 24,500 feet over the Benelux, Northwest Germany, and a small part of Northern France. (Images: Eurocontrol, KLM)

ECOLOGICAL AUTOMATION: A NEW PERSPECTIVE
In processes governed by the laws of physics, creative solutions are limited. For example, an aircraft cannot sustain flight when it is flying slower than its stall speed, and the turn radius of an aircraft is constrained by the maximum allowable load factor. Besides these 'internal' aircraft constraints, the maneuverability of aircraft is also affected by 'external' static and dynamic environmental (i.e., ecological) constraints such as terrain, air traffic, and weather. Ecological Interface Design aims to make these work domain constraints salient on an interface in such a way that people can directly perceive the entire (physical) space of possibilities [3,4]. Here, the challenge is to find an appropriate mathematical representation that resonates with the way humans think and solve problems. Finding such a representation also impacts the model we eventually embed in our automation. That is, instead of using representations geared toward finding single, optimized solutions, the automation should present the boundaries for action and enable the human to decide on the course of action.
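The load-factor constraint on turning can be written down directly. For a level, coordinated turn, the bank angle is limited by n = 1/cos(phi), which bounds the turn radius from below; the numbers here are illustrative:

```python
import math

def min_turn_radius(v, n_max, g=9.81):
    """Minimum level-turn radius (m) at true airspeed v (m/s) when the
    bank angle is limited by the maximum load factor n_max:
    r_min = v^2 / (g * sqrt(n_max^2 - 1))."""
    return v ** 2 / (g * math.sqrt(n_max ** 2 - 1.0))

# At 100 m/s with a 2g limit, the aircraft cannot turn tighter than ~589 m.
r = min_turn_radius(v=100.0, n_max=2.0)
```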

4D TRAJECTORY MANAGEMENT
The idea underpinning ecological automation is best explained by means of an example. In the future airspace environment, aircraft are expected to be at a specific point at a specific time [5]. Such 4D trajectories will generally be planned weeks to months before the actual flight to optimize the flux of air traffic through a piece of airspace (i.e., a sector). During flight, however, unplanned disturbances may arise, such as local adverse weather. These events would require an air traffic controller to adjust the trajectories, whilst adhering as much as possible to the originally planned time and position at which the aircraft needs to leave the sector. Instead of tackling this problem with advanced path-planning algorithms that optimize a multi-dimensional cost function to select the best possible trajectory, we have demonstrated that we can also let the human perform this task by visualizing a 'solution space' using relatively simple conflict detection algorithms.

Figure 1 shows an example of such a solution space. Here, two aircraft in conflict are highlighted in red, and for the top red aircraft the solution space is visualized. The green area portrays the space of valid locations for a controller to insert intermediate waypoints. The boundary of this space depends on the maximum aircraft speed: flying a longer distance requires a higher speed to still arrive at the sector exit point at the originally planned time. Of course, an intermediate waypoint can be placed outside the solution space, but this will result in a delay at the sector exit point, as the aircraft cannot fly the additional track miles fast enough. The red areas mark invalid waypoint locations, because they will either not solve the current conflict or result in a new conflict with another aircraft. Thus, any intermediate waypoint inserted inside a green area is valid and will result in a conflict-free trajectory with all surrounding aircraft. From this figure, it can also be seen that putting an intermediate waypoint in the left side of the solution space is most favorable in terms of robustness, because it features the largest green area. Putting a waypoint in this area also hints at what the resulting traffic pattern will look like: the selected aircraft will pass the other aircraft in front. As such, in one glance a controller can spot all possible solutions that resolve the conflict and visually identify the most favorable solution area, whilst adhering to the originally planned exit time.

The solution space representation is also compatible with higher levels of automation, in which the computer can analyze the solution space and make a suggestion (i.e., an advisory) on where to insert an intermediate waypoint to solve the conflict. By showing the advisory inside the solution space, a controller can inspect its validity, assess the quality of the given advice, and either accept or reject it. In this way, the automation constraints become directly observable (i.e., transparent) through the interface, and it becomes relatively easy to manually re-direct solutions warranted by situational demands.

Figure 1 - Prototype of a next-generation radar screen, showing a spatio-temporal solution space to solve conflicts between aircraft. (Image: TU Delft)
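The speed-limited boundary of the solution space can be probed with a few lines: a candidate waypoint is time-feasible if the dog-leg via that waypoint can still be flown before the planned exit time. This sketch checks only the timing constraint, not conflicts with other aircraft, and all positions and limits are made-up illustrative values:

```python
import math

def waypoint_time_feasible(position, waypoint, exit_point, time_left, v_max):
    """True if flying position -> waypoint -> exit_point within time_left
    requires no more than the maximum speed v_max (consistent units)."""
    path = math.dist(position, waypoint) + math.dist(waypoint, exit_point)
    return path / time_left <= v_max

# Aircraft 100 km from its exit point with 10 minutes left and a
# 12 km/min speed ceiling: a direct routing fits, a large detour does not.
on_track = waypoint_time_feasible((0, 0), (50, 0), (100, 0), 10.0, 12.0)
detour = waypoint_time_feasible((0, 0), (50, 60), (100, 0), 10.0, 12.0)
```

Sweeping such a check over a grid of candidate waypoints, and intersecting it with a conflict-detection test, yields exactly the green/red partition shown in Figure 1.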

EMPIRICAL INSIGHTS: DOES ECOLOGICAL AUTOMATION WORK?

Several human-in-the-loop studies in simulated environments have indicated that the 'solution space' approach helps controllers gain insight into traffic situations (i.e., situation awareness), keeps them in the loop, and allows them to solve problems in their own way [4,5]. Especially this last point is interesting for ATC: air traffic controllers are among the most critical populations when it comes to accepting new technology. In the past, several technologies have not been embraced by the ATC community simply because controllers did not accept or appreciate them [6]. In many cases, their judgment was fair, because the technology would force them to work along a fixed set of strategies and procedures. Although this can reduce the complexity of their work, procedural compliance is often too restrictive in highly dynamic environments featuring uncertainties. Despite the benefits ecological automation has to offer in terms of human-machine interaction, there is also a cost associated with it. Since operators are free to choose any strategy they prefer, provided it does not violate work domain constraints, they can also choose suboptimal strategies. Current developments in modernizing the air traffic management system are, however, largely focused on "optimization", e.g., optimal landing sequences, optimal fuel usage, and optimized flight trajectories. When ecological technology is used, the focus will shift from optimal control to robust control, sacrificing optimality in any given situation. However, for complex work where system dynamics or the values associated with competing goals can change in unpredictable ways, robust solutions will generally be preferred over solutions that are optimal most of the time but can fail catastrophically in a small set of situations. As long as our machines are not smart enough, technology should leverage

people's abilities, and not replace them.

References
[1] Carr, N. (2014). The Glass Cage: Automation and Us. W. W. Norton & Company. ISBN 0393240762.
[2] Norman, D. A. (1990). The 'Problem' with Automation: Inappropriate Feedback and Interaction, Not 'Over-Automation'. In D. E. Broadbent, A. Baddeley & J. T. Reason (Eds.), Human Factors in Hazardous Situations (pp. 585-593). Oxford: Oxford University Press.
[3] Vicente, K. J., & Rasmussen, J. (1990). The Ecology of Human-Machine Systems II: Mediating Direct Perception in Complex Work Domains. Ecological Psychology, 2(3), pp. 207-249.
[4] Borst, C., Flach, J. M., & Ellerbroek, J. (2015). Beyond Ecological Interface Design: Lessons from Concerns and Misconceptions. IEEE Transactions on Human-Machine Systems, 45(2), pp. 164-175. doi:10.1109/THMS.2014.2364984.
[5] Klomp, R., Borst, C., van Paassen, R., & Mulder, M. (2015). Expertise Level, Control Strategies, and Robustness in Future Air Traffic Control Decision Aiding. IEEE Transactions on Human-Machine Systems, 46(2), pp. 255-266. doi:10.1109/THMS.2015.2417535.
[6] Westin, C., Borst, C., & Hilburn, B. (2016). Strategic Conformance: Overcoming Acceptance Issues of Decision Aiding Automation? IEEE Transactions on Human-Machine Systems, 46(1), pp. 41-52. doi:10.1109/THMS.2015.2482480.


ZAC MANCHESTER INTERVIEW
The future of personal spacecraft

Nora Sulaikha, Editor Leonardo Times

KickSat is a pioneering project in the world of personal spacecraft. Zac Manchester, its founder, sat down with the Leonardo Times for a wide-ranging conversation about the inspiration behind and the workings of KickSat, as well as his work with Yuri Milner and Stephen Hawking on the Breakthrough Starshot project.

What was it that inspired you to enter the aerospace field?
I've been super interested, borderline obsessed you could say, in this stuff since I was a little kid. I think that's true for a lot of people that end up in aerospace. It's like a childhood dream. So you could say that since I was old enough to talk, I was interested in airplanes and things like that.

With your work on KickSat in mind, would you consider yourself more of an entrepreneur or a scientist?
I don't consider myself an entrepreneur. I'm a post-doc right now, working in a research lab at a university. The KickSat project has not really in any way become a company, or even something that I've made any money off of; it's more of a money-losing prospect. The Kickstarter part of KickSat was really kind of out of desperation. There was an opportunity for a free launch that I had through this great NASA program called ELaNa, which stands for 'Educational Launch of Nanosatellites', through which NASA awards free

CubeSat launches to universities. But the program doesn't come with any money to actually build the spacecraft. So then it was about how to get enough money to actually build the hardware, and that's where the Kickstarter aspect came in.

What made you want to start the KickSat initiative?
Well, it's really expensive and technically quite hard to launch a satellite, and CubeSats have made it possible for university groups to do this. But it still costs about a quarter of a million dollars and a lot of time and technical expertise to pull these things together and get them launched, which is still quite far out of the realm of possibility for, say, high schools or hobbyists generally. I think that's unfortunate, and there's really no inherent reason that this should be the case. On the cost end of it, the CubeSat itself is expensive, but if you just chop it up into smaller pieces, then presumably that scales down the cost per piece. So, for something the size of the tiny ChipSats that I worked on, if you

just divide up the launch cost for a CubeSat among them, it comes down to being on the order of a couple hundred dollars to launch. The idea then is: can we put together a couple-hundred-dollar space mission? This would cost less than an iPhone, which puts it in the hands of a whole new group of people. So now you could think of even a high school science fair project where they fly a satellite. That idea of democratizing space is something I'm very interested in, and I think these really small-scale spacecraft enable that.

What kind of people fall into the category of funders?
I think there's a wide range of people from all over the world. Mostly it was just people who thought it was cool and wanted the chance to say that they had their own satellite. So I think that for now it's more of a novelty. There's no serious application for it at this point. But I think it's in a way the gateway drug to space. Most of the people who buy these things and play around with them are not doing serious science; it's more of a hobbyist's thing. Yet, you learn skills that way and get really interested in it, which sometimes carries over into a career.

BEN BISHOP

Artist's representation of the deployment of the Sprite ChipSats.

I think what KickSat is really about is a way of giving normal people a touch of spaceflight. The sort of person who can afford a couple hundred dollars to do something like this is a whole different category from either the person who's going to go to a university like TU Delft and join the CubeSat team there, or who works at Airbus or Lockheed. It's a much broader category, and I think that's important. I also think that it's important to include people who are not specialists. Such people are simply passionate but don't want to make a career out of it, and are just interested enough to go down that route. I believe they will often have new ideas that those who are doing this with blinders on won't have. Around 300 people donated on Kickstarter, and for a lot of those people what that included was getting something they wanted transmitted from one of these satellites. Some people even wrote their own code to run on them and designed their own experiments. My favorite one of those was from a group in the UK, the British Interplanetary Society. They funded one of these ChipSats, and they came up with the idea of writing pseudorandom data to the RAM on the microcontroller and reading it back to look for bit-flips. Bit-flips can happen in RAM due to radiation, and in space there's more radiation, which can cause problems for computers and cause bit-flips and similar things. So, they turned that around and thought, 'Well, if we can detect these things and measure them, maybe we can turn the RAM on the microcontroller into a radiation sensor'. So, it's the idea of using the little onboard computer as a radiation detector, which is super clever.
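The RAM-as-radiation-sensor trick is simple to sketch: fill memory with a reproducible pseudorandom pattern, then periodically regenerate the same pattern from the same seed and count the bits that differ. On the actual Sprite this would be C on a microcontroller; the Python below is only an illustrative model of the idea, with names and sizes of my own choosing.

```python
import random

def fill_pattern(n_bytes, seed=42):
    """Fill a buffer with a reproducible pseudorandom pattern (the 'known' RAM state)."""
    rng = random.Random(seed)
    return bytearray(rng.randrange(256) for _ in range(n_bytes))

def count_bit_flips(buffer, seed=42):
    """Regenerate the expected pattern from the same seed and count differing bits.
    On orbit, a nonzero count would indicate radiation-induced single-event upsets."""
    expected = fill_pattern(len(buffer), seed)
    flips = 0
    for got, exp in zip(buffer, expected):
        flips += bin(got ^ exp).count("1")  # popcount of the XOR difference
    return flips
```

Downlinking only the flip count per sweep keeps the telemetry tiny, which matters on a spacecraft with a milliwatt-class radio.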


Did it work out?
The whole thing didn't really work out. We were successful with building everything and getting it launched. Unfortunately, with the first KickSat mission, we had several constraints placed on us due to the launch timing and other regulations. Our original plan was to deploy all the little ChipSats a couple of days after the launch, but we had some delays, as there were other spacecraft coming from the ISS around the same time. So, we had to wait sixteen days before we were allowed to do the deployment. We were already in a really low orbit, and the spacecraft was going to re-enter in about twenty days. Due to problems with the mothership satellite after fourteen days, we weren't able to make an uplink to the spacecraft to command it to deploy. A couple of days later, the whole thing re-entered. So, because of the waiting, the timing just didn't work out, and we had some significant setbacks. We're doing a KickSat 2 mission that should fly this summer (I hope), though that has run into other bureaucratic issues as well. It turns out that when you try to do new things in space that don't fit into the traditional mold of how people fly satellites, it can be quite difficult to get through all the government regulations. So, we've had some problems there, but I think they're mostly resolved at this point. We're getting close. I'm hopeful that we'll be able to fly.

What do you hope the end goal of KickSat will be?
I think there's several things there. In a big-picture sense, I'd love for KickSat to become something that's self-sustaining, in which

I'm not the only guy that's doing this; to become something like a space Arduino, something where people can hack these things and make their own versions of them. That can ultimately become a standard for these tiny satellites that people can then fly readily. Right now, I'm the only one who's flying these things. There's been a ton of interest from people who want to build these little satellites themselves, but the problem is that someone has to launch them. Someone has to do the integration work to package them together and get them launched, and right now that's just me, which I don't think is very sustainable. But I am hopeful that if there's enough interest, it will become self-sustaining eventually.

How did your involvement with the Breakthrough Starshot project come about?
It came about because of KickSat. The big-picture idea is that it'd be amazing if we could go to another solar system, right? The nearest star system is Alpha Centauri, and it's four light-years away. We can't go anywhere close to the speed of light, but the idea is to push that boundary as far as we can with known physics. With rockets, it gets really difficult to go much faster than we're already going now, meaning you can't get to the next star system with them. So, you need some kind of external impulse, because you can't carry that much fuel onboard. The main concept being pushed to achieve that is a light sail. But instead of a light sail pushed by the sun, which doesn't go very fast, what if you made your own photons to push it? And so, the idea is to build a really big laser on the ground back on Earth to push a light sail.


The other thing that's really hard is overcoming perturbations and turbulence in the atmosphere. If you're trying to shoot a laser beam through the atmosphere, it's going to be perturbed by atmospheric turbulence, and if you need to point a beam and have it hit a little spot, it needs to be extremely precise, so you must compensate for the atmospheric turbulence. There are ways of doing that in the astronomy world, which they use on telescopes, called adaptive optics. They put actuators all over the telescope mirror and run a 1kHz control loop where they nudge the mirror all over the place to cancel out the effects of the atmospheric perturbations. They've done that on telescopes in the last ten years, which is really hard, but if we're to do the Starshot project, we'd have to do it like that as well.

BIO

ZAC MANCHESTER

A current post-doctoral fellow at Harvard University's Agile Robotics Lab, Zac Manchester has a lot on his plate. He started KickSat during his PhD studies at Cornell in October 2011. With the opportunity for a free launch through NASA's ELaNa program, but no funding for the actual spacecraft, the first-ever crowd-funded satellite was born. Roughly the size of a cracker or a large postage stamp, the ChipSats

named Sprite offered laymen the chance to affordably own and operate a satellite. Now, two years after its launch, the implications of KickSat and how its tiny size could help further space exploration have led Manchester to become part of the Breakthrough Starshot project. This pioneering project aims to send a probe to the nearest star system, Alpha Centauri, at 20% of the speed of light.

Now that you have a fixed-size laser on the ground that can exert a fixed force, and you want to go really fast, you have to make the spacecraft as light and small as possible. A group of people got together and were looking at this and thought: how fast can we make this go? What's the biggest laser we can possibly build? And someone decided that it was maybe a kilometer-long phased array. So if you want to go ten or twenty percent of the speed of light with this kind of laser arrangement, it turns out that you need a satellite somewhere on the order of a few grams. One of the people in charge of the project, Pete Worden, who was the Center Director of NASA Ames for a number of years, knew of me because I worked there for several years, and that's how I was brought into the Breakthrough project. They were looking for satellite ideas on the gram scale, and the only thing that's close and has been flown into space before were the Sprite ChipSats. The engineering challenges are tremendous, but it's not science fiction. It's not crazy theoretical physics. You can run through all the numbers and see that it's difficult, but the physics is there and can be implemented. I think that's the difference between Starshot and things like warp drives, which are more science fiction for now. Here, we know how to do it in principle; we're just going to have to spend a lot of effort figuring out the engineering and how to actually build it.
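"Running through the numbers" is straightforward for the propulsion side. A perfectly reflective sail feels a photon-pressure force F = 2P/c, so, non-relativistically and ignoring beam spillover and sail losses, a gram-scale craft under a sustained beam accelerates at a = 2P/(c·m). The 100GW laser power and 1g mass below are illustrative assumptions of mine, not Starshot specifications.

```python
C = 299_792_458.0  # speed of light, m/s

def sail_acceleration(power_w, mass_kg, reflectivity=1.0):
    """Photon-pressure acceleration of a sail: F = (1 + R) * P / c, a = F / m.
    R = 1 models a perfect mirror (momentum transferred twice per photon)."""
    return (1.0 + reflectivity) * power_w / C / mass_kg

def time_to_fraction_c(power_w, mass_kg, frac=0.2):
    """Non-relativistic estimate of the illumination time needed to reach frac * c."""
    return frac * C / sail_acceleration(power_w, mass_kg)
```

With 100GW on a 1g craft, the acceleration comes out near 7 × 10^5 m/s² (tens of thousands of g), and 0.2c is reached after only minutes of illumination, which is why the laser only has to track the sail while it is still relatively close to Earth.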

What are the main problems that have been highlighted with the project so far?
Well, we need to build a really big laser. We've built big lasers before, and the power level that you need is also not that terrible; there are lasers now that have more power. The tricky part is that you need to focus the beam onto a meter-scale spot to hit the satellite way out in space. If you need a small spot, you need a big aperture, and to get a meter-sized spot out far enough to do this, you need a kilometer-scale laser array on the ground. Making some kind of phased array that's a kilometer across is really hard. Phased arrays have been made in the lab with a handful of lasers on an optics bench, with everything perfectly controlled. So although we've built phased arrays and really powerful lasers before, we haven't really built anything of this scale.

How would you navigate the spacecraft along a desired course?
That's another outstanding problem that hasn't exactly been solved. Right now, you can work out the numbers to, say, send this to the exoplanets around Alpha Centauri. So, if you wanted to hit one of those planets, or if you wanted to get within 1AU of one of these planets, the accuracy of the pointing must be incredibly good. Right now, I think the idea is that it's probably not going to be possible to do course correction on one of the spacecraft, because if you're building something that weighs 10g and it's going that fast, how are you going to push it any appreciable amount? I don't think we're going to be able to do any course correction on the way. You just have to point it really well in the beginning.

What about the effects of space debris?
Yeah, then it's toast. There's actually a guy at Princeton named Bruce Draine, who is a world expert on interstellar dust, and he has run the numbers on the likelihood of hitting various-sized particles on the way to Alpha Centauri, and you'd definitely hit some stuff. But it's the critical size of the particle that matters. You're going to hit lots of hydrogen atoms, for instance, and that's not that big a deal, but you could hit a dust grain that's big enough, and it could just blow right through the spacecraft. It depends a lot on

Artist's representation of a laser array.



the probability of hitting something like that and the distribution of the particle sizes out there, and we don't know a lot about that. The real answer is that we don't really know what's out there. We have some idea of the particle sizes and distributions, but we're not going to know for sure till we get there. The saving grace of sorts is that, while it's going to be really expensive to build all the infrastructure for this project, once you build the laser you can send out a lot of the little satellites. The little satellites will be relatively cheap, so you can imagine sending lots of them. Even if only 1% make it, we can afford to send a thousand.

How do you get data from them once they've been launched?
That is also very hard. One of the ideas being pushed right now involves the initial concept of using laser communication. This is actually done with some satellites; it has been demonstrated already. So instead of using radio, you use a laser beam to send the data back. It's better in the sense that, for radio, the wavelength is on the centimeter scale, and for lasers the wavelength is on the order of microns, so the beam spreads out a lot less, making it much more focused. What that means is that you're not losing energy: if you're trying to shoot this beam with data on it back to the Earth and you do it with a laser, it's a much more tightly focused beam and you can use less energy. The other side of that, though, is that you have to point it much better. If you compare a beam with a radio wave versus a laser, the pointing requirements are tougher by six orders of magnitude. So, if you use lasercom, you can run the numbers and see that it should be possible to close a link with a laser from four light-years with, as it turns out, not a crazy amount of power, something like 1-10 watts, and you can actually do it with a really big ground-based telescope.
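The beam-spreading argument can be made concrete with the diffraction limit: a transmitter of aperture D at wavelength λ produces a far-field spot whose full diameter grows roughly as 2 × 1.22 λ/D per unit distance, so shrinking the wavelength from centimeters to a micron shrinks the spot (and the wasted energy) by the same factor. A rough sketch; the aperture and distance numbers below are illustrative assumptions of mine.

```python
def spot_diameter_m(wavelength_m, aperture_m, distance_m):
    """Approximate diffraction-limited spot diameter at a given distance,
    using a full angular spread of 2 * 1.22 * lambda / D (Airy-disk scaling)."""
    return 2.0 * 1.22 * wavelength_m / aperture_m * distance_m

# Same kilometer-scale aperture, 1 um laser light vs. 3 cm radio:
laser_spot = spot_diameter_m(1e-6, 1000.0, 4.0e8)  # ~1 m at roughly lunar distance
radio_spot = spot_diameter_m(3e-2, 1000.0, 4.0e8)  # 30,000x wider at the same range
```

The same scaling is why the kilometer-scale phased array can put a meter-scale spot on the sail during the boost phase: at 1µm wavelength, a 1km aperture reaches a spot of about one meter at several hundred thousand kilometers.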
But the little satellite is going to have to point with microradian, or milliarcsecond, pointing accuracy, which we can't even do with a CubeSat right now. We can do it with big space telescopes; Hubble points much better than that, for example. But doing that kind of pointing with something that weighs a few grams, when we can't even do it with a CubeSat, is really hard. There might be other approaches. But I think that for a lot of these things, you can write down some equations, make some assumptions, and see that this stuff could work. It's just that the details are really tricky.

Are there a lot of people working on it right now?
Right now, there's an advisory council of sorts with roughly twenty to thirty people. The project was announced about a year ago, and in that one year what has happened is that they've put together this advisory board, and we've gotten together several times and are trying to figure out what the right directions are for the research. The guy who started this, Yuri Milner, has pledged 100 million dollars of his own money for the early-stage work, which is not nearly enough to actually complete this, but he's pledged this money upfront to fund the first stage of

research, and we've put this board together to figure out which are the most promising technologies to fund in a bunch of areas. So probably there's going to be some funding for phased-array lasers, and maybe some for materials for the light sail, as it needs to be super light and super reflective. Sometime soon, probably in the next couple of months, there's going to be a solicitation for proposals from companies or research labs to look into these targeted areas.

Do you have any other projects underway? What are your plans for the future?
I do like to keep busy. If there was the right opportunity to turn one of my projects into a start-up, I'd certainly be open to that. There were people talking about doing that with KickSat, but I never really, personally, saw a viable business model there that I thought was worth pursuing, and I had a lot of other things I was interested in doing as well, so it just didn't happen. I think, though, if there was a really compelling business case in one of these things, I would probably try it out, but so far that's not really the case as far as I can tell. But career-wise, I'm hoping to be a professor.

In 1957, Sputnik successfully launched into orbit and pushed the boundaries of human perception to 100km above ground level. Nowadays, with talks of space tourism underway, the novelty of space has worn off, and it's the work of people like Zac Manchester that is again redefining the boundaries of human perception. With initiatives like KickSat and project Breakthrough Starshot, humankind once again gets to venture out of its comfort zone.

References

Closeup of the engraved KickSat.


[1] kickstarter.com
[2] "Cracker-sized satellites launched into space", Anne Ju, news.cornell.edu, April 24, 2014


FLYING ALL-ELECTRIC
The progress and future of all-electric flight

Nithin Rao, Editor Leonardo Times

Air pollution is increasing with population, a result of the emissions of harmful gases associated with burning fossil fuels to run power plants, factories, and automobiles. Through technological advancements, each of these domains has slowly started to shift towards sustainable energy. This article presents the progress and scope of electrifying air travel.

Wind and solar energy are being harnessed to generate electricity around the world. For example, on May 15, 2016, Germany produced nearly 100% of its domestic electricity from renewable sources [1], demonstrating renewable energy's potential. The automobile industry is also witnessing an era of sustainable energy thanks to Tesla Motors, which made one of the first attractive and advanced cars running solely on electricity, and other automobile companies are catching up. This rapid change of technology, however, is not seen in the aerospace sector, even though the aviation sector's contribution to emissions is quite significant: according to the International Council on Clean Transportation, if the aviation industry were a country, it would rank 7th in terms of CO2 production. The gas turbine engine used today was developed 75 years ago and is powered by kerosene. So why

did the aviation sector not witness this trend of adopting a sustainable power source, and how far are we from developing such technologies?

The idea of electrifying flight was researched and implemented long before the first airplane was invented. In 1890, Gaston Tissandier was the first to fly an electrically powered airship [2]. This was possible due to the high lifting capacity of airships. The technology did not, however, extend to airplanes, due to the limiting power-to-weight ratios of the motors. The scenario improved somewhat after the invention of nickel-cadmium batteries, as storage-to-weight ratios increased considerably. In 1973, Fred Militky and Heino Brditschka made the Militky MB-E1, based on the Brditschka HB-3 motor glider. This was the

first manned electric aircraft (flying on power stored on board), and it flew for 14 minutes [3]. Similarly, growth in solar cell technology made it possible to develop the first manned solar airplane in 1979, the Mauro Solar Riser [4]. Nearly twenty years later, in 1997, the Alisport Silent Club became the first commercially available electric aircraft [5]. Two decades later, how far have we progressed in making electric flight an everyday reality?

Any propulsion system consists of three components: a power source, an energy-to-work converter, and a work-to-thrust converter. In a conventional piston-engine aircraft, the power source is the fuel, the energy-to-work converter is the four-stroke piston engine, and the work-to-thrust converter is the propeller. In an electric aircraft, the power source is provided by the batteries, the energy-to-work converter is the electric motor, and the work-to-thrust converter is the propeller. One of the main limitations of electric propulsion is the power storage capacity of the batteries. The battery capacity dictates the range of an aircraft, similar to the fuel capacity in a conventional aircraft. Two solutions to this problem are to either increase the storage-to-mass ratio of the batteries or to include onboard technologies capable of power generation. The first solution depends on technology advancements; hence, the second solution is widely implemented by incorporating onboard power generators such as solar cells or fuel cells.

One of the most successful experimental aircraft of the past five years is the Solar Impulse II, which completed a circumnavigation of the Earth in July 2016, making it the first manned solar-powered aircraft to do so [6]. This feat demonstrated not only the reliability of solar-powered propulsion, but also the range of such aircraft. However, with a wingspan longer than that of a Boeing 747 and a payload capacity less than that of a Cessna, the Solar Impulse II is just one of the first steps towards achieving high-endurance electric-powered flight.

Similar to solar panels, fuel cells have been implemented on aircraft to produce electricity. A fuel cell produces electricity by reacting hydrogen with oxygen. In 2008, Boeing modified a two-seater Diamond aircraft into the "Theater Airplane", which was powered by a fuel cell and had an endurance of 20 minutes. In September 2016, the HY4 became the world's first four-passenger aircraft to be powered by fuel cells. Aircraft manufacturer Pipistrel, a fuel cell specialist, the University of Ulm, the German Aerospace Center (DLR), and a few other companies developed the HY4, which was tested for 10 minutes and is estimated to have a range of 1,500 kilometers [7]. However, one drawback of using hydrogen as a primary fuel is that, though it produces fewer emissions than conventional fuels, it is more expensive due to the complicated production techniques.

Though the technology of electric propulsion is still in its infancy, a small selection of companies has already started producing and selling electric aircraft on a commercial scale. One of the most prominent electric aircraft is the Flightstar e-Spyder, developed by Flightstar in 2013. The e-Spyder is priced at nearly $39,000, with an endurance of 60 to 90 minutes and a cruise speed of 61km/h, and is fairly popular, with nearly fifty units sold. Other popular aircraft, such as the Pipistrel Alpha Electro, a trainer aircraft with an endurance of 60 minutes (+30 min reserve) [10], and the Sun Flyer by Aero Electric Aircraft Corp, are still awaiting certification and promise performance equivalent to a Cessna.

EADS

The Airbus E-Fan

Some of the most successful electric-powered aircraft tested with onboard power in the last few years are the Airbus E-Fan, the EADS Cri-Cri, Hugues Duval's MC15E Cri-Cri, the Luxembourg Special Aerotechnics MC30E, and a battery-powered Robinson R44. The most advanced of these is the Airbus E-Fan, an electric two-seater developed by the Airbus Group with an endurance of around 60 minutes. In July 2015, it became the first "all-electric twin-engine aircraft" to cross the English Channel. It has a wingspan of around 9.5m, a cruising speed of 160km/h, and a maximum take-off weight of 550kg. For comparison, a Cessna 172 is a four-seater with a wingspan of 11m, a cruise speed of 226km/h, an endurance of five hours, and a maximum take-off weight of 1,111kg. The Airbus E-Fan concept is expected to extend to a 90-seat regional transport jet in the future. These aircraft, however, are prototypes and are still in their experimental stages [8].

It is evident from the current progress that the scope of electric flight is presently limited to light-sport aircraft. The main reasons for this limitation are the energy-to-mass (and energy-to-volume) ratio of the fuel and the power-to-mass ratio of the energy-to-work converters. As seen from Figures 1 and 2, a gas turbine engine not only has the highest power-to-mass ratio, but aviation fuel, e.g. kerosene, also has a higher energy density. An electric motor has nearly the same power-to-mass ratio as a gas turbine engine; however, the energy density of batteries such as Li-ion is nearly 18 times lower: nearly 18 liters of Li-ion cells are required to produce as much energy as a liter of kerosene.

With these limitations, what can we expect in the next few years? In June 2016, NASA announced its X-57 project, through which it plans to spend the next decade on electric aircraft research.

Figure 1 - Power-to-weight ratios of different energy-to-work converters: gas turbine (5 kW/kg), electric motor (3 kW/kg), four-stroke engine (600 W/kg), and fuel cell (100 W/kg).
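The "18 liters of Li-ion per liter of kerosene" figure can be sanity-checked with rough densities. The kerosene value and the cell density below are ballpark assumptions of mine; only the 200Wh/kg specific energy comes from the article.

```python
KEROSENE_MJ_PER_L = 35.0   # assumed volumetric energy density of kerosene
LI_ION_WH_PER_KG = 200.0   # specific energy of current Li-ion cells (per the article)
LI_ION_KG_PER_L = 2.5      # assumed cell density

# Convert the battery's Wh/L to MJ/L (1 Wh = 3600 J):
li_ion_mj_per_l = LI_ION_WH_PER_KG * LI_ION_KG_PER_L * 3600.0 / 1e6
volume_ratio = KEROSENE_MJ_PER_L / li_ion_mj_per_l
```

The ratio lands around 19, consistent with the "nearly 18 times" quoted above; the exact value depends on which cell chemistry and kerosene grade you assume.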


VoltAir is an all-electric commercial aircraft concept introduced by EADS (now Airbus Group) in 2011. The concept is presumed to be flying within 25 years, assuming battery densities reach 1,000Wh/kg; for reference, Li-ion batteries currently reach about 200Wh/kg. The plane was designed assuming developments in power electronics, superconductors, and batteries (lithium-air batteries). VoltAir was designed to support the European Commission's aviation roadmap for 2050, reaching its emission and noise-level targets. In addition to focusing on propulsion technology, VoltAir was designed from scratch in order to integrate the electric propulsion system, contrary to other designs, which are modifications of an existing aircraft's propulsion system [12].
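The gap between today's cells and VoltAir's assumed 1,000Wh/kg is easiest to see as pack mass. A quick sketch; the 20MWh mission energy budget is a purely hypothetical number for illustration, not a figure from the VoltAir study.

```python
def battery_mass_kg(energy_kwh, specific_energy_wh_per_kg):
    """Mass of a battery pack storing the given energy, ignoring packaging overhead."""
    return energy_kwh * 1000.0 / specific_energy_wh_per_kg

# Hypothetical 20 MWh (20,000 kWh) regional-flight energy budget:
mass_today = battery_mass_kg(20_000.0, 200.0)     # current Li-ion
mass_voltair = battery_mass_kg(20_000.0, 1000.0)  # VoltAir's assumed density
```

At 200Wh/kg the pack alone would weigh 100 tonnes, more than many regional airliners' maximum take-off weight; at the assumed 1,000Wh/kg it drops to 20 tonnes, which is why the concept hinges on that fivefold improvement.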

Figure 2 - Energy densities of various fuels: volume energy density [MJ/lit] versus mass energy density [MJ/kg], ranging from Li-ion and zinc-air batteries, methanol, liquid ammonia, and ethanol through LPG, butanol, diesel, gasoline, kerosene, LNG, and natural gas to (liquid) hydrogen.

NASA

The X-57, also called the "Maxwell", is a modification of the Italian-designed Tecnam P2006T twin-engine light aircraft. Its design consists of 14 electric motors: twelve on the leading edge and two on the wingtips. The aim of the design is to prove that electric aircraft are quieter, more efficient, and more environmentally friendly than conventional aircraft. If successful, the X-57 is expected to cut operational costs by 40% relative to similar aircraft. It is a four-seat aircraft and is expected to have a range of 165km [11].

It is reported that some of the top innovators, like Tesla's Elon Musk and Google's Larry Page, are also investing in research on battery technologies, so as to make electric cars more agile and electric planes a reality [13]. Even with extensive research, further developing the current Li-ion battery technology such that its specific energy equals that of kerosene would take nearly thirty years. This limits the use of Li-ion batteries for all-electric propulsion in civil transport aircraft. However, small-scale aviation such as UAVs and light-sport aircraft can completely shift to all-electric propulsion systems in the next decade. This is one of the main reasons most research into alternative propulsion for regional airliners is focused on developing hybrid propulsion systems. A hybrid propulsion system combines a conventional internal combustion engine with a supplementary source of renewable energy, resulting in lower emissions and increased efficiency. For instance, the hybrid engine developed in the AHEAD project, a collaboration between TU Delft, WSK, TU Berlin, DLR, Technion, and AD Cuenta, consists of a gas turbine engine with a contra-rotating fan, a liquid hydrogen/LNG combustion chamber, and a natural gas combustion chamber. This propulsion system, when integrated with a BWB (Blended Wing Body) aircraft, is expected to reduce CO2 emissions by 65% [14]. Thus, replacing conventional propulsion systems with hybrid ones seems to be the only way to reduce the emissions of regional airliners within a short period of time. With such technological limitations, do we still invest time and resources in developing Li-ion battery technology, or do we start looking for a new source of energy to power regional airliners with zero emissions?

References
[1] www.renewableenergyworld.com
[2] www.loc.gov
[3] www.flightglobal.com
[4] www.all-aero.com
[5] Coates, Andrew (1980). Jane's World Sailplanes & Motor Gliders, new edition. London: Jane's. ISBN 0-7106-0017-8.
[6] www.solarimpulse.com
[7] www.hy4.org
[8] "E-Fan, the New Way to Fly". Airbus Group. Retrieved 25 September 2015.
[9] www.wired.com
[10] www.pipistrel.ad
[11] www.nasa.gov
[12] VoltAir - All-electric Transport Concept Platform - Airbus Group
[13] www.dailymail.co.uk
[14] www.ahead-euproject.eu

The NASA X-57. LEONARDO TIMES N°2 2017

39


BUBBLE VELOCIMETRY Helium Filled Soap Bubbles used as tracers in PIV research AERODYNAMICS

Rakesh Yuvaraj, MSc Student Aerospace Engineering, TU Delft

Particle Image Velocimetry (PIV) is an extensively used experimental method in aerodynamic research. PIV uses tracer particles, which move with the local fluid velocity [1]. With heavier-than-air tracers in PIV experiments of vortex flows, a deficit of particles occurs in the vortex core. The present work addresses this lack of seeding in the core by using Helium Filled Soap Bubbles (HFSB), which are neutrally buoyant in air, as tracers.
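The core of a PIV evaluation is finding the displacement of the particle pattern between two exposures via cross-correlation. The toy sketch below (a hypothetical example, not the software used in this work) recovers a known shift of a synthetic interrogation window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interrogation window: frame B is frame A shifted by a known amount,
# mimicking particles convected by a uniform flow between two laser pulses.
frame_a = rng.random((32, 32))
true_shift = (3, 5)                      # 3 px in y, 5 px in x
frame_b = np.roll(frame_a, true_shift, axis=(0, 1))

# Circular cross-correlation via FFT; the location of the peak is the shift.
corr = np.fft.ifft2(np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

print(f"recovered displacement: ({dy}, {dx}) px")  # -> (3, 5)
```

Dividing the pixel displacement by the laser pulse separation and the image magnification yields the local velocity vector; repeating this per interrogation window yields the full vector field.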

Vortex flows have long been recognized as an important part of fluid mechanics. As early as the 16th century, Leonardo da Vinci sketched and described various vortex motions. Theoretical studies of vortices have been devoted to the exploration of particular solutions of the general equations of motion. Experimental research on vortex flows is performed for a wide range of applications in which vortices form an important part of the flow field [Lewellen, 1971]. These phenomena are of high importance in aerodynamics: from the optimization of wing-tip devices, to the use of vortex generators to reattach a separated flow, to rotor blade optimization and the design of delta-wing aircraft, the presence and effects of vortices cannot be neglected. The image above displays a wing-tip vortex of an aircraft, one of the most commonly encountered vortices in aerodynamics. The present work focuses on addressing a specific problem with the vortex flow field in PIV experiments. PIV is an optical method for flow visualization used in education and research. PIV has a

wide range of applications in aerodynamic research because it enables researchers to understand unsteady flow phenomena such as separated flow over models at high angles of attack [Raffel, 1998]. A PIV system in a wind tunnel can record low-speed flows (e.g. turbulent boundary layers, where the velocity is of the order of 1m/s) as well as high-speed flows exceeding 500m/s (supersonic flows with shocks) [Kompenhans, 2000]. In PIV, the motion of the seeding particles allows one to determine the speed and direction of the flow being studied [Adrian, 2011]. The selection of the tracer particle depends on the nature of each PIV experiment. To obtain sufficient information from the flow field, it is necessary to have a homogeneous distribution of tracer particles in the entire flow field. Figure 1 displays a non-homogeneous


Figure 1 - Flow visualisation of a leading-edge vortex using smoke as tracer.

distribution of tracer particles in a PIV experiment of vortex flows in air, using oil-based tracer particles (smoke). This phenomenon was termed the "black hole" by Verhaagen et al. [Verhaagen, 1988]. This depletion of tracer particles in rotational flow depends on the density ratio of the particle and the fluid [6]. Using neutrally buoyant tracer particles in the experiment solves the problem. Implementing neutrally buoyant particles is difficult in experiments with air flows, since particle densities are typically three orders of magnitude greater than that of air. After years of innovation in the field of tracer particles, the possibility of producing neutrally buoyant particles has become a reality with Helium Filled Soap Bubbles (HFSB).
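How thin must the soap film be for neutral buoyancy? A thin-shell mass balance gives a quick estimate. The property values and the 300μm diameter below are rough assumptions typical of the HFSB literature, not measurements from this work:

```python
# Neutral buoyancy: the bubble's mean density must equal that of air, i.e.
#   rho_air * V = rho_he * V + rho_soap * (pi * D**2 * t)   (thin-film shell)
# with V = pi * D**3 / 6, which solves to the film thickness t below.
RHO_AIR = 1.20       # kg/m^3
RHO_HELIUM = 0.17    # kg/m^3
RHO_SOAP = 1000.0    # kg/m^3 (soap solution, roughly water)
DIAMETER = 300e-6    # m, typical HFSB size

film_thickness = DIAMETER * (RHO_AIR - RHO_HELIUM) / (6.0 * RHO_SOAP)
print(f"neutral-buoyancy film thickness: {film_thickness * 1e9:.0f} nm")
```

Only some tens of nanometers of soap film are allowed, which illustrates why the air, helium, and soap supply rates of an HFSB generator must be tuned so precisely.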

The use of Helium Filled Soap Bubbles dates back to the 1970s: for instance, the study of the flow field generated when opening a parachute [Klimas, 1973], the study of the Crow instability by visualizing the trailing vortices of a wing in a large wind tunnel [Eliason, 1975], and the flow visualization of a swirling flow in axisymmetric combustor geometries [Ferrell, 1985]. In recent times, HFSB has allowed the study of the flow around a cylinder in tomographic PIV [Scarano, 2015], quantitative flow visualization for on-site sport aerodynamic optimization [Sciacchitano, 2015], as well as large-scale volumetric pressure determination from tomographic PIV [Schneiders, 2016]. The current work aims to solve the problem of a deficit of particles in the vortex core in airflows. Figure 1 displays the result of a flow visualization of the leading-edge vortex of a delta wing using an existing PIV tracer particle (smoke). The center of the vortex has an axial velocity of two to three times the free-stream velocity and a high axial vorticity. This information is entirely lost when there is a lack of seeding in the core of the vortex. The proposed solution to this problem is the use of neutrally buoyant HFSB as tracer particles. To visualize the effect of the density ratio of the tracer particles, flow visualization is performed with heavier-than-air, lighter-than-air, and neutrally buoyant HFSB tracers in the vortex flow. Figure 2 displays the results of flow visualization using HFSBs of different densities. The

experiment with neutrally buoyant HFSBs, shown in Figure 2 (left), has a homogeneous distribution both in and out of the vortex core. This implies that these particles follow the fluid flow exactly. The experiment with heavier-than-air HFSBs, shown in Figure 2 (middle), shows a deficit of particles in the center of the vortex; hence, heavier-than-air particles perform similarly to smoke particles in vortex flows. The experiment with lighter-than-air HFSBs, shown in Figure 2 (right), has a homogeneous distribution of particles in the vortex in addition to a peak population at its center. This helps to visualize the phenomenon, and the peak population of particles on the vortex axis also enables the study of meandering in vortex flows. The flow visualization experiment showed that the use of neutrally buoyant or lighter-than-air HFSBs solves the problem of "black holes" in PIV experiments of vortex flows in air. The next step is to study the effectiveness of HFSB as an alternative tracer in PIV experiments of vortex flows. Stereoscopic PIV experiments enabled the comparison of HFSB and smoke particles as tracers in PIV research. Figure 3 displays the time-averaged velocity magnitude contour with the velocity vectors of the vortex flow using smoke (left) and HFSB (right) as tracers. As may be seen, the result is similar for both particles in all regions except the vortex core. However, the presence of a "black hole" with smoke particles as tracers results in

Figure 2 - Flow visualisation with neutrally buoyant (left), heavier-than-air (middle), and lighter-than-air (right) HFSB particles as tracers.



Figure 3 - Time-averaged velocity magnitude contour with velocity vector field using smoke (left) and HFSB (right). Free-stream velocity = 20 m/s (Re = 4x10^5) [12].

corrupted information in the core of the vortex. Consequently, velocity vectors are lost in the center of the vortex. On the other hand, the use of HFSB as tracer results in a particle-filled core and captures the entire physics of the vortex flow, especially in its core. This is an important result pertaining to the current work, since the "black hole" is no longer present in the PIV experiment of vortex flows with HFSB as tracer. In addition to solving the problem of "black holes" in vortex flows in air, the use of HFSB as tracer allows for tomographic PIV experiments, a three-dimensional measurement technique developed by Elsinga et al. at TU Delft [13]. This is mainly due to the large diameter (of the order of 300μm) and high flow-tracing fidelity (characteristic time of the order of 10μs) of HFSB. Figure 4 shows the results from tomographic PIV, which enables the visualization of the two leading-edge vortices (LEV) above the delta wing. The primary vortices start at the apex with zero diameter and grow in size as they move along the chord towards the trailing edge. Figure 4 also features the horizontal velocity of the vortex flow at the 25%, 50%, and 75% chord locations on the delta wing. The horizontal velocity is maximum at the top and bottom of the vortex, and zero at the center of the vortex.
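The "characteristic time" mentioned here is the Stokes response time, tau = d^2 * |rho_p - rho_f| / (18 * mu), which measures how quickly a tracer relaxes to the local flow velocity. The sketch below compares a micron-sized oil droplet with a near-neutrally-buoyant HFSB; the residual density mismatch assumed for the bubble is illustrative, not a measured value:

```python
# Stokes response time of a spherical tracer: tau = d^2 * drho / (18 * mu).
# The smaller tau is, the more faithfully the tracer follows the flow.
MU_AIR = 1.81e-5    # Pa*s, dynamic viscosity of air
RHO_AIR = 1.20      # kg/m^3

def response_time(diameter, density_mismatch, mu=MU_AIR):
    """Velocity-lag time scale [s] of a spherical tracer in Stokes drag."""
    return diameter**2 * density_mismatch / (18.0 * mu)

tau_oil = response_time(1e-6, 1000.0 - RHO_AIR)   # ~1 um oil droplet
tau_hfsb = response_time(300e-6, 0.1)             # 300 um HFSB, near-neutral

print(f"oil droplet: {tau_oil * 1e6:.1f} us,  HFSB: {tau_hfsb * 1e6:.1f} us")
```

Despite being some 300 times larger than a smoke droplet, a well-trimmed HFSB keeps its response time at the order of tens of microseconds while scattering far more light, which is what makes large-scale tomographic PIV feasible.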

References
[1] Raffel, M., Willert, C.E., & Kompenhans, J. Image evaluation methods for PIV. In Particle Image Velocimetry, pages 105-146. Springer, 1998.
[2] Lewellen, W.S. A review of confined vortex flows. 1971.
[3] Kompenhans, J., Raffel, M., Dieterle, L., Dewhirst, T., Vollmers, H., Ehrenfried, K., ... & Ronneberger, O. Particle image velocimetry in aerodynamics: Technology and applications in wind tunnels. Journal of Visualization, 2(3-4):229-244, 2000.
[4] Adrian, R.J., & Westerweel, J. Particle Image Velocimetry. Number 30. Cambridge University Press, 2011.
[5] Verhaagen, N.G., & Naarding, S.H.J. Experimental and numerical investigation of the vortex flow over a yawed delta wing. AIAA Paper 88-2563, 1988.
[6] Julien, P.Y. Concentration of very fine silts in a steady vortex. Journal of Hydraulic Research, 24(4):255-264, 1986.
[7] Klimas, P.C. Helium bubble survey of an opening parachute flow field. Journal of Aircraft, 10(9):567-569, 1973.
[8] Eliason, B.G., Gartshore, I.S., & Parkinson, G.V. Wind tunnel investigation of Crow instability. Journal of Aircraft, 12(12):985-988, 1975.
[9] Ferrell, G.B., Aoki, K., & Lilley, D.G. Flow visualization of lateral jet injection into swirling crossflow. AIAA Paper 85-0059, 1985.
[10] Scarano, F., Ghaemi, S., Caridi, G.C.A., Bosbach, J., Dierksheide, U., & Sciacchitano, A. On the use of helium-filled soap bubbles for large-scale tomographic PIV in wind tunnel experiments. Experiments in Fluids, 56(2):1-12, 2015.
[11] Sciacchitano, A., Caridi, G.C.A., & Scarano, F. A quantitative flow visualization technique for on-site sport aerodynamics optimization. Procedia Engineering, 112:412-417, 2015.
[11] Schneiders, J.F., Caridi, G.C., Sciacchitano, A., & Scarano, F. Large-scale volumetric pressure from tomographic PTV with HFSB tracers. Experiments in Fluids, 57(11):164, 2016.
[12] Yuvaraj, R. Vortex velocimetry of air flows using Helium Filled Soap Bubbles (HFSB). Master's thesis, Department of Aerospace Engineering, Delft University of Technology, 2016.
[13] Elsinga, G.E., Scarano, F., Wieneke, B., & van Oudheusden, B.W. Tomographic particle image velocimetry. Experiments in Fluids, 41(6):933-947, 2006.


Figure 4 - Time-averaged streamwise vorticity iso-surface with Z-directional velocity vector contour at three planes. Free-stream velocity = 10 m/s (Re = 2x10^5).


DELFT HYPERLOOP Winner of SpaceX Hyperloop competition STUDENT PROJECT

Marloes Eijkman, Editor Leonardo Times

In January 2017, the Delft Hyperloop team won the Hyperloop Pod Competition! Thanks to an interview with Victor Sonneveld (Delft Hyperloop Electronics) and a peek into the Hyperloop working space, I gained some more insight into the Delft Hyperloop team and their pod technology. By sharing some of the team's experiences during the competition weekends and explaining some of the technical aspects of the pod in more detail, Victor gave a clearer picture of what the team came across during the Hyperloop project.

DELFT HYPERLOOP
Hyperloop is a new high-speed ground transport system consisting of a high-speed pod traveling through a tube running above or below ground. Inside the tube, the air pressure is very low, such that the pod would be able to travel at the speed of sound. The idea for Hyperloop was unveiled in 2013 by SpaceX, founded in 2002 by Elon Musk. As far as is currently known, SpaceX itself is not developing a Hyperloop; however, they are interested in helping the development of this futuristic transport system. SpaceX started the SpaceX Hyperloop Pod Competition with the goal of accelerating the development of the Hyperloop and stimulating students from all over the world to bridge the gap between academia and real-world problems. The aim of the competition is to design and build the best half-size Hyperloop pod. The Delft Hyperloop Dream Team, consisting of 33 students from different faculties of TU Delft, participated in and won the 2017 Hyperloop Pod Competition.

DESIGNING THE POD In 2013, Elon Musk presented his white-paper, Hyperloop Alpha. A white-paper is an informational document that can describe a new technology, service, or solution to a problem. Using the white-paper, which is readily accessible, the Delft Hyperloop team

designed a half-size carbon-fiber pod, which was tested in a tube provided by SpaceX. One requirement is that the pod must stay suspended while travelling through the tube. The Delft Hyperloop pod accomplishes this with its levitation system: using magnets, a repulsive magnetic field is created, which enables the pod to stay afloat at speeds above 30km/h. Next to the magnets of the levitation system, a structure with small wheels is connected to the pod. These carry the pod until a sufficient speed is reached for the magnetic field to be strong enough to keep the pod afloat. The team decided to focus on several aspects to make the pod design feasible overall: not only speed, but also, amongst others, safety and comfort. Eventually, the idea is for the pod to have virtual windows showing a simulated view of the surrounding landscape, ensuring that passengers can travel in a comfortable environment even though it is not possible to actually see the outside world.



The Hyperloop levitation system.

How did the team start off designing the pod?
"The first thing we did was go through Elon Musk's white-paper. One of the ideas he gave was that the pod would be kept afloat with air bearings. At the high speeds the pod would reach, the use of wheels would be impossible, and thus Elon Musk saw a solution in air bearings. When thinking of air bearings, you can think of an air hockey table: air is compressed and then released under the pod, which makes it float. While roughly calculating values for this system, the team found that air bearings might not be the best solution for keeping the pod afloat. A heavy compressor would be needed, and this compressor would require a lot of energy. In addition, the air gap that could be reached would only be around a millimeter. Furthermore, a compressor has moving parts and generates heat, which makes the design a lot more complicated. We found that the energy saved by making the pod float on air bearings would be overruled by the energy demand of the compressor. So, this was the first part of the white-paper that was discarded. Instead of air bearings, we used a magnetic system to make the pod float. Not all Hyperloop teams did this; some used air bearings as described in the white-paper. The magnetic levitation system is comparable to MagLev (Magnetic Levitation, a transport system being developed in Japan), but used in a vacuum tube instead of on rails."



What were your goals for the Hyperloop pod design, and which of those were you able to accomplish during the competition?
"One of our main goals was to reach a speed of 360km/h with our pod. However, the maximum speed reached during the competition weekend was 90km/h. The reason is that the testing of the pods is done in small steps. Because of the limited time available for those tests, our team had performed only two tests before the actual competition, in which the maximum speed reached was 50km/h. The step from 50 to 360km/h would have been too big. In addition, the speed of 360km/h was based on the assumptions that the tube would be 1.6km long and that the pod would be accelerated at 2.4g by an impulse system. In the end, the tube was 1.2km long and a pushing cart replaced the impulse system. The safety of the pod is an important issue, as one of our main goals was to create a safety-oriented pod design. An example of the safety-oriented design is the braking system. It consists of two magnets on the bottom side of the pod, which contract towards the beam in the middle of the tube during braking, causing resistance that slows the pod down. While the pod is moving, the magnets are held at a sufficient distance from the beam by springs, which are under tension. This system contributes to the safety of the pod since, in case of a failure and a consequent software shutdown, the springs are automatically released from tension and thus the pod brakes."
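The gap between the 90km/h achieved and the 360km/h goal follows directly from constant-acceleration kinematics (v^2 = 2*a*s). A quick check with the 2.4g and tube lengths quoted above; the no-drag assumption is illustrative, not the team's actual model:

```python
# How much run-up does a pod need to hit a target speed at constant acceleration?
# From v^2 = 2 * a * s, assuming it starts from rest and drag is negligible.
G = 9.81  # m/s^2

def runup_distance(v_target_kmh, accel_g):
    """Distance [m] needed to reach v_target from rest at constant acceleration."""
    v = v_target_kmh / 3.6
    return v**2 / (2.0 * accel_g * G)

d_360 = runup_distance(360.0, 2.4)
print(f"run-up for 360 km/h at 2.4g: {d_360:.0f} m (tube: 1,200-1,600 m)")
```

At the assumed 2.4g, only about 210m of the tube would have been needed to reach 360km/h, so the limiting factor was the incremental test protocol and the pushing cart, not the track length.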

JANUARY 27, 28, 29: SPACEX HYPERLOOP POD COMPETITION
The SpaceX Hyperloop Pod Competition started off in January 2016 with the design weekend, during which the teams presented their plans for the overall pod design. Here, the TU Delft team won the Pod Innovation Award. In June, the pod of the Delft Hyperloop team was presented in Delft. The design weekend is followed by two competition weekends. Next to its headquarters in Hawthorne, California, SpaceX has designed and built a track to test the Hyperloop pods designed by the competing teams. Between January 27 and 29, 2017, the first competition weekend took place. The whole Delft Hyperloop team went to California to test their half-size Hyperloop pod and compete with the other teams. The second competition weekend is planned for summer 2017, when the Hyperloop teams will compete for the fastest pod design.

Why did you win the competition?
"First of all, to compete for the win you had to make it to testing in the tube, which only three teams managed. The selection of the teams was done through different tests, one of which was placing the pod (switched on) inside a vacuum chamber and checking that nothing went wrong (like the exploding batteries that happened to one of the teams). What exactly the pod had to fulfill to win the competition, we don't know. For example, there was a judging session during which five judges came to take a look at the pod, asked questions, and watched how the team tested the


pod and handled the risky systems (in our case, the magnets). The judges kept track of the score and, apparently, in the end our team received the highest score."

The Delft Hyperloop Team.

How did the testing in the tube work?
"The pod was placed in the tube, carried by its wheels at first. A cart was placed behind the pod to push it to a certain speed, after which the cart stops and the pod carries on. The final speed to which the pods were pushed was the same for every competing team's pod. This pushing cart was initially not the plan of SpaceX; the plan was to give every pod the same impulse, which would have made the lighter pods go faster than the heavier pods. However, SpaceX did not manage to build the impulse system in time. The pushing system was a slight disadvantage for us, because our design was the lightest. In a final design, the pod would be brought up to speed with linear induction motors, which are still in development."

Did anything concerning the design go wrong during the competition?
"While designing, we did not anticipate gaps between the aluminum sheets of the bottom part of the tube. Before the pod reaches the speed at which the magnetic levitation system works, it is carried by wheels. During testing, the wheels got stuck in these gaps for brief instants, causing the pod's software to shut it down due to an overuse of power. To overcome this, and other problems caused by irregularities in the tube, we had to take part of the software out."

Will you continue to the next competition, SpaceX Hyperloop Pod Competition II?
"We are not registered for the next competition. It will take place this summer already, and this was only announced last September. The Delft Hyperloop team tried to gather a new team for this competition, but did not succeed, mainly because it is not a convenient (too late) time of the year to recruit people for a new team. As a team we thought: if we do it, we do it well, so it was decided not to compete in the next competition. However, the plan is to compete in the third competition weekend, in the summer of next year, with a new team."

Is Hyperloop the future?
"Interest in Hyperloop is worldwide, with multiple start-ups working on making the idea a reality. Part of the Delft Hyperloop team that won the competition weekend in California continues developing Hyperloop in their brand-new start-up HARDT. Besides HARDT, multiple companies are working on Hyperloop: the company Hyperloop One is working on Musk's Hyperloop idea in California, and another Hyperloop start-up, called Arrivo, was just launched in California. The advantages of travelling by Hyperloop are said to be numerous: it is safe, fast, clean, comfortable, and cheap. Especially for shorter distances, travelling by Hyperloop would be significantly faster than travelling by plane or train. However, a few questions remain; some aspects of Hyperloop still pose unsolved problems."

Can you name a difficulty of Hyperloop that still needs to be overcome?
"The Hyperloop pods will have difficulty handling corners. Only wide curves can be taken while the pod is traveling at high speed. Considering this, the path of the Hyperloop has to be chosen carefully, and, if really needed, the pod could be slowed down to make a sharper turn."

Will the Hyperloop be financially realizable?
"The Hyperloop concept would be seen as an economic success if the earnings cover the operational costs, so not including the cost of building the tubes. However, it has been estimated that, in time, the cost of building the tubes would also be covered by the earnings of the Hyperloop. So yes, Hyperloop is expected to be financially realizable, but I cannot tell you right now how long it would take to cover all the costs."

Evidently, the Hyperloop development is still at an early stage. At this moment it is difficult to say whether the Hyperloop will actually become the transportation of the future, and when this would happen. The idea of Hyperloop, however, has reached ambitious engineering teams from all over the world who are working on making it a reality. The prospect of Hyperloop becoming a success seems tangible, and it will surely be interesting to see if and how it takes shape in the future.



UNGRATEFUL HUMANS The long-term effects of living in a technologically advanced society. NICK'S CORNER

Nicolas Ruitenbeek, Final Editor Leonardo Times

We live in an era where technology truly makes the world go round, despite money being a keen contestant. From designing reusable rockets to broadcasting Saturday Night Live’s latest political caricature, the myriad possibilities that lie at our fingertips are simply overwhelming. The benefits of being immersed in a technologically evolving world are undeniable, but at what cost?

How many people do you know who complain relentlessly about air travel? From an aerospace perspective, this might be a rather pointless question, as we are (supposedly) fascinated by the marvel of being suspended at 36,000ft in a tin can. However, it is not uncommon for those unaffiliated with the engineering world to have a contemptuous rapport with soaring amongst the clouds. Jet lag, neck cramps, and time-wasting airport protocols are the new focal points of the avid traveler. The Golden Age of Aviation is marked as the period between the end of World War I (1918) and the beginning of World War II (1939). This was when a progressive change was witnessed from the wood-and-fabric biplanes of World War I to fast, streamlined metal monoplanes. It was also a time when one would dress up in dashing attire to board a flight and enjoy the privileged conveniences that air travel provided. Nowadays, we find ourselves packed like sardines with nary a suit in sight, counting down the


hours until touchdown. We are completely oblivious to, and ungrateful for, the marvel of engineering that is seamlessly transporting us through the air to our next destination. Greed can be somewhat blamed for this. Humans are capable of incredible feats, but we have a tenacious and stubborn tendency to always want more. As new technologies continue to be rolled out, we openly criticize anything that falls even slightly short of our out-of-reach expectations. We have become so focused on where we are going and what lies ahead that we seldom take the time to appreciate the present moment. Technological innovations have increased global interconnectivity and rendered the world a smaller place, but they have also instilled an unfortunate sense of thanklessness. Education is another field that has been significantly impacted by technological advancements. In 1975, Apple Inc. began donating computers to schools, and by 1977 it was estimated that 90% of students who had

access to workstations in their learning environments had used them. In just over two years, the use of computers in academia had almost become second nature. Today, there is not a student in sight who doesn't have access to one. Educational institutions welcomed this new working environment, as it streamlined their entire teaching system. Researchers continue to reap the fruits of modernization, as it provides them with new hardware and software to validate their theories. Universities, colleges, and schools hence appear to be the perfect contenders for newfound innovations to thrive. This is, however, no longer the case. Universities across Europe report that, on average, only 25% of students regularly attended lectures in the academic year 2015-2016, and this number continues to decline. This is in no way correlated with the number of students attending or graduating from said universities. It simply shows that students are less inclined to listen to a teacher read from a book at nine in the morning when they can watch a recording of the same lecture at their convenience. The lack of attendance leads to a complacent lecturer who isn't willing to put as much work into preparing his or her presentations, and the vicious circle repeats itself anew. Although this might not impact high schools


just yet, higher educational institutions are irresolute on how to deal with this predicament. Lecture halls slowly become obsolete, and buildings can be repurposed or leveled. Universities have been ever enthusiastic to incorporate new technologies, but now appear to have shot themselves in the foot. The old-school days of regularly attending classes and having a you-scratch-my-back-I'll-scratch-yours mentality are quickly coming to an end. Establishments are now investigating the implementation of online learning programs to replace face-to-face lectures. The future of education will have to be entirely revamped, and the accreditation of an institution will hinge on how open and effective it is at employing new learning systems, as opposed to how many vines and ivy plants it has growing around the perimeter.

From a social standpoint, we feel as if the Internet has brought the world closer together. People are connecting globally, giving rise to a post-modern culture that only the likes of sci-fi novel writers could predict. Social networking is a leading cause of many issues in our proverbial sci-fi society: it gives the illusion of a social life when, in fact, it promotes quite the opposite, isolation. We spend countless hours glued to our phones, evaluating our social status relative to who liked our photo, instead of actually socializing with people. The term "networking" is no longer reminiscent of cocktail parties and late nights playing pool in a shady bar, but rather of accepting invitations from strangers on LinkedIn. Although social platforms provide companies with an inside view of the lives of their potential employees, the latter are equally likely to flaunt their non-existent talents and skills. Playgrounds have become empty, and the streets that once echoed with the sounds of children laughing and playing are painfully quiet. "Go play outside" has become a phrase of the past that now refers to sitting on a bench while playing on an iPad. The manner in which the millennial generation raises their children differs greatly from previous generations, and consequently, children are expected to grow up with an unprecedented lack of social skills.

Intelligence agencies are perhaps at the forefront of utilizing new technologies to meet their high demands, though at which point should the ethical implications of their actions override their directives? The Central Intelligence Agency was flying Lockheed U-2s during the Cold War in 1960 to gather intelligence on U.S.S.R. ground movements. Since then, computational advancements have allowed such agencies to gather far more accurate data that no longer has to be acquired at stratospheric heights. Edward Joseph Snowden is internationally known for revealing details of classified United States government surveillance programs and has been under temporary asylum in Russia since 2013 for violating the Espionage Act of 1917. That Act's long title reads: "An Act to punish acts of interference with the foreign relations, and the foreign commerce of the United States, to punish espionage, and better to enforce the criminal laws of the United States, and for other purposes." At the time, the U.S. government clearly did not predict the endless capabilities that technological advancements would provide in the 21st century. In 2007, the National Security Agency (NSA) was monitoring every anthropoid on the face of the earth as a means of preserving computer network security. This was achieved by harvesting millions of emails and instant messaging

contact lists, searching email content, tracking and mapping the location of cell phones, as well as hacking computer webcams. All of which unequivocally violates the outdated Espionage Act, as the government itself is committing espionage on an international level. The justification for this was that by gathering copious amounts of data on every living soul, national and foreign relations could be rigorously scrutinized. It was later revealed that out of the $52.6 billion U.S. national intelligence budget, $25.3 billion was allocated to data gathering. This yet again reflects the greedy and obsessive side of humanity, but at the cost of jeopardizing everyone’s privacy. Any further technological innovations that might assist these agencies in their ventures will have to be carefully employed without violating any human rights. We have a tendency to romanticize the past. The simpler days that weren’t buzzing with the sounds of computers and cell phones. There is a fallacy that the past was an innocent, unadulterated, and perfect place where humans achieved perfection, and the modern world has completely ruined it; the Golden Eras are a thing of the past. Although self-pity and soul-searching might be a trend nowadays, we should rejoice and strive to make every era a golden one, especially from an engineering perspective. As engineers, we strive to design and build a better, more efficient world. For instance, current efforts are centered on artificial intelligence (AI). The potential applications of neural networks applied to artificial learning could prove to be endless. From a pilotless aircraft to a new streamlined teaching system, AI systems are predicted to be the next major breakthrough to be incorporated into our daily lives, rendering them even more disruptive. We thrive in this technologically advanced society and it is therefore of paramount importance that we consider the potentially unfavorable consequences of our work. 
Maintaining a sound moral compass is perhaps the most important tool an engineer can possess.

LEONARDO TIMES N°2 2017



CHALLENGE OF AUTOMATION
Aviation symposium of the VSV ‘Leonardo da Vinci’
AVIATION DEPARTMENT

Roger Hak, BSc Student Aerospace Engineering, TU Delft

Automating systems in aviation poses numerous challenges, ranging from political issues to the implementation of safety measures. Since automation in aviation is widely discussed, the Aviation Department organised a symposium themed "Challenge of Automation: How Technology Elevates Aviation", which took place on March 7th.

Since the earliest days of aviation, flight automation and on-board systems have been continuously enhanced. Incorporating every aspect of this very broad topic into a one-day symposium would be challenging, to say the least. Therefore, a selection of subtopics was made. The 24th Aviation Department chose to address the automation of airport systems, commercial civil aircraft, and air traffic control, as these fields interact most significantly with the engineering world. This year's edition also included a workshop for 36 symposium participants, themed "The Automation of the Civil Aircraft" and organized by ADSE.


In the final minutes of the countdown, the auditorium was packed, with more than 500 participants in attendance. Casper Dek, president of the VSV ‘Leonardo da Vinci’, kicked off the programme. In his introduction, he touched upon what the symposium means for the society and motivated the audience by posing the challenging question of the day. Casper ended by introducing the chairman of the symposium, Lt. Gen. Ret. Schnitger, who started off with a decomposition of the broad topic that is automation, mentioning its three main aspects: Public, Politics, and Perception. Very quickly, it became clear what the focal point of the day would be: we would travel vicariously by commercial airplane and experience automation along the way. That meant starting off at the airport, the topic of our first block, then pivoting to the aircraft, the topic of our second block. Finally, even though our passenger does not experience it directly, the air traffic control that safely guides the aircraft would be examined in the third block. It was then time to introduce our first speaker: Albert van Veen, CIO of Schiphol Group.

AUTOMATION AND DIGITIZATION OF AIRPORTS

Albert van Veen has a clear mission: he wants to make Schiphol the leading digital airport by 2018. With concepts such as the seamless passenger journey and the smart airport, passengers can expect an improved travel experience. This means, for example,



less time spent looking up flight information, less time spent in a security line, and notifications on your smartphone when you are about to miss a flight. Furthermore, van Veen explained the digital passport concept and how collecting all the required information on your phone could greatly improve the fluidity of air travel.

AUTOMATION OF CIVIL AIRCRAFT

In the first block, we followed our passenger from the parking lot into the airport. He has now boarded the aircraft. We kicked off this second block with a talk by Winfried Lohmiller, executive expert at Airbus. Lohmiller shared the trends in automation and elaborated upon the concept of RPAS (Remotely Piloted Aircraft Systems). These systems have numerous applications, such as border control, science, communication, disaster control, environmental protection, coastal surveillance, and critical infrastructure. The main challenge for RPAS is certification, especially regarding the automation part. After a short Q&A, it was time to head for lunch, during which the audience had the opportunity to ask the speakers questions and share their take on automation. This naturally resulted in interesting conversations between the speakers and students.

After lunch, the auditorium filled again and Lt. Gen. Ret. Schnitger introduced the next speaker, on behalf of NLR: Frederik Mohrmann. Mohrmann gave a very inspiring lecture sharing his views on automation. He uncovered some of the paradoxes of automation and shared one of his main approaches, best summarized in the following sentence: "Automation should not fix the human error, it should leverage the human asset." Furthermore, Mohrmann encouraged us to think about how we can promote human strengths and a higher level of thinking.

Next, professor Max Mulder took the stage and held an eye-opening lecture. Mulder showed us that the debate about automation is a very old one, dating back to the first fully automated flight in 1947. He also pointed out that numerous incidents are reported in aviation every day in which pilots or air traffic controllers save the day. These incidents do not make the news, but they do prove that the human is underrated within the automation loop. Mulder is skeptical of people who predict that within ten years aviation will have replaced pilots with machines, mainly because people have been saying this for the past fifty years. On top of that, Mulder believes that humans will always make a significant contribution to the system.

After these two lectures, which took more of a theoretical and engineering approach, Bart de Vries, head of flight operations at KLM, was introduced. De Vries' lecture tied together everything that had been discussed so far, since he approaches automation from a practical point of view. For example, he touched upon safety management and training in relation to automation. De Vries also mentioned how technology makes life easier for pilots, for instance simply by using an iPad on board.

AUTOMATION WITHIN AIR TRAFFIC CONTROL

Having followed our passenger through the airport and flight portions of the day, it was now time to look at the ever-important but behind-the-scenes aspect of the symposium. Professor Jacco Hoekstra opened the block with a lecture on the automation of air traffic control. The general concept was that the airspace around an airport will still be controlled by air traffic controllers, as is done now. However, when an aircraft is outside an airport's airspace, computers on board can calculate the flight route. This way, air traffic control is decentralised and the aircraft itself determines which route to fly, while the on-board computers make sure there will not be any conflicts.

It was then time for the next speaker, on behalf of Indra: Sepehr Behrooz. Behrooz shared insights into the latest ATC systems that Indra is currently working on. He also sketched the general trend in air traffic control for the coming fifteen years: enhanced use of the available airspace, more precise trajectories, more air-ground communications, and a higher number of conflicts. Additionally, a four-dimensional air-ground negotiation will be required to agree on optimal conflict-free routes.

The final lecture of the day was held by Frank Hommes from LVNL, Air Traffic Control the Netherlands. He is an active air traffic controller and therefore, with his practical experience, a valuable part of this programme block. Hommes mainly discussed the challenges that come with the implementation of new ATC systems. In essence, a new system cannot be implemented directly as designed. Changes should be made very gradually, so that air traffic controllers can easily adapt to minor changes and the risk of new systems not working as expected is minimized.

After another Q&A, Lt. Gen. Ret. Schnitger wrapped up the day and Roger Hak shared some final thoughts on automation. This edition of the VSV symposium proved to be a great success. The goal was to inspire and stimulate the audience, trigger them into thinking about the challenges that accompany automation, and offer insights into possible solutions. The 24th Aviation Department can guarantee that this goal has been reached. In conclusion, this symposium shed light on how technology is currently changing aviation and how it will continue to do so in the future. Regardless of one's opinion of this development, the Aviation Department hopes that the symposium has encouraged people to dive deeper into the topic. The theme of this symposium will remain a hot topic in aviation for a long time to come, and many of its questions have yet to be answered.

The aviation department of the Society of Aerospace Engineering Students VSV ‘Leonardo da Vinci’ fulfills the needs of aviation enthusiasts by organising activities such as lectures and excursions in the Netherlands and abroad.
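The decentralised concept described in Hoekstra's lecture hinges on each aircraft probing surrounding traffic for future conflicts on its own. As a purely illustrative sketch (not code from any of the lectures), a minimal pairwise closest-point-of-approach check might look as follows; the 5 NM separation minimum and 20-minute lookahead are assumed round numbers, and the `Track` type is hypothetical.

```python
# Illustrative sketch only: a pairwise conflict probe of the kind each
# aircraft could run on board in a decentralised ATC scheme.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # east position [NM]
    y: float   # north position [NM]
    vx: float  # east speed [NM/min]
    vy: float  # north speed [NM/min]

def conflict(a: Track, b: Track, sep: float = 5.0, lookahead: float = 20.0) -> bool:
    """True if a and b close to within `sep` NM during the next `lookahead` minutes."""
    dx, dy = b.x - a.x, b.y - a.y          # relative position
    dvx, dvy = b.vx - a.vx, b.vy - a.vy    # relative velocity
    dv2 = dvx**2 + dvy**2
    if dv2 == 0.0:                          # identical velocities: distance is constant
        return dx**2 + dy**2 < sep**2
    # Time of closest point of approach, clamped to the lookahead window
    t_cpa = max(0.0, min(lookahead, -(dx * dvx + dy * dvy) / dv2))
    mx, my = dx + dvx * t_cpa, dy + dvy * t_cpa
    return mx**2 + my**2 < sep**2
```

Extending such a probe from detection to resolution, i.e. actually negotiating new conflict-free routes between aircraft, is where the real research challenge lies.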


NLR is the place for anyone with a passion for technology www.werkenbijhetnlr.nl


NLR is the Netherlands Aerospace Centre for identifying, developing and applying advanced technological knowledge in the area of aerospace. With state-of-the-art facilities and excellent staff.

Would you like to develop your talents and your competencies? At NLR you’ll get all the space you need! NLR - Netherlands Aerospace Centre

p) +31 88 511 33 30

e) hr@nlr.nl

i) www.nlr.nl


Come on board

Are you passionate about aviation and do you want to:
• Work on many of the world's most exciting and largest aerospace projects
• Be part of a large aerospace company
• Be inspired in an innovation and technology driven culture
• Contribute to a growth strategy

Take a look at fokker.com, go to careers, read about what we have to offer you and come on board. Fokker Technologies, a division of GKN Aerospace, is a leading global aerospace specialist that develops and manufactures highly engineered advanced aircraft systems, components and services for aircraft manufacturers and airlines worldwide. Colleagues in the Netherlands, Romania, Turkey, Canada, Mexico, USA, China, India and Singapore share your passion for Aerospace. GKN Aerospace, the parent company of Fokker Technologies, is a global first tier supplier of wing, fuselage and engine structures, nacelle systems, landing gear and wiring systems, transparencies, ice protection systems and aftermarket services, with a global workforce of over 17,000 employees in 15 countries.

