HiPEACinfo 67


November 2022
Exploring the IoT, from architectures to networks
On-chip wireless communication, with Sergi Abadal
Rolf Riemenschneider on IoT opportunities
Maspatechnologies: From EU project to innovative start-up
HiPEAC webinar series

contents

3 Welcome
Koen De Bosschere

4 Policy corner
The IoT and the future of edge computing in Europe
Rolf Riemenschneider

6 News

14 HiPEAC voices
Very very good IoT: An interview with Teppo Hemiä, Wirepas

16 IoT special feature
'For modern computing systems to work efficiently, communication needs to be as fast as computation'
Sergi Abadal

18 IoT special feature
Teaching the IoT to learn with VEDLIoT
Jens Hagemeyer and Wisse Hettinga

20 IoT special feature
Edge distributed intelligence in the internet of autonomous systems
Ovidiu Vermesan

21 IoT special feature
IoT-powered healthcare improvements
Stefan Scharoba and Marc Reichenbach

22 IoT special feature
'Our nervous system as a whole is like an IoT'
David Atienza Alonso

24 IoT special feature
Modelling IoT-Fog-Cloud systems: The DISSECT-CF-Fog simulator
Andras Markus and Attila Kertesz

25 IoT special feature
Low-power, high-quality graphics made in the EU: The Think Silicon story
Georgios Keramidas

26 Inside the box
Introducing the Gaisler GR765 space processor
Fabio Malatesta

28 Innovation impact
Perfect timing: MASTECS' multicore timing analysis for safety-critical systems
Francisco J. Cazorla

30 Innovation impact
Building new innovation pathways with DigiFed
Ana-Maria Gheorghe, Linda Ligios, Ramona Marfievici and Isabelle Dor

32 SME snapshot
How to robotize any vehicle, with Aitronik
Roberto Mati

33 Industry focus
FPGA-based SoC customization for the automotive market
Daniel Madroñal, Raquel Lazcano, Francesca Palumbo, Tiziana Fanni and Katiuscia Zedda

34 Innovation Europe
AI vs. colorectal cancer: REVERT's data-based therapy
Horacio Pérez-Sánchez and Antonio Jesús Banegas-Luna

35 HiPEAC futures
Career talk: Federico Iori
How A-WEAR is training the next generation of wearables engineers
HiPEAC internships: Imaging analysis for satellite propulsion at Campera
Three-minute thesis: Domain-specific low-touch FPGA design flows

High energy prices are the talk of the day in Europe. The continent is bracing for a tough winter in which part of the population might have to make a difficult choice between eating and heating. This is the third previously unthinkable event to take place in the space of three years: it all started with Europe-wide lockdowns to fight the COVID-19 pandemic, followed by a war in Ukraine, and now an unprecedented energy crisis. This summer, we also experienced a mega-drought in Europe. We have to face the fact that commodities like water and energy, and values like the freedom to move, or peace, can no longer be taken for granted in Europe in 2022.

HiPEAC is the European network on high performance embedded architecture and compilation.

hipeac.net

@hipeac hipeac.net/linkedin

hipeac.net/tv

HiPEAC has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 871174.

Cover image: Montri on Adobe Stock

Design: www.magelaan.be

Editor: Madeleine Gray

Email: communication@hipeac.net

We assume that the pandemic and the Russia-Ukraine war will eventually end, but the water and energy problem will most probably remain. Abundant and affordable water and energy are essential to sustain a modern society. If they become scarce, it will impact almost everything: people, industry, even nature and wildlife. The first effect will be that sectors that use a lot of water (such as agriculture or mining) or energy (such as transportation or the chemical industry) might become less profitable and move elsewhere. Steel and fertilizer production in Europe has basically halted, and if this lasts long enough, it will permanently move elsewhere. So Europe risks losing part of its water-intensive and energy-intensive industry.

This is a wake-up call. We will have to adapt to this new normal, in which the sky is no longer the limit and in which we will have to consider carefully how to use the finite resources available to us. Industry will have to become less dependent on water and energy, and will have to come up with solutions that enable customers to save both.

This magazine's theme is the internet of things, undoubtedly one of the key enabling technologies to tackle the growing sustainability challenges and to slow down climate change. It is also a technology area in which Europe could take the lead. HiPEAC will emphatically support the community in these green and digital transitions. In 2023, HiPEAC will morph into a network on 'High Performance, Edge And Cloud Computing'. The activities we will organize will largely remain the same, but the focus will gradually shift. More information about the next HiPEAC will be available in the January magazine – make sure you get your copy.

Koen De Bosschere, HiPEAC coordinator


The IoT is taking off in many areas thanks to recent technological developments, and the European Union is establishing a holistic strategy to ensure that Europe can take full advantage of new upcoming opportunities. In this article, the European Commission’s Rolf Riemenschneider (Head of Sector IoT) sets out key implications for the HiPEAC community.

The IoT and the future of edge computing in Europe

A network of connected digital devices, sometimes known as 'smart' devices, the internet of things (IoT) encompasses research domains including artificial intelligence (AI), 5G, cloud computing, blockchain and micro- or nano-systems. Recent developments, including augmented device capabilities, faster communication networks, the standardization of communication protocols, and more affordable sensors and microelectronic devices, have all contributed to turbocharging the IoT phenomenon.

As outlined by the European Commission’s Max Lemke (Head of Unit IoT) in HiPEACinfo 66 and at the 2022 HiPEAC conference, both the IoT and the shift to edge computing offer immense potential for Europe to create innovative services and business models rooted in verticals around applications in a variety of sectors. In addition, in a post-COVID-19 era, the IoT is accelerating pathways to the green and digital transitions: facilitating energy resilience, such as through the integration of renewables; supporting the switch to e-mobility; and the digitalization of key industrial sectors, to name a few examples. A holistic strategy for mastering the cloud-edge-IoT continuum will open up new ways to exploit innovation across the computing continuum.

Data plays a key role in such a transformation. An unavoidable consequence of the IoT, and the devices and applications that it powers, is the massive amount of constantly changing data that is generated as a result of digitalization. Yet this data can be harnessed to achieve positive ends, such as by optimizing wind power or enabling real-time traffic avoidance.

A prospering data economy can only become a reality with the right legislative framework, and with computing power moving closer to the edge, legislation will influence rules right across the computing continuum. The Commission has responded with digital policy initiatives such as the Data Act, proposed to the Council and Parliament in March 2022, with the goal of making more data available and setting rules on data usage and access. Similarly, the Data Governance Act seeks to increase trust in data sharing, strengthen mechanisms to increase data availability and overcome technical obstacles to the reuse of data.

Of course, IoT data also has to be processed in real time if meaningful conclusions are to be drawn and decisions made swiftly. With advances in embedded computing, microprocessor capabilities and lightweight AI, more data processing and decision-making is possible at the edge. We look forward to seeing more developments in this area from the HiPEAC community.

Rolf Riemenschneider at IoT Week 2022

The new IoT landscape will help catalyse the green transition

New developments in focus at IoT Week 2022

The return of IoT Week in 2022 was an opportunity to bring the community up to date on these announcements, along with a host of other updates. Co-located in Dublin with its research-focused counterpart, the Global IoT Summit, the event brought together almost 700 participants from 49 countries, with a total of 155 conference sessions on offer. One highlight was the exhibition space, opened by Irish Green Party Minister of State Ossian Smyth, where EU-funded projects such as VEDLIoT (see pp.18-19) were able to showcase their work.

This year's edition of IoT Week was the first of its kind in the context of the area 'From Cloud to Edge to IoT', and took the opportunity to bring together stakeholders of the first Horizon Europe call under Cluster 4, Destination 3. On 22 June, a launch session was dedicated to a new group of future European platforms for the IoT and edge. This comprised six meta-operating systems research and innovation actions – ICOS, FluiDOS, NEMO, NebulOus, aeROS and NEPHELE – and three coordination and support actions – OpenContinuum, Unlock-CEI and HiPEAC – in the cloud-edge-IoT domain, receiving a total of €64 million in EU funding. The session also discussed the launch of a new edge-cloud-IoT web portal (see 'Further information', below), acting as a platform to support the Horizon Europe ecosystem and promoting opportunities for open calls and large-scale piloting.

On a broader scale, the event also highlighted European initiatives that demonstrate how responses to geopolitical challenges can be an opportunity to accelerate the green and digital transitions. To achieve this, industry must join forces at EU level and beyond to embrace innovation and push for the security, resilience and carbon neutrality of the EU’s industrial fabric. As an example, in an IoT Week session featuring speakers from the United States, the EU explored new ways of collaborating on fundamental research between the European Commission and the United States National Science Foundation, focusing on new concepts for distributed computing and swarm intelligence.

Harnessing the power of IoT technologies will have a positive impact in many sectors of activity. Tightly interwoven with the Digital Decade and upcoming digital policies such as the Data Strategy, the IoT in Europe will also benefit from Europe's ambition to supply next-generation chips through the Chips Act, thanks to the considerable demand side that the IoT represents. Through their development of disruptive technologies, HiPEAC researchers can be central to this effort, and to steering the direction of the IoT in Europe.

FURTHER INFORMATION: Edge-cloud-IoT web platform eucloudedgeiot.eu

IoT Week 2022 iotweek.org/iot-week-2022-dublin

EU-funded project factsheet: ‘Meta-operating systems for the next-generation IoT and edge computing’ bit.ly/EU_meta-operating_systems_IoT

'Two different twins': Article by Sandro D'Elia on the green and digital transitions, HiPEACinfo 62, pp.4-5 bit.ly/HiPEACinfo62_policy_corner

Budapest sees triumphant return of the HiPEAC conference

A year and a half after it was originally scheduled, the Budapest edition of the HiPEAC conference finally took place on 20-22 June 2022. Bringing together almost 400 attendees from 28 countries, the event was a welcome opportunity to reconnect with the computing systems community after two years of global pandemic restrictions.

HiPEAC 2022 General Chair Cristina Silvano (Politecnico di Milano), who opened the event, told HiPEAC: ‘After two years of planning with the conference committee, and a virtual conference in 2021, it was such a pleasure to see colleagues in person. Thanks to the efforts of our programme chair, Diana Marculescu, and workshops chair Sascha Ührig, we were able to offer participants a varied and stimulating programme of activities.’

Three keynote talks covered diverse areas of interest for the computing systems community. Kicking off the conference, Hai 'Helen' Li (Duke University) explained methods for accelerating machine learning through sparsity in deep neural networks and tailored architectures. Ovidiu Vermesan (SINTEF) briefed the community on the technologies for the future internet of autonomous systems – see p.20 for our main takeaways. On Wednesday 22 June, Bianca Schroeder (University of Toronto) gave an overview of how flash-based storage devices actually behave in the field.

Meanwhile, the European Commission’s Max Lemke also addressed the audience, setting out the implications of the shift towards edge computing for Europe and stressing the importance of the HiPEAC conference in this area.

As usual, the main paper track, spanning areas from hardware-software acceleration to compilation, was complemented by a vibrant programme of workshops and tutorials. These ranged from long-running workshops such as the RAPIDO workshop on simulation and performance evaluation to a new series of workshops on cyber-physical systems: ENHANCE, FORECAST and STEADINESS.

The dynamic industry exhibition showcased innovations across the compute continuum, while companies had the opportunity to pitch and network during the industry session. Meanwhile, the science, technology, engineering and mathematics (STEM) student day brought new students into contact with the HiPEAC community, allowing them to make valuable contacts and get up to date on trends in computing systems research.

During the conference, HiPEAC Coordinator Koen De Bosschere (Ghent University) paid tribute to former HiPEAC member Béla Fehér, who was due to host the conference but who tragically passed away in the autumn of 2020. Koen commented: 'It is very sad that Béla could not be in Budapest with us. He is much missed within the HiPEAC community.' He thanked Béla's colleague Péter Szántó (Budapest University of Technology and Economics) for stepping in as local host and helping the event run smoothly.

The HiPEAC conference would not be the same without the generosity of its sponsors, many of whom continue to support HiPEAC year in, year out. A full list of sponsors is available on the HiPEAC 2022 website.

FURTHER INFORMATION:
hipeac.net/2022/budapest
HiPEAC22 Google Photos album: bit.ly/HiPEAC22_photos
HiPEAC22 YouTube playlist, including full keynote talks: bit.ly/HiPEAC22_videos

Bentornati a Fiuggi! Notes from ACACES 2022

Taking place on 10-16 July in Fiuggi, Italy, this year’s ACACES summer school was a resounding success, bringing together around 200 participants from all over the world. This was the eighteenth edition of the event, which has become synonymous with top-quality courses and relaxed intergenerational networking in an idyllic Italian setting.

The opening keynote talk was provided by IBM’s Andrea Corbelli, who set out the company’s quantum roadmap. For the entrepreneurial talk, AMD’s Maximilian Odendahl gave an inspiring account of how his company, Silexica, progressed from university spinoff to being acquired by technology giant Xilinx.

Meanwhile, the courses – delivered by world-class tutors – spanned the compute continuum across high-performance computing, the cloud, and cyber-physical systems, covering topics from acceleration to compilation, and from security to sustainability.

Participants were given the chance to get feedback on their own research at the poster session, attended by senior staff. In a now-established ACACES tradition, the careers roundtable gave early career researchers valuable advice on different career paths in industry, academia and entrepreneurship. The event also provided an opportunity for information gathering for the 2023 edition of the HiPEAC Vision.

Surrounded by the green hills of Frosinone, about an hour and a half from Rome, the quiet spa town of Fiuggi is known for its healing waters and picturesque local trails. Little wonder that many ACACES attendees return year after year, or that the summer school has been the setting for many productive research partnerships.

The next edition of ACACES will take place in July 2023 – keep an eye on the HiPEAC website for further information and details of how to apply.


New-look HiPEAC launching 1 December

In December, the seventh edition of the HiPEAC project will kick off, representing a continuation of European Union investment in the HiPEAC community until May 2025. With the title 'High Performance, Edge And Cloud Computing', the next phase of HiPEAC aims to contribute to rapid technological development, market uptake and digital autonomy for Europe in advanced digital technology and applications across the whole European digital value chain.

HiPEAC 7 will act as a catalyst, creating the arena for exchanging concepts, future trends, and discussions among ecosystem players. In turn, it will support the European Commission and the European computing constituency by providing roadmaps for research and innovation relating to next-generation computing technologies and applications.

The project responds to the call HORIZON-CL4-2021-DATA-01-08 – 'Roadmap for next generation computing and systems technologies' (Coordination and Support Action).

Thanks to successful applications to competitive funding rounds, HiPEAC has been funded continuously since 2004. We are grateful to the European Commission for their continued support of the network, which is both a reflection of its value to the computing systems community in Europe and a tribute to its many active members.

Further information will be available in due course on the HiPEAC website and in future issues of HiPEACinfo. In the meantime, the CORDIS factsheet provides an introduction to the new project.

Digital Europe calls – apply by 24 January

The European Commission has opened a third call within its Digital Europe programme, with a total €200 million available in funding. There are calls for the creation of data spaces – concrete arrangements facilitating data sharing and pooling – for smart communities, media, manufacturing and mobility.

There is a call to deploy the European AI-on-demand platform, as well as a call for large-scale pilots for cloud-to-edge based service solutions. To help expand digital skillsets in Europe, a further open call is looking for specialized education programmes or modules in key capacity areas.

Meanwhile, the call for an initial network of European Digital Innovation Hubs (EDIH), with a deadline of 16 November, seeks to establish the EDIH network in Europe. A key plank of the Digital Europe strategy, EDIHs provide a ‘one-stop shop’ of technical expertise and testing facilities, helping companies to respond to digital challenges and become more competitive.

Further calls under the 2021-2022 Work Programme are due out shortly – keep an eye on the funding and tenders portal for more information.

FURTHER INFORMATION:

Press release – 'Digital Europe Programme: Commission opens calls to invest €200 million in digital tech': https://bit.ly/Digital_Europe_calls_3_PR

CORDIS factsheet for the new HiPEAC project: cordis.europa.eu/project/id/101069836

MachineWare: Fast virtual platforms made in Germany

Rainer Leupers (RWTH Aachen)

Thanks to its open instruction-set architecture concept, RISC-V hardware is gaining enormous industry traction, with roughly 10 billion chips already shipped on the semiconductor market. Yet how can we make the RISC-V software ecosystem keep pace? Fast system-level simulators (or 'virtual platforms') are among the key tools for early software development and are enablers for hardware / software codesign. While virtual platforms are widespread today in many application domains, there is an ever-increasing 'need for speed' due to growing complexity levels. Simulating advanced applications at binary-code level under control of a full operating system stack demands simulation speed in the range of giga-instructions per second.

The Aachen-based start-up MachineWare GmbH is on a mission to dramatically cut simulation time for RISC-V-based systems-on-chip and beyond. The new company, founded in April 2022 by Lukas Jünger, Dr Jan Henrik Weinstock, and Professor Rainer Leupers as a spinoff of the ICE Institute at RWTH Aachen University, provides several innovative technologies and products in the virtual platform domain:

• Virtual Components Modeling Library (VCML): An open-source SystemC TLM-2.0 modelling library including a wide selection of off-the-shelf models for commonly required components, such as buses, memories, timers and input / output (I/O) controllers, e.g. Ethernet, PCI/e and VIRTIO.

• Fast Translator Library (FTL): A retargetable infrastructure for building high-performance, functional instruction set simulators. Its extreme performance enables interactive debug and comprehensive test coverage of complex target software stacks. FTL-based processor models can be easily integrated into any existing virtual platform environment. Through VCML, FTL processor models seamlessly connect to standard software debuggers and development environments, such as GDB and Eclipse.

• SIM-V: An ultra-fast and parallelizable RISC-V instruction set simulator built on top of FTL, delivering over twice the simulation speed of traditional open-source tools like QEMU.

‘For customers preferring QEMU due to legacy or other reasons, we also offer corresponding modelling services, generating added value; for instance by integrating QEMU into SystemC TLM-2.0 simulations,’ says MachineWare Managing Director Jan Henrik Weinstock. ‘However, given the huge speed advantage of our SIM-V product, we see more and more clients interested in migrating to our novel solutions.’
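MachineWare's own libraries are proprietary, but the SystemC TLM-2.0 style they build on is an open standard. As a flavour of what loosely-timed virtual-platform modelling looks like, here is a minimal, self-contained sketch using only the standard SystemC / TLM libraries; the CPU and memory stubs are illustrative stand-ins, not MachineWare code.

```cpp
// Minimal loosely-timed TLM-2.0 example: a CPU stub writes one word to a
// memory model via a blocking transport call. Standard SystemC/TLM only;
// illustrative of the modelling style, not of MachineWare's VCML or FTL.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct Memory : sc_core::sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    uint8_t mem[256] = {};

    SC_CTOR(Memory) : socket("socket") {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        uint64_t addr = trans.get_address();
        unsigned len  = trans.get_data_length();
        if (addr + len > sizeof(mem)) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &mem[addr], len);
        else
            std::memcpy(&mem[addr], trans.get_data_ptr(), len);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // modelled access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct Cpu : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t word = 0x12345678;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&word));
        trans.set_data_length(sizeof(word));
        trans.set_streaming_width(sizeof(word));

        socket->b_transport(trans, delay);  // blocking call into the memory model
        sc_core::wait(delay);               // consume the accumulated latency

        if (trans.is_response_error())
            SC_REPORT_ERROR("cpu", "memory access failed");
        std::printf("write done at %s\n",
                    sc_core::sc_time_stamp().to_string().c_str());
        sc_core::sc_stop();
    }
};

int sc_main(int, char*[]) {
    Cpu    cpu("cpu");
    Memory mem("mem");
    cpu.socket.bind(mem.socket);  // initiator -> target binding
    sc_core::sc_start();
    return 0;
}
```

In a production virtual platform, the initiator stub would be a fast instruction-set simulator (the role played by FTL and SIM-V), and the target would come from a library of ready-made component models (the role played by VCML).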

Reach out to MachineWare for a product demo via contact@mwa.re. Special licence conditions are available for academic research and teaching purposes.

FURTHER INFORMATION: machineware.de

MachineWare founders Lukas Jünger (left) and Jan Henrik Weinstock (right)

SparkLink Alliance
Towards reliable high-quality wireless short-range connectivity for smart cyber-physical systems

Francesc Fons, Huawei Technologies

The wireless networking space is gradually expanding from traditional communications to emerging areas in a digital world dealing with more and more interactions between devices, people, and their environments. In many applications, wires are being reduced or even dropped altogether. With many connected devices and terminals demanding the transfer of large volumes of data at high speed, applications are migrating to stable, reliable wireless technologies with appropriate bandwidth, latency, capacity, connections and quality of service.

Some incredible examples come to mind. In the smart cockpit of the future, it is expected that wireless links will increasingly connect screens, microphones and cameras. In the office, devices such as your laptop and screen will be able to communicate with one another through high-precision wireless controls. Complex, multi-terminal, real-time interactions will allow enhanced control of third-party devices. In a home setting, ambient lighting will adapt to the position of the occupant and their phone, creating a better true-light experience. Curtains will automatically open to pave the way for the vacuum cleaner, while movable devices could move away to avoid collisions with small children. Machinery and equipment in smart factories will be controlled precisely by wireless signals, working in tandem with sensors to generate a simpler, more efficient and stable environment. These are all great ideas for the future, but they pose serious challenges for existing wireless technologies.

The SparkLink Alliance was born to make wireless connectivity opportunities a reality, taking them to the next level. SparkLink is an initiative which brings together communications, vehicle, module, and chip manufacturers, along with application providers. All participants are focused on delivering ultra-low latency, ultra-high speed, ultra-reliable and ultra-high concurrency wireless transmission capabilities, to provide better wireless connectivity for smart cockpits, smart homes, smart terminals and smart manufacturing.

The vision of SparkLink Alliance is to become a robust team of experts who join forces to work on research and development (R+D) at the different levels of the short-range wireless networking stack, from the physical substrate to the application layers. The aim is to build the most reliable wireless technology ever, beyond today’s state of the art. Currently comprising over 200 members from academia and industry, it intends to develop, from scratch, the new PHY and MAC layers that deliver the right set of features to enable and support all these new challenging, wireless-driven applications.

SparkLink is organized into eight different working groups, dedicated to the promotion of these areas in industry, as follows: requirements and standards • security • spectrum • test certification • smart automotive • smart home • smart device • smart manufacturing.

We invite all members of the HiPEAC community working on related technology fields to join this challenging initiative.

FURTHER INFORMATION: sparklink.org.cn/en

Book: 3D Interconnect Architectures for Heterogeneous Technologies

A new book, 3D Interconnect Architectures for Heterogeneous Technologies: Modeling and Optimization, by Jan Moritz Joseph of the Institute for Communication Technologies and Embedded Systems (ICE) at RWTH Aachen University, Germany, has been published. It describes the first comprehensive approach to the optimization of interconnect architectures in 3D systems on chips (SoCs), specifically addressing the challenges and opportunities arising from heterogeneous integration.

Readers learn about the physical implications of using heterogeneous 3D technologies for SoC integration, while also learning to maximize the 3D-technology gains through physical-effect-aware architecture design. The book provides a deep theoretical background covering all abstraction levels needed to research and architect tomorrow's 3D-integrated circuits, an extensive set of optimization methods (for power, performance, area and yield), as well as an open-source optimization and simulation framework for fast exploration of novel designs.

To access the book via Springer, follow the QR code on the image.

Remembering Michel Dubois and Martin Kersten

This summer, we were saddened to hear of the loss of two titans of the HiPEAC network: Michel Dubois and Martin Kersten.

Known for his seminal work on memory coherence and consistency models, Michel Dubois was a computer engineering professor at the University of Southern California. His research included papers on multiprocessor architecture, micro-architecture, reliability and scalable parallel algorithms. He was a fellow of the IEEE and of the ACM.

Martin Kersten was a fellow of the Centrum Wiskunde & Informatica (CWI), the Dutch national research centre for mathematics and computer science, emeritus professor of computer science at the University of Amsterdam, and a Knight of the Order of the Dutch Lion. Over a research career spanning more than 40 years, he broke new ground in data-intensive computer science, creating the open-source system MonetDB, which became the basis of the spinoff company MonetDB Solutions.

Our thoughts go out to the families and friends of Michel and Martin, who are sorely missed in the HiPEAC community.

Alexandra Kourfali and Dirk Stroobandt win ACM TRETS Best Paper Award

Alexandra Kourfali and Dirk Stroobandt, members of the Computer Systems Lab at Ghent University's Faculty of Engineering and Architecture, received the 2022 TRETS Best Paper Award for their paper 'In-Circuit Debugging with Dynamic Reconfiguration of FPGA Interconnects'.

TRETS (ACM Transactions on Reconfigurable Technology and Systems) is the most prestigious journal in the field of field-programmable gate array (FPGA) design and reconfigurable technology and systems. The award was presented to both authors in San Francisco in July 2022 at the Design Automation Conference (DAC), the oldest and largest conference in the field of electronic design automation.

On behalf of HiPEAC, congratulations to both!

Intel acquisition of Codeplay Software

In June, Intel announced that it had signed an agreement to acquire Codeplay Software, an Edinburgh-based company within the HiPEAC umbrella specializing in open-source system software for artificial intelligence (AI) processors. The aim of the acquisition was to further expand Intel’s oneAPI ecosystem, according to a blog post by the semiconductor company.

‘Codeplay is globally recognized for its expertise and leadership in SYCL, the Khronos open standard programming models used in oneAPI, and its significant contributions to the industry ranging from open ecosystem activities like SYCL and OpenCL to RISC-V, automotive software safety, and medical imaging,’ said Joe Curley, vice president and general manager of Intel software products and ecosystem, in the blogpost.

'Intel is focused on open standards, and Codeplay has led and contributed to multiple open standards including SYCL™, a royalty-free open standards programming model developed by the Khronos Group. In fact, our engineers have worked closely with engineers from Intel alongside other key organizations to bring the SYCL standard to the level of maturity it now has,' wrote Codeplay's chief executive officer and HiPEAC member Andrew Richards in response, adding that 'Intel [had] chosen SYCL to be at the heart of the oneAPI initiative'.

oneAPI is a cross-industry, open programming model that aims to provide a common developer experience across accelerator architectures, removing the need for separate code bases, programming languages, tools and workflows for each architecture.

Congratulations to Codeplay on behalf of the HiPEAC community!

Codeplay CEO Andrew Richards

FURTHER INFORMATION: 'Expanding our Open Standards Vision with Intel®': Blogpost on Codeplay website, 1 June 2022 bit.ly/Codeplay_Intel_PR_blog

MonetDB Solutions appoints new CEO and CTO

In July, MonetDB Solutions announced the appointment of Niels Nes as the company's new chief executive officer (CEO), succeeding Martin Kersten, who sadly passed away in the same month.

An experienced business leader, Niels Nes is a recognized pioneer in column-store database technology. He is a co-founder of MonetDB and has acted as a technical advisor to the company since its inception in 2013. For more than 27 years, he has been one of the driving forces behind the open-source database system MonetDB, the company's core product.

‘I look forward to moving into my new role in a team of top database engineers with whom I have worked for many years, and to further developing Martin’s ideas and legacy. We are a really solid team with complementary expertise which will continue growing in the future,’ commented Nes.

At the same time, MonetDB Solutions announced that Sjoerd Mullender will succeed Niels Nes as chief technology officer (CTO) at MonetDB. Mullender is also a co-founder of MonetDB Solutions, and has been the company's certified quality auditor (CQA) for years. For 20 years, Sjoerd has worked on the technology used in MonetDB and has made many improvements to increase its performance and reliability.

‘I am pleased about this new position within our robust team. I have exciting plans in store for the future of MonetDB,’ Mullender said.

MonetDB Solutions CEO Niels Nes (left) and CTO Sjoerd Mullender (right)

FURTHER INFORMATION: 'MonetDB Solutions appoints new CEO and CTO': Announcement on MonetDB website, 21 July 2022 bit.ly/MonetDB-CEO-CTO

HiPEAC Vision updates

The HiPEAC Vision editorial board is working on the 2023 edition of HiPEAC’s biennial roadmapping document, which will be launched at the 2023 HiPEAC conference in Toulouse.

In the meantime, a number of articles from the 2021 edition have been updated, as follows:

• ‘Cybersecurity must come to IT systems now’ and ‘Privacy: whether you’re aware of it or not, it does matter!’ – Olivier Zendra and Bart Coppens

• ‘The artificial programmer’ – Harm Munk and Tullio Vardanega

• ‘AI for a better society’ and ‘COVID-19 is more than a pandemic’ – Koen De Bosschere

The recommendations and introduction have also been updated.

In parallel, HiPEAC has teamed up with Belgian comic artist Arnulf to create a series of cartoons based on key points of the Vision. Look out for these in the updated articles.

FURTHER INFORMATION: hipeac.net/vision/#/latest

What’s on HiPEAC TV

Have you checked out the HiPEAC YouTube channel recently? You'll find talks, interviews and animated shorts on HiPEAC TV, across a range of horizontal and vertical application areas.

Featured video: HiPEAC Vision panel on sustainability
Why is overall energy use increasing even as computing devices are becoming more efficient? How many elements of the periodic table are used to make computing devices? How can computing contribute to sustainability goals?

Find out in this panel talk from Computing Systems Week Lyon: bit.ly/CSWAutumn21_HiPEACVision_sustainability_video

Dates for your diary

HiPEAC 2023

16-18 January 2023, Toulouse, France

Calls for workshop papers currently ongoing
Sponsorship and exhibition opportunities available
hipeac.net/2023/toulouse

PLDI 2023: ACM SIGPLAN Conference on Programming Language Design and Implementation
19-21 June 2023, Orlando, FL, USA
HiPEAC Paper Award conference
Paper submissions: 10 November 2022
pldi23.sigplan.org

DAC 2023: Design Automation Conference
9-13 July 2023, San Francisco, CA, United States
HiPEAC Paper Award conference
Abstract submission: 14 November 2022
dac.com

EFECS 2022: European Forum for Electronic Components and Systems
24-25 November 2022, Amsterdam, The Netherlands
Join HiPEAC at the industry exhibition at EFECS
efecs.eu

DATE 2023: Design, Automation and Test in Europe
17-19 April 2023, Antwerp, Belgium
Various calls for contributions ongoing
date-conference.com

FCCM 2023: IEEE Symposium on Field-Programmable Custom Computing Machines
Dates TBD, Los Angeles, CA, United States
HiPEAC Paper Award conference
Abstract submission: 9 January 2023
fccm.org


Non-cellular networks are shaking up the internet of things (IoT), providing unprecedented coverage at low power. HiPEAC caught up with Wirepas Chief Executive Teppo Hemiä to find out what non-cellular 5G is, what advantages it offers, and what kind of applications it can enable.

Very very good IoT

An interview with Teppo Hemiä, Wirepas

What are the main differences between the cellular and non-cellular approaches?

Cellular communication was developed for mobile devices, which today basically refers to smartphones. It’s excellent for that purpose!

Massive IoT applications are different. They typically involve small, low-bandwidth devices, and there can be massive numbers of them in buildings, factories, supply chains, etc. For this purpose, it's impractical to create indoor coverage by adding base stations outside – especially when the devices themselves can be connected in a mesh topology. That's what non-cellular communication is about.

What are the main advantages of non-cellular 5G? What is DECT-2020?

Devices form and optimize the network autonomously in a non-cellular 5G architecture. These are mesh networks in which every end device is an access point for other devices and extends the network. This results in better indoor coverage, consumes less power, and is more reliable and resilient due to redundant routing options. It also enables much denser networks than cellular.

Non-cellular 5G doesn’t need any base stations or mobile operators. It can be run on-premise. All that makes it more affordable than cellular and more secure from a privacy point of view. Anybody can deploy a non-cellular network in any place, at any time.

DECT-2020 is a standard made especially for massive IoT use – something that did not exist before. That standard is now part of the official set of worldwide 5G standards (ITU-R IMT-2020).

Wirepas technology enables asset tracking in logistics

Tampere, where Wirepas was born, is a telecommunications centre - Photo credit: Laura Vanzo / Visit Tampere

What are some of the main applications of Wirepas technology?

Wirepas technology has been proven in around a hundred diverse use cases: from tracking healthcare equipment in hospitals – freeing up staff to do their actual work, rather than trying to locate tools – to monitoring sea container conditions on large cargo ships, and everything in between.

We are proud of all our partners and their successes, but our whole team is exceptionally thrilled about any use case that has the ability to make the world in general, and industry in particular, more sustainable: for example, by reducing waste, optimizing energy use, reducing water use, or tracking assets so that there is no need for an excessive amount of backup resources.

What developments would you like to see in the IoT over the next few years?

The IoT will be around us just about everywhere, although it is not often visible. The connection cost will be halved in five years, enabling new markets for the IoT. I expect to see an enormous step in overall supply chain and asset visibility with condition data. It will be a massive change when industry has comprehensive data on its operations, supply, and assets: this will remove waste from processes and lower energy and water consumption. Transforma Insights has estimated that artificial intelligence (AI) and the IoT will save 5.3 PWh of energy by 2030. That represents more than 10 times the consumption of the whole of Finland and over 3% of global energy use.

Wirepas is ‘straight outta Tampere’. What makes Finland – and Nordic countries more generally – leaders in IoT technologies?

Tampere is one of the best places to develop deep tech and IoT technologies. The region has a long history of advanced signal processing and radio design. Tampere University made the world's first call via the Global System for Mobile Communications (GSM), which became the international standard. Nokia had an important mobile research and development (R+D) centre in Tampere, and many industrial and automation companies have their research here. The combination of digitalization, connectivity, industrial experience, and a strong university has made the Tampere region one of the most robust IoT ecosystems in the world.

Finnish ecosystems are also used to working collaboratively with their customers and competitors. Finland has a history of significant contributions to international wireless connectivity standards, and Wirepas is contributing its own share and will increase investment in this area.

FURTHER INFORMATION:

Wirepas website wirepas.com

Check out the video of Teppo’s keynote talk at Computing Systems Week Tampere: youtu.be/pbyNhWAq35E


'For modern computing systems to work efficiently, communication needs to be as fast as computation'

As a child, Sergi Abadal was fascinated by the bleeps of modems. Later he marvelled at how you could send and receive data over the internet without disrupting the phone line. Now a distinguished researcher at the Universitat Politècnica de Catalunya - Barcelona Tech, Sergi spoke to HiPEAC about tiny antennas, meeting the pope, and how on-chip wireless communication could contribute to faster, more efficient computing systems.

Sergi, we have a communication problem.
That's right. Communication has become increasingly important in computing systems: architects are turning to multiple cores and more processors to continue performance gains, but that means that data needs to be transferred between each of these cores and processors. In fact, for modern computing systems to work efficiently, communication needs to be as fast as computation. That's currently not the case, and communication is causing a bottleneck. Even optical wires – which transfer data at the speed of light – don't deliver the necessary bandwidth.

Another issue is that, with wired networks, once the network is fixed, you're stuck with a given topology. With architectures increasingly incorporating different kinds of processor – so-called 'heterogeneous architectures' – it's also important to be able to adapt. So speed and versatility are two major reasons for introducing wireless communications into networks on chips (NoCs).

And wireless could deliver that speed and versatility?

Exactly. Wireless could improve efficiency in two ways: first, with a single hop, one transmitter can broadcast to the whole chip – and not only to elements that are physically distant but also to multiple antennas at once. Second, through protocols, wireless communication can be very versatile: you can share channels between different antennas, or you can use wireless to control the rest of the network, for example.

Broadcasting has traditionally been avoided in NoC communication as it is hard to do with wired networks. This is where wireless communication could make a real difference: in use cases like synchronization, for example, it allows fast, efficient broadcasting to multiple receivers at once.

So does this mean we can start ripping all the wires out?
Not quite. Wireless would only ever be a complement to wired communications, which would be needed for short-range, local communications.

It all sounds great. What’s stopping us from implementing it right away?

Although a nice idea, this is still at an early stage of research, and there are some significant technical challenges, which we'll be investigating as part of my ERC Starting Grant project, WINC (see 'Further information', below). Four come to mind immediately: first, parts of the metal chip could interfere with the signal received by the antennas. Second, all the models we have of signal wave propagation are for open spaces, very different to the dense, closed environment of a chip, so we need to model how signals would propagate within a chip.

Third, there’s the issue of protocols. Imagine if the pope came to visit: there would be a protocol for how to behave. It works similarly with networks – there are protocols for when antennas can talk, when not to interfere. There’s been lots of research on this in cellular networks, but in a NoC scenario we would need to come up with protocols that are radically simplified, but also very fast and very opportunistic. The good thing is that, while in a cellular network lots of different environmental conditions need to be considered – no two rooms are the same, for example – this is exactly the opposite of the closed, stable environment of a chip, so you can simulate it once and then seize opportunities for future optimizations.
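To make the protocol challenge concrete, consider the simplest 'opportunistic' scheme imaginable: every antenna transmits in a given time slot with some probability. The toy simulation below (purely illustrative, with made-up parameters; not a WINC or WiPLASH design) shows how quickly naive random access wastes a shared channel, which is why radically simplified but smarter arbitration matters on chip.

```cpp
// Toy slotted contention model for a shared on-chip wireless channel.
// Illustrative only: parameters and policy are invented for this sketch.
#include <cstdio>
#include <random>

int main() {
    constexpr int cores = 16;      // antennas sharing one broadcast channel
    constexpr int slots = 100000;  // simulated time slots
    const double p = 1.0 / cores;  // per-slot transmit probability
    std::mt19937 rng(42);
    std::bernoulli_distribution tx(p);

    int ok = 0, collisions = 0, idle = 0;
    for (int s = 0; s < slots; ++s) {
        int senders = 0;
        for (int c = 0; c < cores; ++c) senders += tx(rng);
        if (senders == 1) ++ok;             // exactly one sender: broadcast succeeds
        else if (senders > 1) ++collisions; // overlap: slot wasted, retry later
        else ++idle;                        // nobody spoke: slot also wasted
    }
    std::printf("throughput %.3f  collision rate %.3f  idle %.3f\n",
                double(ok) / slots, double(collisions) / slots,
                double(idle) / slots);
    // With p = 1/N the success rate settles near 1/e (~0.37): random access
    // alone wastes most of the channel. The stable, known-in-advance chip
    // environment is what lets tailored protocols do much better.
    return 0;
}
```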


Finally, the shift to wireless networks would inevitably have an impact on computer architecture. Until now, things have been done in a certain way so as not to stress the architecture too much, with consequences either for programmability or performance.

WINC is investigating wireless on-chip communication

The challenge will be to work with the computer architecture community to develop architectures that can fully exploit the new possibilities wireless networks offer. This is something we are working on as part of the Horizon 2020 FET Open WiPLASH project (see ‘Further information’, below).

Are there any major drawbacks to wireless networks on chips?
Security is obviously a concern with wireless networks; although, as chips come packaged in hard-to-penetrate metal packaging, it is hard for attackers to get access. Obviously, malware could be inside the chip, and in that case one way of staying ahead of the attackers would be to use lightweight encryption that can be quickly changed.

Another issue is that, because you are radiating energy everywhere, broadcasting is obviously less efficient than wired transmission. However, if the architecture is adapted in such a way that you can broadcast when it makes sense, then the architecture as a whole may even save energy.

How would wireless help power fancy concepts like three-dimensional (3D) stacked chiplet-based architectures or accelerators? Or even quantum computing?

Wireless communication could be a real efficiency saver here – again, always in conjunction with wired communication. In the case of chiplets, currently communications need to go from chiplet to interposer or package substrate and back again to the next chiplet, which is inefficient. With wireless communications, you could create a ‘highway’ to bypass this and create greater bandwidth.

With accelerators, some functions which naturally lend themselves to broadcasting have been avoided in the past. With wireless communications, you could implement broadcasting to exploit the full range of the accelerator's capabilities, for example in dataflow functions.

As for quantum, communication is arguably more important than in classical computing. In quantum computing, qubits need to be physically next to each other to operate with them, and they cannot be copied as you do in classical computing, so communication to move those qubits becomes essential. The concept of multicore is also coming to quantum, with engineers linking chips through quantum coherent interconnects. Wireless allows you to send a signal to the other side of the system instantaneously, which is important in quantum as qubits degrade as time passes and can waste the entire computation if it’s not done fast enough.

Quantum also lends itself to wireless as the issue of noise – molecules vibrating as the temperature rises – essentially disappears in an environment cooled to 4K, just above absolute zero, which is required for quantum computers. However, will the electromagnetics interfere with the qubits? That’s something else we’re investigating in WINC.

FURTHER INFORMATION:

WINC: Wireless Networks within Next-Generation Computing Systems winc-project.eu

WiPLASH: Wireless Plasticity for Massive Heterogeneous Computer Architectures wiplash.eu

WINC has received funding from the European Research Council (ERC) under the Starting Grant 2021 programme, while WiPLASH has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 863337.

The WiPLASH consortium

Traditionally, data centres were required to process compute-hungry machine-learning algorithms. Funded by the European Union (EU), the VEDLIoT project seeks to change this, bringing high-performance deep learning to the edge. For this interview, VEDLIoT coordinator Jens Hagemeyer (Bielefeld University) spoke to Wisse Hettinga (The IoT Radar) about how the project is making the internet of things (IoT) smarter.

Teaching the IoT to learn with VEDLIoT

With over 40 billion connected devices predicted by 2025, the IoT’s expansion seems unstoppable – yet the usefulness of many of these devices could be improved by the addition of intelligence. ‘Traditionally, machine learning was designed to be executed in the cloud, rather than at the edge,’ explains Jens Hagemeyer (Bielefeld University). ‘In VEDLIoT, our focus is on enabling deep learning at the edge, where you have the data.’

However, this is not without its technical challenges. 'The reason why deep learning was carried out in the cloud was that it requires huge computing resources for the learning, or training, stage,' says Jens. Data centres, with powerful processors such as graphics processing units (GPUs), were the obvious choice for these workloads. Shifting towards the edge imposes strict limitations in terms of the energy budget and heat dissipation, he notes.

'Within VEDLIoT, we are developing our own modular hardware platform to support the whole range, from simple devices at the far edge, through the near edge, and even to the cloud. Our "microservers" – embedded computing modules – are integrated into these platforms,' explains Jens. 'There is also a focus on accelerators, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to improve energy efficiency.'

Although the project is working on custom hardware platforms, the idea is to integrate existing technologies rather than develop chips from scratch. 'Obviously, there are many accelerators already available on the market, and we conducted an extensive benchmarking campaign on many of these before integrating them into industry-supported form factors such as SMARC, COM-HPC and COM Express. We also participate in working groups of the different standardization bodies, like PICMG, to interact with them and integrate new ideas,' says Jens.

‘However, what is currently missing on the market is the ability to carry out heterogeneous processing of deep learning algorithms at the edge, such as GPUs with FPGAs or GPUs with ASICs. VEDLIoT allows you to combine accelerators in a coherent, modular architecture so that you can choose what is best for your application,’ he adds. Thanks to the collaboration with VEDLIoT partner Christmann, the aim is to make this customizable architecture commercially available.

These technologies could be used to power any number of applications, with three main areas – automotive, industrial IoT and smart homes – forming the focus of the project. 'The automotive use case involves distributing the processing for a pedestrian detection system across the car, the edge and the cloud, while ensuring functional safety, robustness, verifiability, certifiability and explainability,' says Jens.

The VEDLIoT consortium

'With regard to the industrial IoT use cases, we're working with VEDLIoT partner Siemens on a retrofit solution for predictive maintenance on a wide range of electric motors, which involves lots of sensors and processing while needing to ensure batteries can last for at least two years,' he adds. 'The other industrial IoT use case involves electrical arc detection in DC distribution cabinets. The main requirements for this are speed and accuracy: electrical arcs can cause fires and so must be detected as soon as possible, but while false negatives could result in safety hazards and destruction of equipment, false positives could result in entire production lines being shut down for no reason.'

Finally, the project is also working on a smart home use case, featuring the smart mirror developed at Bielefeld University (see box for more information).

VEDLIoT also launched an open call for application proposals, with funding available for the projects presented. 'We now have applications in agriculture and biomedicine, among other areas, and we'll be working to power these with VEDLIoT technologies by the project's end in 2023.'

FURTHER INFORMATION:

VEDLIoT website vedliot.eu

Check out Wisse Hettinga’s IoT Radar interview with Jens Hagemeyer on YouTube: bit.ly/IoT-Radar_VEDLIoT

Talks by the VEDLIoT consortium are available in the Computing Systems Week Tampere playlist on HiPEAC TV: bit.ly/CSWSpring22_HiPEACTV_playlist

HiPEAC interview with Jens Hagemeyer at the 2022 HiPEAC conference on HiPEAC TV: youtu.be/ACNSBQT6WsM

VEDLIoT (Very Efficient Deep Learning in IoT) has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 957197. The project is part of the Next Generation Internet of Things (NGIoT) initiative.

The smart mirror: Reflections on EU-funded research

Developed at Bielefeld University, the VEDLIoT smart mirror provides an interaction interface for the smart home: recognizing the owner, it delivers information and services – such as the weather forecast – tailored to the user. As such, it can be used as assistive technology to support disabled and / or older people to keep living independently, for example by providing reminders or suggestions. All data is processed locally, meaning that privacy is maintained.

The smart mirror is a near-edge incarnation of VEDLIoT technology, which can also be integrated into a chassis for data centres. The hardware has been enhanced over a series of European Union (EU)-funded projects:

• M2DC: Modular Microserver DataCentre developed a modular, highly efficient, cost-optimized server architecture for cloud and high-performance computing (HPC) applications, with a focus on heterogeneity.

• LEGaTO: Low Energy Toolset for Heterogeneous Computing extended this architecture for edge applications, with a focus on energy efficiency.

This meant that one platform could be continuously maintained, and helped pave the way for research to be transferred from universities and research centres to real customers. Indeed, a start-up company, EmbeDL, was born out of the LEGaTO project and is now a partner in VEDLIoT, where it works on accelerator integration.

Hall of mirrors: The smart mirror on display at Hannover Trade Fair in 2019 (left) and the 2022 HiPEAC conference (right)

What are the ingredients of the future internet of things (IoT)? In his keynote talk at HiPEAC 2022, Ovidiu Vermesan (SINTEF) set out a vision of intelligent, self-organizing systems necessary to accelerate the digital and green transitions, while outlining the technology advances we need to get there. Here are some key takeaways.

Edge distributed intelligence in the internet of autonomous systems

Characteristics of hyperconnected technologies

Networks are getting ever more complex, while there has been a shift from a central computing paradigm to decentralized and distributed computing. Early networks of wired devices were replaced by wireless sensor networks, then IoT platforms, and in future we will see autonomous, meshed, intelligent IoTs. The scale of networks – the number of devices communicating with one another – is growing immensely. Networks are also heterogeneous, comprising different hardware, components, platforms and networks.

These networks will have to adapt to constant changes. They will need to provide intelligent connectivity, while devices will need context awareness and autonomous capabilities. Networks as a whole will have to offer scalability, efficiency, adaptability, dependability and transparency.

The IoT and the metaverse

IoT technologies will accelerate the development of other technologies such as the metaverse, an immersive virtual world. The backbone of the metaverse is formed by neural interfaces, digital twins, networking, natural language processing, machine vision and blockchain. Artificial intelligence – underpinned by machine-learning algorithms, deep-learning architectures, and swarm intelligence – will enhance the immersive experience.

Internet of intelligent things

The internet of intelligent things comprises:

• Edge computing: processing and data storage close to the source of data, reducing the response time and communication bandwidth.

• Ubiquitous computing: computing anytime, anywhere, using any device, in any location and any format.

• Neuromorphic computing: the design of processing units that mimic biology, using analogue, digital, mixed-signal, and very large-scale integration (VLSI) systems, along with software for perception, motor control and sensory integration.

• Swarm intelligence (see overleaf): a branch of artificial intelligence (AI) addressing the collective behaviour of decentralized, self-organized systems, in which agents interact with one another and the environment.

Slide from Ovidiu Vermesan’s keynote talk at HiPEAC 2022

Swarm intelligence

Swarm intelligence addresses some of the challenges imposed by edge distributed intelligence. A swarm is essentially a system of brains which pools resources to create new forms of intelligence. One obvious example is a honey-bee colony, where bees can solve complex problems when they ‘think’ together as part of a system.
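As a toy illustration of this kind of collective problem-solving (an illustrative sketch, not an example taken from the talk), the short particle-swarm optimization below shows decentralized agents converging on a solution by sharing nothing more than their best finds:

```python
# Toy particle-swarm optimization: simple agents with no central controller
# converge on the optimum by combining their own memory with the swarm's
# best shared find. Illustrative only; constants are conventional defaults.
import random

def pso(objective, dim=2, agents=20, steps=100):
    """Minimize objective(x) with a basic PSO; returns the swarm's best point."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    pbest = [p[:] for p in pos]                       # each agent's best so far
    gbest = min(pbest, key=objective)                 # best find shared swarm-wide
    for _ in range(steps):
        for i in range(agents):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])   # own memory
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

best = pso(lambda x: sum(v * v for v in x))           # minimum is at the origin
print(best)                                           # close to [0, 0]
```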

To be dependable, and therefore trustable, swarm intelligence systems need the following:

• reliability
• safety
• security
• resilience
• connectability
• availability
• maintainability

Future application areas of swarm intelligence include the internet of energy and the internet of robotic things.

Research priorities

To make the internet of autonomous things a reality, the following research priorities should be considered:

• Collective intelligence in a system with no centralized control structure

• Orchestrators or trusted meta-level operating systems

• Swarm computing for self-organizing IoT systems

• New, more intuitive ways of programming intelligent orchestrators, for example through voice or drawings

• New open, decentralized, distributed IoT and edge architectures

• Methods for updating and upgrading IoT edge devices and providing parallel processing capabilities

• Operating systems and orchestration mechanisms to address the heterogeneity of devices and technologies

• Scalability, efficiency, dependability, trustworthiness, adaptability and transparency

• Advances in processing capabilities, bandwidth, resources, management and orchestration

IoT-powered healthcare improvements

The research project THIEM:COTTBUS5G, launched in January 2022, is investigating how 5G networks may improve healthcare in hospitals. For this purpose, a 5G campus network will be established on the grounds of the Carl Thiem Hospital Cottbus, covering both indoor and outdoor areas. Medical applications with two different requirement profiles will then be implemented and evaluated using this network.

The first group of applications aims at an internet of medical things (IoMT). Various medical resources will be connected to the 5G network to transfer (and ultimately unify) patient-related data from many different sources and to track the location of mobile resources. This requires the development of cost- and energy-efficient 5G-capable devices, possibly in combination with alternative wireless communication technologies.

While the challenges of IoMT are more related to the large number and heterogeneity of connected devices, other prospective applications require high data rates. These include, for example, video broadcasts of surgical procedures or other treatments, as well as the transmission of data from medical imaging examinations. Where the purpose of these transmissions is to interact with a remote expert, low latencies must also be ensured. Based on measurements and predictions of its coverage, the network will therefore be adapted dynamically to the required quality of service, for example by prioritizing data streams according to their criticality.

Partners involved in this project are the Brandenburg University of Technology Cottbus-Senftenberg as leader, the Fraunhofer Heinrich Hertz Institute, the Carl Thiem Hospital Cottbus and the city of Cottbus. The research is funded by the German Federal Ministry for Digital and Transport.


Can we take inspiration from biology to create a more energy-efficient internet of things (IoT)?

In his keynote talk at Computing Systems Week Lyon, David Atienza (EPFL) set out a compelling vision of how the nervous system can inspire better, more efficient computer architectures. For this interview, we talked to David about selective data collection, specialized accelerators and in-memory computing, among other things.

‘Brain architectures – or biological systems – use really fascinating concepts that we don’t use today in computer architectures. These concepts are the result of millions of years of evolution trying to optimize for power and energy use, which is precisely what we are trying to do right now in computer architectures, especially in the internet of things (IoT),’ explains David.

In fact, computer architects can learn a lot from the brain and nervous system, which perform significantly better than computers in terms of energy use, says David. He highlights three main areas. ‘First, selectively choosing which information is to be processed and why. Currently, IoT systems try to prepare for the worst-case scenario, so we collect all the data. This results in a lot of information being collected that will not be used. On top of the huge energy costs, there is not enough storage for all this information,’ he says.

To avoid this, David’s research focuses on the concept of ‘events’ and local filtering. ‘In an IoT – whether that comprises wearables on people or connected cars or buildings, for example – you have sensors continuously monitoring the environment,’ he notes. ‘This information is typically sent to the cloud where it is analysed and acted upon.’ Rather than sweeping up all the data, he proposes a more targeted approach.

To do this, David takes inspiration from the nervous system, which, as he points out, is like a complex IoT sensing and processing data. ‘You don’t have the brain – analogous to the cloud – processing all the data; you have local processing in every single part of your organism – and this filters a lot of data,’ David says. ‘The only events that are escalated to the brain are things that can’t be handled at the periphery.’ Translated to use cases in wearable applications, for example, this approach would involve only transmitting data that offers new information for review by a relevant healthcare professional. ‘This would have significant benefits in terms of reducing energy use on the analogue front end and the acquisition part from the IoT devices and computer architectures as well.’
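To make this concrete, here is a minimal ‘send-on-delta’ filter, a common way to implement event-based reporting (an illustrative sketch of the general principle, not code from David’s group): a node transmits a reading only when it differs from the last reported value by more than a threshold.

```python
# Send-on-delta event filter: routine readings are discarded locally and
# only meaningful changes are escalated, mimicking how the periphery of
# the nervous system filters before anything reaches the brain.

def event_filter(samples, threshold):
    """Yield (index, value) only for readings that change by more than threshold."""
    last_sent = None
    for i, value in enumerate(samples):
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield i, value            # an 'event': worth transmitting

heart_rate = [71, 71, 72, 71, 70, 71, 95, 96, 97, 72, 71]
events = list(event_filter(heart_rate, threshold=5))
print(events)   # only the first reading, the spike and the return to baseline are sent
```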

As well as data collection, biological systems can also provide inspiration on how to improve processing, says David. ‘When you think about processing, in general you process the data with general-purpose processors. Biological systems, on the other hand, particularly the peripheral nervous system, are highly specialized and adapted to the particular organ or signal you are sensing. So the idea here is to create heterogeneous, very specialized architectures, and selectively activate the computing blocks required at a particular moment.’ This, David says, could reduce energy consumption by up to 100 times, compared to current computer architectures.

The third way in which biological systems can inspire IoT computer architectures is the structure of the brain, says David. ‘In the brain, you don’t have separate sections for storing and computing the data. Everything is integrated into a three-dimensional structure where you have the data close to where you are going to process it,’ he points out. The computer architecture version of this, he suggests, is in-memory computing. ‘This is essentially the idea that you don’t have to change the information between the memory subsystem and the processing, but you actually process the data directly in the memory.’

David’s research takes inspiration from the nervous system
‘Our nervous system as a whole is like an IoT’

Event-based sampling is one way the nervous system can inspire a more efficient IoT

Open-source accelerator

Building on these concepts, David’s group at EPFL, in collaboration with the Digital Circuits and Systems Group at ETH Zürich, created an open-source accelerator which uses BitLine logic computing, known as BLADE (BitLine Accelerator for Devices at the Edge). BLADE undertakes simple operations – such as additions, multiplications, subtractions, shifts or bitwise operations – on the data where it is stored. ‘These simple operations can be combined to represent a very significant proportion of the machine learning workload,’ explains David. ‘Doing this means you can speed up processing and use less energy, as the data isn’t transferred to the processor.’

To promote interdisciplinary working and help create the most appropriate computer architectures, the research team decided to make BLADE open source. ‘We have added BLADE onto a multicore platform (called HEEP, Heterogeneous Energy-Efficient Platform) implementing the RISC-V open-source instruction set so that anyone can work with this accelerator,’ explains David. ‘We need the community to work together on new methods in machine learning, both creating algorithms and finding the right way to use these brain-inspired concepts to really accelerate these algorithms. If we can get hardware creators, computer architecture engineers and algorithm designers to work together using these principles, we can come up with some very well thought-out, co-designed frameworks that actually work much better than the state of the art.’

FURTHER INFORMATION:

Video of David Atienza’s keynote talk: ‘Brain-Inspired Edge AI Computing Architectures for Smart Objects and Wearables in the IoT Era’, Computing Systems Week Lyon, 27 October 2021 bit.ly/CSWAutumn21_keynote_video_DA

Video interview: ‘David Atienza on brain-inspired computer architectures’ bit.ly/CSWAutumn21_interview_DA

Simon, William Andrew; Qureshi, Yasir Mahmood; Levisse, Alexandre Sébastien Julien; Zapater Sancho, Marina; Atienza Alonso, David. ‘BLADE: A BitLine Accelerator for Devices at the Edge’. Proceedings of 29th Edition of the ACM Great Lakes Symposium on VLSI (GLSVLSI 2019), 9-11 May 2019 bit.ly/BLADE_EPFL


Modelling IoT-fog-cloud systems with DISSECT-CF-Fog

The number of interconnected internet of things (IoT) devices has been growing exponentially, due to technological advances and expanded user demand; in fact, Cisco has reported that the number of mobile devices across the world will reach 13.1 billion by 2023. The need to process and store the amount of data generated by these devices represents a significant burden on traditional clouds.

Fog computing complements cloud technology by bringing services closer to the user for latency- and / or time-sensitive IoT applications. However, operating and maintaining real IoT-fog-cloud infrastructures are very costly and time-consuming tasks. Hence simulation tools – which allow systems to be tested in the absence of a real environment – have become popular in the research community and in industry. These tools provide a cost-effective way to test concepts, try out new procedures, modify existing ones, and then change the real environment based on the conclusions drawn from the results measured.

The DISSECT-CF-Fog simulator is an open-source, event-driven simulator, which has two main parts: infrastructure and IoT modelling. For the physical layer of the infrastructure, detailed infrastructure-as-a-service (IaaS) simulation is offered, including physical and virtual machines, storage and datacentre network properties. Furthermore, both horizontal and vertical connections are represented among fog and cloud nodes. In the virtual layer, applications utilizing computing nodes are responsible for processing data.

DISSECT-CF-Fog is also capable of modelling smart devices, sensors and actuators. To be as realistic as possible, the mobility of smart devices is also considered. Since a node is typically responsible for serving IoT devices in its environment, and the coverage of computing nodes is limited, the movement of mobile devices may cause increased latency and unpredictable response time, which can degrade the quality of service. DISSECT-CF-Fog also offers a solution for large-scale experiments, especially if the number of active entities exceeds tens of thousands.

With this simulator we focus on the following questions and topics:

• Connectivity: When many IoT devices are present in a certain area (due to movement, for example), the increased load can easily cause bottleneck effects, so it is important which IoT device connects to which computing node. With different selection algorithms and handovers, the overlapping coverage of computing nodes can be leveraged (a simple selection policy is sketched after this list).

• Offloading: The data to be processed can be moved depending on the load on the given compute node. By using different trade-off algorithms, the system can be optimized for energy load, utilization cost or resource usage.

• Billing: Utilization cost is not only important from the provider's perspective, but also from that of the end user. DISSECT-CF-Fog takes into account both IoT and cloud-side costs and can measure energy consumption as well.

• Resource management: The proper allocation of IoT services to computational resources is particularly important for device mobility. Metrics such as latency, storage and free computing power can change continuously according to the actual load of the moving devices. Since a device can move to a position that is not covered by any node, the data can be processed locally on that device. Several proactive and reactive service migration strategies can be applied in the simulator.
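The sketch below illustrates the kind of connectivity policy mentioned above (an assumption for exposition in Python, not DISSECT-CF-Fog’s actual Java API): each device connects to the lowest-latency node that covers its position and has spare capacity, and falls back to local processing when no node qualifies.

```python
# Hypothetical node-selection policy of the sort such simulators model:
# among the nodes covering the device with enough free capacity, pick the
# one with the lowest latency; None means 'process locally on the device'.

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def select_node(device_pos, nodes, demand):
    """Pick the covering node with minimal latency and enough free capacity."""
    candidates = [
        n for n in nodes
        if distance(device_pos, n["pos"]) <= n["coverage"]
        and n["free_capacity"] >= demand
    ]
    if not candidates:
        return None                      # not covered: process locally
    return min(candidates, key=lambda n: n["latency_ms"])

nodes = [
    {"pos": (0, 0), "coverage": 10, "free_capacity": 2, "latency_ms": 5},
    {"pos": (8, 0), "coverage": 10, "free_capacity": 8, "latency_ms": 12},
]
print(select_node((6, 1), nodes, demand=4))   # the low-latency node is full, so the second wins
```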

FURTHER INFORMATION:

DISSECT-CF-Fog on GitHub github.com/sed-inf-u-szeged/DISSECT-CF-Fog

Paper: ‘Actuator behaviour modelling in IoT-Fog-Cloud simulation’ peerj.com/articles/cs-651/


Launched in 2007 in Patras, Greece, Think Silicon is a provider of ultralow-power graphics intellectual property (IP) for the wearables and internet of things (IoT) markets. HiPEAC caught up with Chief Scientific Officer Georgios Keramidas at the 2022 HiPEAC conference in Budapest to find out more about the company.

Low-power, high-quality graphics made in the EU

What’s the Think Silicon story so far?

Think Silicon has over 15 years’ experience designing applications in ultra-low-power 2D and 3D, video and display, as well as machine-accelerated applications for the microcontroller market. A major milestone was the acquisition of Think Silicon by Applied Materials two and a half years ago.

The target market of Think Silicon is ultra-low-power embedded devices that are equipped with a display – think of wearable devices, home appliance devices, home infotainment devices, for example. In those kinds of devices, users require high-quality graphics in their interactions with the applications. On the other hand, these kinds of devices are typically battery operated. At Think Silicon, we’ve worked with our customers for many years to optimize the graphics processing units (GPUs), the hardware and the software of these devices, and we’ve managed to extend the battery life from days to weeks.

What are some of the most exciting applications powered by Think Silicon IP?

Think Silicon GPUs are highly reconfigurable and serve a wide range of applications, from graphics rendering in IoT devices – for example wearable devices, fitness trackers, health trackers – and security and infotainment devices, all the way up to video overlaying in data centres. Our next generation of GPUs is going to include artificial intelligence (AI) functionality for edge AI devices.

In the last quarter of 2021, more than 15 million of the smart watches sold on the market contained our GPU technology and, according to our projections, our technology will be in 15 different smart watches in the next year.

How have European Union (EU)-funded projects like HiPEAC helped Think Silicon grow?

Think Silicon has been a member of the HiPEAC network for many years, and through HiPEAC we managed to build collaborations with companies such as Codeplay and Samsung. We worked with these companies as part of two European-funded projects, LPGPU and LPGPU2. Another strategic project for us was the TETRAMAX Innovation Action, which helped us better understand the European community in the area of customized and low-power computing.

What are the company’s plans for the future?

We will be expanding our product line targeting the extended graphics and AI (machine-learning) market. The NEOX™ architecture supports a new concept of a smart GPU architecture used in MCU-driven systems-on-chip (SoCs) with lightweight graphics and machine-learning frameworks. The NEOX™ IP-Series can be customized for graphics, machine learning, vision / video processing and general-purpose compute, and it helps offload workloads from the main central processing unit (CPU). The new offering supports the flexibility to use high-quality graphics and machine-learning features at the same time, if desired, and configure end-point devices, such as next-generation smart / fitness / health watches, augmented reality (AR) eyewear, video, set-top box (STB) entertainment, and smart displays in point-of-sale / point-of-interaction devices.

FURTHER INFORMATION:

Think Silicon website https://www.think-silicon.com/

Interview with Georgios Keramidas on HiPEAC TV bit.ly/HiPEAC22_Think_Silicon

Georgios Keramidas (centre) with colleagues Spyridon Garmpis and Ilias Vasileiou at the 2022 HiPEAC conference, which the company sponsored

It takes a particularly tough processor to withstand the space environment. In this article, Fabio Malatesta of HiPEAC sponsor CAES Gaisler introduces the GR765, a multicore microprocessor component which will bring GHz computing performance to space applications.

Introducing the Gaisler GR765 space processor

Space is a harsh environment, imposing tremendous challenges on space engineers. The launch is already an extreme condition; the rocket that places the spacecraft into orbit also submits it to extremely high vibrations. After reaching orbit, things don’t get any easier: temperatures can range from hundreds of degrees below freezing to hundreds of degrees above. Moreover, radiation naturally present in space can damage electronic devices. Radiation effects range from degradation in parametric performance to complete functional failure, leading to catastrophic impacts on the whole mission.

Cobham Gaisler (part of CAES, provider of solutions for critical missions) is a world leader in embedded computing systems for harsh environments, along with other diverse radiation-hardened products, with footprints in many parts of the solar system. The components developed at Gaisler have a history of high reliability and have been defined as mission-enabling technologies.

Gaisler is now developing the GR765, a new multicore microprocessor component. It seeks to bring GHz computing performance to space applications, which have traditionally lagged behind their on-Earth counterparts. This state-of-the-art, radiation-hardened processor is ideal for spacecraft on-board computers, payload processing, high-altitude avionics and other high-reliability aerospace applications.

The GR765 is the next-generation radiation-hardened, fault-tolerant octa-core system-on-chip (SoC), with a user-selectable boot option between LEON5FT SPARC V8 and NOEL-V FT RV64 RISC-V processor cores. These processors embed fault-tolerant features and can deal with radiation-induced errors (SEUs – see ‘Glossary’) using error correction codes (ECC) applied to the on-chip memories. The ECC encoding / decoding is done in parallel with normal operation, and a correction cycle is fully transparent to the software, which can continue to run uninterrupted.
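As a toy illustration of the ECC principle (the simplest textbook Hamming code, not the wider, hardware-implemented codes used on the GR765), the sketch below protects four data bits with three parity bits so that any single bit flip, such as one caused by an SEU, can be located and corrected:

```python
# Illustrative Hamming(7,4) single-error-correcting code; real FT processors
# use wider SECDED or stronger codes, in hardware, in parallel with execution.

def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit codeword with parity bits p1, p2, p3."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d1..d4 (LSB first)
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Return (corrected data nibble, 1-based position of flipped bit or 0)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)              # error position, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1                           # correct the single flip
    data = [c[2], c[4], c[5], c[6]]
    return sum(b << i for i, b in enumerate(data)), syndrome

word = hamming74_encode(0b1011)
word[4] ^= 1                                           # simulate an SEU bit flip
value, pos = hamming74_decode(word)
assert value == 0b1011                                 # data recovered transparently
```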

The GR765 supports DDR3 SDRAM and NAND Flash memory interfaces with advanced error detection and correction capabilities. Communication interfaces include a SpaceWire router, as well as SpaceFibre, Ethernet, MIL-STD-1553 and CAN-FD. The GR765 is currently under development; prototypes are expected to be available in early 2024.

Glossary

• EDAC: Error detection and correction: techniques that enable the reliable delivery of digital data over unreliable channels.

• FT: Fault tolerance is the property that enables a system to continue operating properly in the event of one or more faults within some of its components.

• SEU: A single-event upset (SEU) is a change of state caused by one single ionizing particle (ions, electrons, photons) striking a sensitive node in a microelectronic device, such as in a microprocessor, semiconductor memory, or power transistors. The error in device output caused by the strike is called an SEU or a soft error.

• SEFI: A single-event functional interrupt (SEFI) is a condition where the device stops normal functions and usually requires a power reset to resume normal operations.

GRLIB

The hardware modules implemented in the GR765 are largely based on GRLIB, an open-source library for the hardware description language VHDL. GRLIB includes reusable intellectual property (IP) cores designed for SoC development, centred around a common on-chip bus, along with memory controllers and many other peripherals. The library is vendor agnostic, provides support for many different computer-aided design (CAD) tools and target technologies, and is distributed under the GNU General Public License or commercial licences; see ‘Further information’ for the link.

FURTHER INFORMATION:

Cobham Gaisler website gaisler.com

CAES radiation-hardened solutions and high-reliability components bit.ly/CAES_radiation_reliability

Gaisler GRLIB library gaisler.com/getgrlib



8-core processor

The user can select at boot between 8x NOEL-V FT RISC-V cores and 8x LEON5FT SPARC V8 cores. The processors can run complex operating systems, providing performance comparable to an Arm Cortex-A53. They are also equipped with special FT features that allow them to handle radiation-induced errors without software interruption.

DDR3 Memory controller

Particularly suitable for space applications. Equipped with a 96-bit interface, it uses a strong error correction code to achieve double-device correction capability: this allows it to deliver correct data despite one full device failure and random SEU-induced errors on the other devices.

GRSCRUB/SoC bridge

You can never have enough computing power. Sometimes a companion field-programmable gate array (FPGA) is needed to accelerate complex tasks. The GR765 includes two cores to facilitate integration: the GRSCRUB is an external FPGA scrubber controller responsible for programming and monitoring the FPGA configuration memory, which is also vulnerable to SEUs. The SoC bridge is a simple synchronous interface that allows the companion FPGA to be used as a memory-mapped device for software use.

NAND Flash memory controller

Supports ONFI 4.0 and provides DMA transfers. The core implements a BCH EDAC with the capability of correcting 60 errors per chunk of 1024 bytes of data. The EDAC can be combined with a data randomizer to break any repetitive bit patterns, thereby increasing memory endurance. The memory controller also supports detection and recovery from SEFI.
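The sketch below shows how a data randomizer of this kind can work in principle (an assumption for illustration, not the GR765 implementation): data are XORed with a repeatable pseudo-random stream on the way into the flash and again on the way out, which breaks repetitive bit patterns without losing any information.

```python
# Minimal flash data-randomizer sketch: XOR with a repeatable LFSR stream
# scrambles what lands in the cells; XORing again on read restores the data.

def lfsr_stream(seed, nbytes):
    """Generate nbytes of pseudo-random data from a 16-bit Fibonacci LFSR."""
    state = seed & 0xFFFF
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def randomize(data, seed=0xACE1):
    """XOR with the LFSR stream; applying it twice returns the original data."""
    return bytes(d ^ r for d, r in zip(data, lfsr_stream(seed, len(data))))

page = b"\x00" * 16                    # worst-case repetitive pattern
scrambled = randomize(page)            # what actually lands in the flash cells
assert randomize(scrambled) == page    # read path recovers the data
```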

Embedded FPGA

Gaisler is studying the inclusion of a small embedded FPGA, allowing end users to implement their functions in programmable logic. This kind of device could be particularly useful in many missions, as a lot of instruments and sensors have non-standard interfaces, and it is often difficult to communicate with such equipment without using a lot of circuit board area.

SpaceWire router

Offers a configurable solution for high data-rate routing switch functionality for onboard satellite networking. The SpaceWire protocol defines a bi-directional, full-duplex, serial data communication link, and it supports highly fault-tolerant networks and systems.

HSSL controller

Implements SpaceFibre and Wizard Link controllers complemented by direct-memory access (DMA) engines. SpaceFibre is a multi-Gbits/s, on-board network technology for spaceflight applications, which runs over electrical or fibre-optic cables. It provides quality of service, fault detection, isolation and recovery capabilities. WizardLink is a minimal protocol with the objective of streaming raw data with occasional comma characters for synchronization.

TT/TSN Ethernet

While the GR765 already includes Gbit Ethernet controllers, Gaisler is evaluating the TTEthernet and TSN extensions. Time-Triggered Ethernet (TTEthernet) is a time-critical network for industrial and avionics applications, while Time-Sensitive Networking (TSN) is a set of IEEE 802 Ethernet substandards that enable deterministic real-time communication over Ethernet. Such extensions have become quite important in applications with strict deterministic requirements, such as those found in the automotive, industrial, aerospace and space domains.

The GR765 is a radiation-hardened octa-core system-on-chip. It is ideal for spacecraft on-board computers, payload processing and any high reliability aerospace application.
www.caes.com/gaisler

Perfect timing: MASTECS delivers multicore timing analysis for safety-critical systems

Take-up of new processor architectures has traditionally been slow in the automotive and avionics sectors, as manufacturers need assurances that vehicles’ systems will meet safety standards. However, with computing performance demands growing at a rapid pace – to enable the artificial-intelligence functions necessary for future autonomous vehicles, for example – more compute resources need to be exploited.

To complicate this, computations must complete within a specific time frame, usually milliseconds, so that the system can react appropriately. Until recently, there was a lack of commercial software tools capable of analysing a multicore system’s timing abilities to meet regulators’ requirements.

Funded by the European Union, the MASTECS project, coordinated by Francisco J. Cazorla at Barcelona Supercomputing Center (BSC), sought to provide the missing piece of the puzzle. HiPEAC caught up with Francisco to find out more.

While multicore processors have started gaining traction in the automotive and avionics domains, the safety aspect is still a relatively open problem. ‘It was not until recently that the automotive and avionics sectors started to use multicore processors to deliver performance gains, but there was no methodology that encompassed safety considerations,’ explains Francisco. ‘In a previous EU-funded project, PROXIMA, we had started exploring multicore solutions within a safety framework, but PROXIMA was an academic project. MASTECS was the bridge which allowed us to take these solutions to industry.’

Building on earlier projects, MASTECS therefore created the first certification-ready timing analysis solution – comprising a toolsuite and methodology – capable of handling the complexity of multicore processors in safety-critical systems. The methodology responds to the requirements of the AM(C) 20-193 standard used by original equipment manufacturers (OEMs) and tier-1 companies in the automotive and avionics sectors.

As a ‘Fast Track to Innovation’ project, MASTECS comprised an agile consortium of four partners: BSC (who contributed their multicore software microbenchmark technology, known as MμBT), Rapita Systems (who enhanced their Rapita Verification Suite during the project), Collins Aerospace (providing avionics applications to test the technology) and Marelli Europe (for testing in automotive applications).

Some of the members of the MASTECS project consortium at the kick-off meeting

‘During MASTECS we delivered three tools for software timing analysis, along with the associated methodology for their use,’ explains Francisco. ‘The project advanced these technologies from technology readiness level (TRL) 6 to 8 along the twin pillars of automation and certification.’ The work carried out during the project also earned the authors a best paper award at the Embedded Real-Time Systems (ERTS22) conference in 2022.

From cutting-edge research to industry-ready solutions

MASTECS builds on previous technology transfer activities between the partners. As reported in HiPEACinfo 54, BSC had already created a partnership and framework agreement to provide multicore timing analysis services to Rapita, whereby BSC’s MμBT microbenchmarking technology was integrated into Rapita’s analysis tool RapiTime. This work was recognized with a HiPEAC Technology Transfer Award in 2017. At the time, Francisco noted that the relationship with Rapita had been cultivated in part thanks to HiPEAC; prior to the network’s existence, it was harder to establish contacts with industry researchers active in Europe.

Technology transfer was also a key part of MASTECS itself: to allow the creation of a purely industrial stack, the microbenchmark suite developed at BSC was transferred to a spin-off company, Maspatechnologies. In creating the company, the academic team drew on the expertise of the technology transfer department at BSC.

‘The experience of founding Maspatechnologies was an eye opener. Creating a commercial product requires a change in mentality for academics who are used to publishing papers,’ explains Francisco. ‘Rather than just focusing on your own open-ended research idea, you have to adapt the work to the needs of your customers, and make sure you really respond to their requirements.’

Having mastered this change in outlook, the company is going from strength to strength. In terms of future plans, the aim is to consolidate the methodology, extend it to other processors (such as accelerators) and work on growing Maspatechnologies. ‘This is a real success story of EU-funded research delivering market value. The creation of Maspatechnologies was timed perfectly, as demand for the benchmarking suite is soaring,’ says Francisco. ‘Now we are working hard on meeting our customers’ needs and consolidating Maspatechnologies as the market leader in this area.’

FURTHER INFORMATION:

MASTECS website mastecs-project.eu

Maspatechnologies website maspatechnologies.com

‘MASTECS multicore timing analysis on an avionics vehicle management computer’, ERTS22 Best Paper Award in the ‘Embedded computing platforms and networked systems’ category mastecs-project.eu/media/news/erts22-best-paper-award

Interview with Francisco J. Cazorla in ‘The innovation factory: HiPEAC Technology Transfer Award winners 2017’, HiPEACinfo 54 p.21 bit.ly/HiPEACinfo54_TT_BSC_Rapita

Funded under the ‘Smart Anything Everywhere’ initiative, DigiFed helps low-digital industries in the European Union (EU) harness cyber-physical and embedded technologies for new market opportunities. The project is part of a wider push to implement a Europe-wide approach to digital transformation. Having previously led the FED4SAE project, Ana-Maria Gheorghe, Linda Ligios, Ramona Marfievici (all Digital Catapult) and Isabelle Dor (CEA-Leti) bring a wealth of experience to DigiFed. HiPEAC caught up with them to find out more.

Catapulting innovation: Building new innovation pathways with DigiFed


What does DigiFed aim to achieve?

DigiFed helps EU industries digitize their products and services to reach new markets enabled by cyber-physical and embedded systems. The project supports small and medium enterprises (SMEs) and mid-caps in the EU through cascade funding and by supplying technology expertise and innovation management. The objective is to support the companies selected to achieve sufficient technical maturity in specific projects to allow them access to the market, creating solutions to enable new use cases and services.

Aided by European funding, DigiFed promotes cross-border collaborations between companies, generating new opportunities beyond national and regional initiatives. Opportunities for collaboration are broad in terms of, for example, technologies, business objectives, market opportunities, and application domains. However, all these opportunities share a focus on supporting low-digital companies to progress towards digitalization and addressing more traditional application sectors where the digitalization gap is more evident.

How will it achieve its aims?

Part of the Smart Anything Everywhere initiative, the DigiFed consortium brings together 12 partners with expertise in digital technologies and innovation management from nine countries. DigiFed offers €3.9 million in ‘cascade funding’ to SMEs and mid-caps to enhance their assets through the inclusion of innovative digital technologies. In addition, the project offers technical expertise provided by research and technology organizations (RTOs), universities, accelerators and industrial entities, along with innovation management support.

To this end, DigiFed implements three interrelated innovation pathways:

• Application Experiment: a cascade funding pathway that selects and finances SMEs and mid-caps to develop CPS solutions based on existing or to-be-developed prototypes and products.

• Generic Experiment: this revolves around key-enabling-technology building blocks developed by an RTO through exploration of the international market and value-chain requirements. This inquiry is conducted with SMEs participating in activities such as workshops to implement advanced technology demonstrators with co-funding from regional authorities.

• Digital Challenge: an innovation initiative introduced by DigiFed, and designed by Digital Catapult, where a large enterprise (the ‘digital challenge owner’) acts as an early adopter seeking cutting-edge digital solutions to a defined challenge. The purpose is to highlight attractive market needs to be addressed through CPS and embedded systems to solve industry challenges.

How does DigiFed encourage a pan-European approach to innovation?

Central to the three innovation pathways proposed by DigiFed are cross-border collaborations between the different stakeholders. Digital Innovation Hub (DIH) members of the DigiFed consortium work with and enhance cooperation within established ecosystems, linking to other networks to create an EU-wide federation of DIHs. These DIHs offer sustainable cross-border services and partnerships between relevant European innovation stakeholders (e.g. RTOs, universities, accelerators, etc.) and actively promote open calls and matchmaking.

Through the three innovation pathways described above, DigiFed has so far supported 117 start-ups / SMEs / mid-caps from 23 EU and associated countries. The projects selected cover a wide variety of application domains, including industry 4.0, agritech, health, transportation, manufacturing, construction and water management.

What is the Digital Challenge, the new innovation tool introduced by DigiFed?

The Digital Challenge programme was designed to create partnerships between industry leaders and technology start-ups and mid-caps to accelerate the adoption of advanced technologies, while addressing key industrial challenges and opportunities. A new innovation pathway, it leverages DigiFed’s international networks to identify industry challenges from different countries and connect their owners to the skills and expertise required to solve problems aligned with key business objectives.

Corporate organizations are invited to participate in this open innovation initiative and support European companies with match funding and access to training, sites, etc. to develop a solution for a specific digital or technological challenge.

This innovation pathway allows participants to minimize risks, offers them an alternative way of procuring new suppliers from a broader innovation ecosystem, and accelerates the company’s digital transformation journey. It unlocks new opportunities, helps develop new skills, and builds new partnerships and products. Innovators can also benefit from access to the network of DIHs, along with technical and business support, webinars and networking opportunities.

FURTHER INFORMATION: digifed.org

DigiFed has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 872088.

Water, water, everywhere: How DigiFed tackles real-world challenges

One of the three Digital Challenges DigiFed is tackling focuses on building next-generation performance solutions for predictive operation of water systems. While 72% of the earth’s surface is covered with water, 97% of this water is salty or brackish. The distribution of the remaining 3% (freshwater) is very uneven. As a result, many regions in the world are water-stressed areas, while the potential of unconventional water resources, such as seawater / brackish water desalination, remains relatively untapped.

Desalination is a separation process used to reduce the dissolved salt content of saline water to a usable level. Reverse osmosis, the most advanced and energy-efficient system, is the leading desalination technology globally. The key element of the process is a semipermeable membrane that, under applied pressure, allows water to pass through while rejecting salts. A stream of permeate water, i.e. water for consumption (once remineralized), is produced.

Currently, in the water-management industry, the monitoring of the performance of reverse osmosis elements is carried out through manual, laborious and time-consuming procedures. This is not practical, especially for large desalination plants comprising thousands of elements.

This is why the Spanish company ACCIONA, a desalination market leader, came to DigiFed, seeking innovative digital solutions that could continuously monitor the performance of each membrane element in a multi-vessel array. ACCIONA needed an end-to-end digital solution for measuring, reporting, and visualizing the properties of each membrane. Thanks to the immediate detection of malfunctioning elements, this would facilitate maintenance and result in fewer plant shutdowns.

DigiFed paired ACCIONA with Instrumentation Technologies from Slovenia to develop a bespoke solution for ACCIONA’s business needs. The result is SWICCSY, an end-to-end system that exploits new ways to measure various physical quantities to assess the performance of the reverse osmosis membranes in real time and identify problematic membranes inside the vessel, allowing the development of predictive operational strategies.

SWICCSY will help ACCIONA monitor the health of critical infrastructure such as desalination facilities. It will support plant operators to adopt the optimal strategy in terms of shutdowns, rotation or replacement of membranes, thereby extending the lifetime of the plants. This is a unique opportunity to monitor the health of a desalination plant using an innovative technology and methodology.

Ana Jiménez Banzo, who heads the innovation management department at ACCIONA, commented: ‘DigiFed’s Digital Challenge allowed ACCIONA to find a cutting-edge technological solution in record time. It made it possible to identify best-in-class technology developers and to achieve a strategic alliance with Instrumentation Technologies to develop SWICCSY, a game-changing solution for the desalination market.’


In the latest in our series on pioneering European tech companies, Aitronik Chief Executive Roberto Mati explains how the company draws on the latest research to create bespoke robot vehicles which precisely fit customer needs.

How to robotize any vehicle, with Aitronik

COMPANY: Aitronik

MAIN BUSINESS: vehicle robotization (ground, air, sea)

LOCATION: Pisa, Italy

WEBSITE: aitronik.com

Aitronik was born in 2017 to offer customized vehicle robotization services to manufacturers of ground, aerial, and marine / submarine vehicles. Located in Pisa, the cradle of robotics in Italy, the company delivers full-range, rapidly scalable solutions to design autonomous vehicles.

At Aitronik, engineers robotize every kind of vehicle. Their background has solid roots in academic research and industrial-grade autonomous ground, aerial, and marine applications, ranging from participation in the 2007 DARPA Urban Challenge (the event that paved the way to self-driving cars) to the more recent robotization of forklifts, cleaning machines, lawnmowers, tractors, aerial drones and underwater vehicles, for their respective industrial manufacturers.

In 2011, Aitronik’s founders decided to develop a novel, platform-independent software architecture to serve as the enabling core technology to robotize almost any kind of vehicle. Today, the autonomous guidance, navigation, control, and perception modules can be integrated and customized on multiple embedded devices for robotic applications, from the complete range of embedded platforms provided by NVIDIA, to PC104 single-board computers. Interfaces for rapid software- / hardware-in-the-loop simulations are integrated by design and, where required by the application, ROS/ROS2 middleware can easily be integrated.

A robust sensor fusion module can analyse measurements from lidars, radars, cameras, inertial measurement units (IMUs), GPS, and many more sensors adopted on autonomous robots. Sensor fusion integrates up-to-date, robust and efficient algorithms from the scientific literature, and specific customizations that optimize efficiency, performance, and resource requirements.
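As a minimal, self-contained illustration of the sensor-fusion principle (a textbook building block, not Aitronik’s proprietary stack), the sketch below blends noisy position fixes with a velocity input using a one-dimensional Kalman filter:

```python
# Minimal 1-D Kalman filter: noisy position fixes (e.g. GPS) are blended
# with a velocity input (e.g. from an IMU) to get a smoother state estimate.
# Noise variances q and r are illustrative tuning assumptions.

def kalman_step(x, p, z, u, q=0.01, r=4.0, dt=0.1):
    """One predict/update cycle.
    x, p : position estimate and its variance
    z    : noisy position measurement (e.g. GPS fix)
    u    : velocity input (e.g. integrated from the IMU)
    """
    # Predict: propagate the state with the motion model
    x_pred = x + u * dt
    p_pred = p + q
    # Update: weigh prediction against measurement by their uncertainties
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.11, 0.25, 0.27, 0.42, 0.55]:   # simulated noisy position fixes
    x, p = kalman_step(x, p, z, u=1.0)     # assume ~1 m/s forward speed
print(round(x, 3), round(p, 3))            # estimate tracks the motion smoothly
```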

Working with vehicle manufacturers in different applications, Aitronik engineers start from understanding the specific industrial scenario. Each project is tailored to the specific application, whether it is an autonomous towing tractor for airports, an autonomous underwater vehicle for surveillance, or a fleet of reach stackers for container handling at ports.

Together with the client, they select the most appropriate project management tools (traditional or agile), then move on to concept sketching. In a continuous co-design approach, roboticists at Aitronik select sensors and electronic boards to be integrated on board the vehicles, and define the optimal architecture with both the client’s research and development (R&D) and industrial design departments. Integration of the entire software stack for vehicle autonomy occurs at a later stage, along with the development of new software and custom functions.

In addition, Aitronik engineers collaborate with national and international research centres, and help raise the technology level of robotics research by participating in European-funded projects. Under the H2020 ECSEL Comp4Drones project, for example, Aitronik is delivering an autonomous rover that cooperates with an aerial drone on farms to quickly identify plant diseases, save water, and improve the quality of crops.

While you may never see its logo on your robotic vehicle, Aitronik technology may well be powering it.


Industry focus

With modern vehicles processing massive amounts of data in real time, automotive manufacturers are looking to novel processor combinations. In this article, Daniel Madroñal, Raquel Lazcano, Francesca Palumbo (all University of Sassari), Tiziana Fanni and Katiuscia Zedda (both Abinsula) describe how they teamed up to create an integrated workflow for systems based on field-programmable gate arrays (FPGAs).

FPGA-based SoC customization for the automotive market

Today, sensor-equipped objects exchanging information have become pervasive. These objects are used to build systems capable of processing data closer to the source, often in a distributed manner, while actionable data are collected elsewhere.

As an example, modern vehicles are fitted with an array of sensors and actuators, along with ever-increasing advanced computational capabilities. They are connected systems that constantly exchange data about the local environment, traffic situation, emergency alerts and weather conditions. Consequently, they need to be capable of generating and processing a large variety of data in real time.

System-design engineers are starting to consider FPGAs as a fitting solution in this context, thanks to the flexibility and customization that these reconfigurable processors offer. Many system-on-chips (SoCs) also couple FPGAs with external processors. This allows them to offer the specialization provided by FPGAs while maintaining the simpler programmability of more general-purpose processors, and also allows an operating system (OS) to be run.

As part of a long-established collaboration, the University of Sassari (UNISS) and the Italian software company Abinsula are investigating the adoption of FPGA-based SoCs featuring a lightweight embedded OS for application in the automotive domain. To simplify the design and management of FPGA-based SoCs – which is not straightforward for many developers – UNISS provides the Multi-Dataflow Composer (MDC) tool. Originally developed with the University of Cagliari, MDC automatically generates hardware accelerators and the co-processor infrastructure to connect these to the embedded cores, together with the scripts to automate system generation and the application programming interfaces (APIs) to manage the accelerator from the software application.

For their part, Abinsula provides Ability (Abinsula Linux for Ubiquity), a meta-Linux distribution based on the open-source Yocto Project. Ability’s modular structure facilitates the inclusion of new features for specific projects. In addition to layers inherited from the open-source community, Abinsula has developed specific layers for Ability that make it particularly suitable for the automotive domain.

The current collaboration of UNISS and Abinsula aims at providing an integrated workflow for the generation of FPGA-based systems, including hardware accelerators and a lightweight embedded OS customized for the automotive sector, aligned with the recommendations of the ISO 26262 standard. The goal is to provide methods and tools for the design, development and management of such systems.

To this end, we are working on a safety-relevant use case (a virtual rear-view mirror scenario), in which multiple cooperative cameras capture the context outside the vehicle and the system must autonomously react according to the stimuli. On-board processing and information fusion increase the vehicle’s capabilities to adjust the set-up autonomously and to cope with unpredictable and highly variable conditions. Self-adaptive capabilities will be exploited against possible failure risks due to light conditions (e.g. light spots vs. hard shadows) and / or hardware problems (e.g. availability loss).

FURTHER INFORMATION:

Multi-Dataflow Composer on GitHub mdc-suite.github.io

Ability by Abinsula abinsula.com/ability

Inside Industry Association (formerly Artemis) whitepaper: From the IoT to system of systems, 2020 (pdf) bit.ly/Artemis_IoT_SoS

Intel FPGAs for automotive applications intel.ly/3QKQEWd


Innovation Europe

In this edition of our regular section on the latest EU-funded advanced computing research, we look at a project which harnesses artificial intelligence (AI) to tailor treatments for colorectal cancer.
AI vs. colorectal cancer: REVERT’s data-based therapy

One of the most exciting possibilities of artificial intelligence (AI) is its potential to power personalized medicine. With the right datasets, tailored treatments could be found for diseases such as metastatic colorectal cancer, the formal name for cancer of the bowel which has spread to other parts of the body.

Funded by the European Union (EU), the REVERT project aims to help researchers better understand the physiological changes caused by colorectal cancer in patients responding either well or poorly to different therapies, in order to identify the best therapy for each patient. REVERT is building an AI-based decision-support system using real-world data from hospitals across the EU.

Standardized biobank samples with related structured data and clinical databases (including known and new biomarkers) are being used to create the REVERT DataBase. In turn, this database is being used to build the sophisticated, AI-based framework capable of analysing the impact of different treatments on patient survival and quality of life.

The project is also contributing to a network comprising research centres, small / medium enterprises (SMEs), clinical centres and biobanks focused on research and development in the field of AI health for the development of personalized medicine in Europe.

HiPEAC member Horacio Pérez-Sánchez leads the Structural Bioinformatics and High Performance Computing research group at the Universidad Católica de Murcia. The group’s role in the project is to coordinate the creation of a platform that allows the stratification of patients into different groups according to their individual response to treatment with drugs which have already been approved.

‘Our participation will allow us to build on our proven expertise in the application of high-performance computing to pharmacological applications, paving the way to the personalized medicine of the future,’ Horacio commented.

PROJECT NAME: taRgeted thErapy for adVanced colorEctal canceR paTients (REVERT)

START/END DATE: 01/01/2020 – 31/12/2023

KEY THEMES: artificial intelligence (AI), healthcare, personalized medicine

PARTNERS:

Universidad Católica de Murcia (Spain)

Scientific Institute for Research, Hospitalization and Health Care (IRCCS) San Raffaele Pisana (Italy)

ProMIS – Mattone Internazionale Salute (Italy)

Biovariance GmbH (Germany)

IMAGO-MOL (Romania)

Università degli Studi di Roma “Tor Vergata” (Italy)

Luxembourg Institute of Health (Luxembourg)

Malmö University (Sweden)

Genxpro GMBH (Germany)

Bundesanstalt für Materialforschung und -prüfung (Germany)

Umeå University (Sweden)

Institutul Regional de Oncologie Iasi (Romania)

Olomedia SRL (Italy)

Servicio Murciano de Salud (Spain)

BUDGET: €5,000,000 (approx.)

revert-project.eu @revert_eu Revert Project

The REVERT consortium

In July, Federico Iori (Barcelona Supercomputing Center-BSC) took over the role of HiPEAC Jobs manager from his predecessor, Xavier Salazar. HiPEAC caught up with Federico to find out about his career so far and his plans for HiPEAC Jobs.

Career talk: Federico Iori

Welcome to HiPEAC! Can you tell us a bit about yourself?

I’m a computational physicist from Modena, Italy. I’m an expert in material modelling using ‘first-principle’ methods – I got my PhD in theoretical physics at the Università di Modena e Reggio Emilia in 2008 and then did postdocs in different research labs in France, Italy and Spain for 10 years, an amazing period! In 2018 I joined Air Liquide R&D in Paris as a computational material researcher. Following a move to Barcelona in 2021, I was hired at BSC as a European project innovation officer, a new challenge for my personal and professional growth and a way to explore new fields beyond academic research.

How do you see HiPEAC Jobs evolving?

With the experience gained since I was a recent PhD graduate, I can appreciate the utility of having a free jobs portal and careers advice. I believe that HiPEAC could catalyse and bring together researchers’ experiences to support young researchers looking for a career in public or private research, with senior researchers offering guidance to those starting out.

This could be done through formal approaches like the website, where information is organized in a more structured way, and more informal ways. From my participation at the HiPEAC conference and ACACES summer school, I can confirm that informal, in-person exchanges are the best way to consolidate relations between students and HiPEAC. This allows early career researchers not only to network among themselves and with more senior researchers, but also to familiarize themselves with the complexity of the academic and industrial world, European calls for projects, and different career opportunities.

More practically, HiPEAC Jobs aims to organize the info-obesity related to computer science job offers on the web by providing a central place for announcements. In parallel, we are reinforcing contact with computer science companies and institutions, working together to promote the opportunities widely.

What are your plans for the next few months?

I think the HiPEAC Jobs activities are well tested and they should continue in the future. Students seem to like the current format and companies show interest in posting job announcements and participating in careers events.

We should also systematically take into account questions and comments arising from young researchers during careers events. Based on such feedback, I would like to implement a section on the HiPEAC Jobs portal with practical information about research funding opportunities, careers tips, and eventually a discussion forum. Since the research world is becoming so complex, we could provide information that people in science normally have to learn on the fly, giving early career researchers the tools and knowledge to overcome barriers and avoid bottlenecks.

Do you have any career tips for early career researchers?

As a researcher who has moved between different sectors, I would just suggest a few simple ideas:

• A PhD helps you grow both as a scientist and as a person (hard and soft skills too!).

• You have all the necessary tools and qualities for your career.

• Be curious but don’t get stressed.

• Look for opportunities that really match your preferences.

• High-salary jobs might be cool, but remember to ask yourself every day if you are really happy. Really follow your inspirations, no matter what the outside world says!

HiPEAC Jobs session at ACACES 2022

Digitalization will only become a reality in Europe if there are enough skilled engineers to develop the enabling technologies. To help ensure a steady supply of qualified individuals, the European Union invests in initiatives to train early career researchers, such as Marie Skłodowska-Curie Innovative Training Networks (ITN). Focusing on the burgeoning area of wearables, A-WEAR is one such ITN, as Lucie Klus (Tampere University-TAU, Universitat Jaume I-UJI) explains.

Prêt-à-porter: How A-WEAR is training the next generation of wearables engineers

What are the main aims of the A-WEAR ITN? How did the project come about?

A-WEAR aims to create new knowledge in the field of wearable computing and provide high-quality training to 15 early stage researchers over three years, so that they will become valuable assets to European industry and academia. Support is given by five European universities and numerous industrial partners. The project also makes its research outputs – including scientific papers, videos and presentations – publicly available to share the knowledge acquired and boost European competitiveness in the wearables sector.

The idea for the project was born during a research stay by our coordinator, Simona Lohan, at the Universitat Jaume I. Building on the successful experience of the Tampere University and UJI research teams with the ongoing Geotec European Joint Doctorate, the plan was further refined before other partners were invited based on their expertise to form a complementary, well-balanced consortium. The project started in January 2019, and in the spring of that year the 15 early stage researchers were chosen from over 300 applicants from all around the world.

Why did A-WEAR decide to focus on wearables? How is this market expected to grow over the coming years?

The growing interest in wearable technologies goes hand in hand with the innovations and new options offered by 5G, a new generation of wireless networks. Individual technologies are becoming interconnected and data transfer rates are increasing, while delays in communication are being drastically reduced. As such, the devices that we wear deliver smoother operation and increased functionality in crucial areas such as e-health, offering benefits to health, safety and wellbeing. Thanks to their small size, low power consumption and ease of use, wearables are an attractive alternative to bulkier devices.

There are still many open research questions surrounding wearables, which is why A-WEAR dedicated its research to them. At the same time, the number of connected wearables is expected to surpass one billion devices in 2022, over three times more than in 2016. We expect that number to grow exponentially with the addition of smart patches and smart implants to the current market in the near future.

What are some of the main challenges addressed in the A-WEAR network?

Challenges include improving the localization capabilities of wearable and internet-of-things (IoT) devices, e-health and industrial applications, digitalization, outsourcing computations, introducing approximate computing, and implementing artificial intelligence (AI). Across all these challenges we also focus on privacy, safety and security.

These topics are usually treated as separate research subjects, but in A-WEAR we bring them all together. As a diverse group of researchers with different backgrounds, we also try to exploit our cross-disciplinary knowledge. Working with wearables and their applications is quite specific and brings particular restrictions, as wearables are usually small devices with significant constraints on battery life. Indeed, implementing functionality for a wearable device requires much more multi-domain knowledge than for, say, a smartphone.

Can you give us some examples of A-WEAR research?

My research focuses on enabling efficient communication between a wearable and a device such as a smartphone, lowering energy requirements while increasing efficiency. As an example, I’ve been looking at ways to compress data to reduce energy consumption without degrading the data itself. For this purpose, we proposed a time-series compression scheme that outperforms its predecessors.
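To make this concrete, below is a minimal sketch of lossless time-series compression of the kind a wearable might apply before transmitting sensor data: delta encoding followed by variable-length (varint) byte packing, so that slowly varying signals shrink to roughly one byte per sample. This is purely illustrative and is not the scheme we proposed; the function names and the heart-rate trace are invented for the example.

```python
# Illustrative lossless compression for wearable sensor streams:
# delta encoding + varint packing. Not the A-WEAR scheme; a toy stand-in.

def zigzag(n: int) -> int:
    """Map signed deltas to unsigned ints so small magnitudes stay small."""
    return (n << 1) ^ (n >> 63)

def encode(samples: list[int]) -> bytes:
    out = bytearray()
    prev = 0
    for s in samples:
        v = zigzag(s - prev)              # small deltas -> short varints
        prev = s
        while v >= 0x80:                  # 7 data bits per byte, MSB = "more"
            out.append((v & 0x7F) | 0x80)
            v >>= 7
        out.append(v)
    return bytes(out)

# A slowly varying heart-rate trace packs into ~1 byte per sample:
print(len(encode([72, 72, 73, 75, 74, 74, 73])))  # 8 bytes vs 28 as 4-byte ints
```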


In more recent work, I’ve been focusing on device localization. Together with other A-WEAR researchers, I proposed several machine-learning methods that increase user location accuracy and significantly reduce the time required for positioning. At the same time, a colleague has been developing a lightweight, privacy-preserving authentication protocol that doesn’t require a centralized entity, while ensuring anonymity.
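One widely used family of machine-learning localization methods is fingerprinting, where a device’s position is estimated by comparing a live received-signal-strength (RSS) reading against a database of reference measurements. The sketch below shows a simple k-nearest-neighbours fingerprinting estimator; it illustrates the general idea only, not our specific algorithms, and all names and data in it are made up.

```python
# Toy RSS fingerprinting with k-nearest neighbours: estimate a position as
# the mean of the k reference points whose fingerprints best match the
# live measurement. Illustrative only; data below is invented.
import numpy as np

def knn_locate(fingerprints: np.ndarray, positions: np.ndarray,
               measurement: np.ndarray, k: int = 3) -> np.ndarray:
    dists = np.linalg.norm(fingerprints - measurement, axis=1)  # match quality
    nearest = np.argsort(dists)[:k]                             # k best matches
    return positions[nearest].mean(axis=0)                      # average (x, y)

# Four reference points: RSS from three access points (dBm) and known (x, y).
fp  = np.array([[-40, -70, -80], [-55, -50, -75], [-70, -60, -50], [-60, -45, -65]])
pos = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [2.5, 2.5]])
print(knn_locate(fp, pos, np.array([-52, -48, -70])))           # estimated (x, y)
```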

Other A-WEAR contributions include:

• a method for unsupervised, non-line-of-sight detection in ultra-wideband

• evaluation of computation offloading for industrial wearables

• energy-efficient multicast communication

• the creation of multiple, openly available databases using a wide range of measurements

Our team also adapted swiftly to the COVID-19 outbreak. Some researchers refocused their work on areas such as wearable-based disease tracing, contact tracing and disease identification (COVID-19 / influenza), quickly offering important tools in the fight against the pandemic.

What skills does A-WEAR help early career researchers develop?

Thanks to close collaboration between European universities, the network has offered us a range of courses in both technical and soft skills. One example is a very intense course on the IoT led by Simona Lohan, where we learned about numerous architectures and schemes. We later produced a series of short videos about the IoT and wearable devices for our YouTube channel; see ‘Further information’, below.

There have also been a number of project events where we attended lectures by experts in the field as well as giving presentations ourselves. These activities have provided us with opportunities to obtain top-tier technical expertise and greatly improve our presentation, language, research ethics, and entrepreneurial skills.

Why is cross-border collaboration important? How does A-WEAR help early career researchers to defend their ideas in an international context?

Scientific progress today is a global phenomenon, rather than the product of a single researcher or even country. Scientific papers are published not only by universities, but also by multinational industry giants such as Samsung, Google and Ericsson. A modern researcher has to be able to interact within both academic and industrial sectors in an international setting.

The A-WEAR network is distributed across the whole of Europe, with five main university beneficiaries in Finland, Spain, Italy, the Czech Republic and Romania, along with 13 industrial partners. Each participating early stage researcher has:

• been led by a team of experienced scientists and industry experts

• completed an industrial secondment

• worked towards a double doctorate, pursued at two universities simultaneously over three years

If that doesn’t make you a high-tier researcher, nothing will!

FURTHER INFORMATION:

A-WEAR website projects.tuni.fi/a-wear

A-WEAR on GitHub github.com/a-wear

A-WEAR YouTube channel bit.ly/A-WEAR_YouTube

Marie Skłodowska-Curie Actions marie-sklodowska-curie-actions.ec.europa.eu

A-WEAR has received funding from the European Union’s Horizon 2020 (H2020) Marie Skłodowska-Curie Innovative Training Networks call H2020-MSCA-ITN-2018, under grant agreement no. 813278.


A HiPEAC internship is a great way to build your technical skills, gain valuable contacts and find out what it’s like to work in industry. By choosing a small or medium enterprise (SME) like Campera Electronic Systems, which specializes in the development of high-performance FPGA-based embedded systems, interns have an opportunity to work on multiple aspects of a project, as Theodoros Tsiakiris explains.

HiPEAC internships: Your career starts here

Imaging analysis for satellite propulsion at Campera

NAME: Theodoros Tsiakiris

RESEARCH CENTER: Aristotle University of Thessaloniki

HOST COMPANY: Campera Electronic Systems

DATE OF INTERNSHIP: 01/10/2021 - 01/02/2022

With a wide variety of high-quality intellectual property (IP) cores and extensive experience in the field, Campera Electronic Systems was the perfect environment for a junior field-programmable gate array (FPGA) designer like me.

During my internship I had the opportunity to work directly both on projects for Campera’s clients and on research projects. My activities mainly focused on the research project ‘PhAST: Photonic-based AeroSpace Technologies: from Satellite Propulsion Diagnostic tools to Ground Station Equipment’, coordinated by Aerospazio Tecnologie S.r.l.

The objective of this project is to develop a non-intrusive testing and monitoring system for Hall-effect thruster engines, based on real-time spectroscopic and imaging analysis. Campera-ES is a member of the PhAST collaboration, responsible for developing the FPGA-based real-time monitoring system.

During my internship I developed a system that interfaces with a high-speed camera, receives the data, compartmentalizes each frame, and performs fast Fourier transform (FFT) and other analyses. The results are then sent to the external (PC-based) user interface via Ethernet. The project is implemented on a Zynq UltraScale+ FPGA device.
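To make the data flow concrete, the sketch below models the same processing chain in Python on a host PC: the frame is split into tiles and a two-dimensional FFT is computed for each tile. The real system implements this in VHDL on the FPGA; the frame and tile sizes here are arbitrary assumptions for the example.

```python
# Host-side model of the FPGA pipeline described above: compartmentalize a
# grayscale frame into tiles, then compute an FFT magnitude per tile.
# Frame/tile sizes are invented; the real design processes camera data in VHDL.
import numpy as np

def analyse_frame(frame: np.ndarray, tile: int = 64) -> np.ndarray:
    """Return the 2D FFT magnitude of every tile of an (H, W) frame."""
    h, w = frame.shape
    tiles = (frame[y:y + tile, x:x + tile]
             for y in range(0, h, tile)
             for x in range(0, w, tile))
    return np.stack([np.abs(np.fft.fft2(t)) for t in tiles])

frame = np.random.default_rng(0).integers(0, 4096, (512, 512)).astype(float)
spectra = analyse_frame(frame)     # one spectrum per 64x64 tile
print(spectra.shape)               # (64, 64, 64): 64 tiles, 64x64 frequency bins
```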

The system was developed using Xilinx Vivado and the Aldec Active-HDL and Riviera-PRO simulators, and the coding standards respect the in-house protocols defined by Campera-ES. I developed the main processing module of the spectroscopic analysis, and I also adapted a pre-existing camera-FPGA interface to the project’s current needs. The code I developed was written in the hardware description language VHDL and was fully parametric. I familiarized myself with generic, vendor-independent modules, as well as Xilinx proprietary IP cores.

Thanks to this internship, I was able to integrate into a research project collaboration and learn how to work with Campera-ES and Aerospazio engineers, as well as with external partners. I participated in meetings and wrote progress reports. The experience helped me build valuable skills, grow as an engineer and take the next step in my career.

Calliope-Louisa Sotiropoulou, Campera Electronic Systems’ research and development manager, said: ‘It’s always a pleasure to have HiPEAC interns at Campera-ES. Theodoros is our fifth HiPEAC intern; like previous students, he was very skilled, focused and determined. We look forward to hosting more HiPEAC interns in the future.’

Photo credit: © ESA/Planetary Visions

Three-minute thesis

Featured research: Domain-specific low-touch FPGA design flows

In this article, Yun Zhou describes her research into speeding up design cycles for field-programmable gate arrays (FPGAs).

NAME: Yun Zhou

RESEARCH CENTER: Ghent University

SUPERVISOR: Professor Dirk Stroobandt

ADVISOR: Dr Alireza Kaviani, Senior Fellow, AMD

THESIS TITLE: FPGA Placement and Routing: From Academia to Industry

My doctoral research aimed at bridging the gap between academic and industrial computer-aided design (CAD) tools for field-programmable gate arrays (FPGAs), while also improving FPGA designer productivity. FPGAs are programmable integrated circuits that can be used to accelerate applications. To use FPGAs, a designer needs CAD tools, which compile the high-level intents described in a programming language into the low-level hardware resources of the target FPGA.

The compilation can be divided into two major sub-processes: synthesis and physical implementation. As soon as the implementation completes, the designer can verify whether the results meet the application requirements; if they are not, the description must be changed and re-compiled. This iterative, cyclic process is known as the FPGA design cycle. It is intrinsically slow, with multiple iterations often required during application development, which negatively impacts designer productivity. As FPGAs and FPGA applications become ever larger and more complex, the situation is only getting worse.

To remedy the issue, I focused on improving CAD tools, with a particular emphasis on the physical implementation, namely placement and routing (PnR). The improvements target both compile time and quality of results (QoR): a faster tool shortens each design cycle, while better QoR reduces the number of cycles needed. The groundwork of this thesis is the academic PnR tools LIQUID and CROUTE, previously developed in the Hardware and Embedded Systems (HES) research group at Ghent University; I further accelerated these two tools while maintaining high QoR.
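To give non-specialists a feel for what a placer optimizes, the toy sketch below minimizes estimated wirelength with simulated annealing, a classic academic placement approach. It only illustrates the optimization problem; it is not how LIQUID or CROUTE actually work, and the netlist is invented.

```python
# Toy FPGA placement by simulated annealing: swap two blocks at random and
# keep the swap if it shortens total wirelength, occasionally accepting worse
# swaps (more often while "hot") to escape local minima. Illustrative only.
import math, random

def wirelength(placement: dict, nets: list) -> int:
    """Sum of half-perimeter bounding boxes over all nets."""
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement: dict, nets: list, temp=10.0, cooling=0.99, steps=2000):
    cost = wirelength(placement, nets)
    blocks = list(placement)
    for _ in range(steps):
        a, b = random.sample(blocks, 2)                  # propose a swap
        placement[a], placement[b] = placement[b], placement[a]
        new = wirelength(placement, nets)
        if new > cost and random.random() >= math.exp((cost - new) / temp):
            placement[a], placement[b] = placement[b], placement[a]  # undo
        else:
            cost = new                                   # accept (maybe uphill)
        temp *= cooling
    return cost

place = {'A': (0, 0), 'B': (3, 3), 'C': (0, 3), 'D': (3, 0)}
print(anneal(place, nets=[('A', 'B'), ('C', 'D')]))      # minimized wirelength
```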

However, it is not always clear whether research claims based on hypothetical architectures and benchmarks translate to real benefits in commercial products. To find out, I did a remote research internship at the AMD Adaptive & Embedded Computing Group (formerly Xilinx). Thanks to their open-source RapidWright framework, I was able to identify and transfer our PnR techniques to industry. As part of the work, we have provided the FPGA community with a fully open-source router for commercial FPGA architectures, called RWRoute. RWRoute has been integrated into the RapidWright framework, laying the groundwork for fast, customized implementation solutions. As a use case, a customized version of RWRoute is able to achieve a speedup of 9x over the optimal runtime of the commercial counterpart Vivado, with only 14% QoR loss on average.

Dirk Stroobandt commented: ‘While I strongly believe universities should perform research that looks years ahead, that research also has to be applicable to real industrial problems. Yun’s work is the perfect bridge between our group’s earlier research on new FPGA physical design algorithms and the actual design problems that occur in real commercial architectures. It is therefore a giant leap towards applying our research in a commercial setting, a task that is much harder than solving the basic research problem.’

Alireza Kaviani commented: ‘Open source has become an essential component of software development. However, quality electronic design automation (EDA) software, such as backend FPGA tools, has remained mostly proprietary. Existing open-source attempts do not yet produce results of sufficiently high quality to be commercially useful. RWRoute is the first open-source router in the RapidWright framework to produce commercially viable results, and its open-source nature makes it straightforward to adapt the router for specific domains. As an example, we customized the router for emulation and prototyping with about three weeks of engineering work, improving compile time by an order of magnitude.’

SPONSOR THE HiPEAC CONFERENCE
16-18 January 2023, Toulouse

Sponsor the HiPEAC conference and gain valuable exposure to the European computing systems community:
• the largest computing systems research event in Europe, with an extensive range of associated events
• be the first to find out about new technologies, with demonstrations and product announcements
• reach scientific and technological experts: 20% industry participants and attendees from around the world
• build industry-academia relations
• an excellent recruitment opportunity: around 25% of attendees are graduate students
• advertise vacancies on the jobs wall and hipeac.net/jobs; HiPEAC Jobs booth and support arranging interviews

Sponsorship benefits include acknowledgement on the HiPEAC website and in conference communications, a free booth in the industry exhibition (larger booths on request, with privileged locations and customized options for higher tiers), eligibility to co-sponsor student activities (travel grants, prizes, student poster session, best student presentation, etc.), active year-round HiPEAC Jobs support and additional promotional opportunities for exceptional visibility.

Sponsorship levels:
• Bronze (from €1,000): 1 free conference pass; 10-minute presentation during the industry exhibition (depending on availability)
• Silver (from €2,000): 2 free conference passes; 20-minute presentation
• Gold (from €4,000): 3 free conference passes; 20-minute presentation
• Platinum (from €8,000): 6 free conference passes; 30-minute presentation

Interested in a tailored package? Contact us today: sponsorship@hipeac.net
hipeac.net/2023/toulouse/sponsorship • hipeac.net/linkedin • @hipeacjobs • @hipeac