Edge Impulse Guest Edition - Preview


Volume 51, No. 542

This edition of Elektor magazine has been produced in collaboration with and is sponsored by Edge Impulse.

ISSN 0932-5468

Elektor Magazine is published 8 times a year by Elektor International Media b.v.

PO Box 11, 6114 ZG Susteren, The Netherlands
Phone: +31 46 4389444
www.elektor.com | www.elektormagazine.com

Content Director: C. J. Abate

Editor-in-Chief: Jens Nickel

For all your questions: service@elektor.com

Subscription rates: Starting December 1, 2025, the print membership (Gold) will cost €144.95 per year for a one-year subscription. The digital membership (Green) will be €99.95 for a one-year subscription and €169.95 for a two-year subscription.

Become a Member www.elektormagazine.com/membership

Advertising & Sponsoring

Büsra Kas

Tel. +49 (0)241 95509178
busra.kas@elektor.com
www.elektormagazine.com/advertising

Copyright Notice

© Elektor International Media b.v. 2025

The circuits described in this magazine are for domestic and educational use only. All drawings, photographs, printed circuit board layouts, programmed integrated circuits, digital data carriers, and article texts published in our books and magazines (other than third-party advertisements) are copyright Elektor International Media b.v. and may not be reproduced or transmitted in any form or by any means, including photocopying, scanning and recording, in whole or in part without prior written permission from the Publisher. Such written permission must also be obtained before any part of this publication is stored in a retrieval system of any nature. Patent protection may exist in respect of circuits, devices, components etc. described in this magazine. The Publisher does not accept responsibility for failing to identify such patent(s) or other protection. The Publisher disclaims any responsibility for the safe and proper function of reader-assembled projects based upon or from schematics, descriptions or information published in or in relation with Elektor magazine.

Print

Senefelder Misset, Mercuriusstraat 35, 7006 RK Doetinchem, The Netherlands

Distribution

IPS Group, Carl-Zeiss-Straße 5, 53340 Meckenheim, Germany
Phone: +49 2225 88010

Welcome to the Edge of Innovation

Following the success of Elektor’s collaborations with tech leaders like Arduino, SparkFun, and Espressif, we’re excited to welcome Edge Impulse as our 2025 guest editor. Founded in 2019 by Jan Jongboom and Zach Shelby, Edge Impulse has become a major force in the edge AI revolution, enabling developers to bring intelligence directly to the devices where data is created. The company’s rapid innovation and growth recently led to its acquisition by Qualcomm Technologies, further cementing its role in shaping the future of embedded AI.

In this guest-edited edition, we spotlight AI at the edge — where data meets action. The Edge Impulse team worked with us to curate projects, tutorials, and articles that show how far edge AI has advanced in just a few years. Inside, you’ll explore motion recognition with anomaly detection, keyword spotting for voice interfaces, and even vision-language models for edge devices. You’ll gain helpful insights and expert guidance from the Edge Impulse engineering team, who are making machine learning more accessible, efficient, and deployable on real hardware. Whether you’re optimizing MCUs for AI inference or taking your first steps into embedded intelligence, this issue will inspire you to push the boundaries of what’s possible.

With platforms like Edge Impulse, tools once limited to AI experts are now available to every engineer, maker, and student. At Elektor, we’ve always championed hands-on creativity and open collaboration. We invite you to build, train, and share your own edge AI projects on Elektor Labs at elektormagazine.com/labs. Enjoy this issue!


Flash back to Bay Area Maker Faire 2019, bustling with incredible projects and hands-on workshops. Arduino’s Massimo Banzi had just given a keynote address, and now a few of us from the Make: magazine team (of which I was the executive editor) were meeting with C. J. Abate from Elektor in the crew quarters to discuss and share content concepts. It was a warm and insightful gathering, and Elektor’s innovative approaches and incredible history with the DIY electronics community left me inspired.

Three years later, I joined Edge Impulse, overseeing the content initiatives for the startup that was democratizing edge AI. One of my first projects was contributing a piece for a special edition of Elektor. Reconnecting with their editorial team reminded me of the innovation that runs through their content. A spark lit. I knew that Edge Impulse’s groundbreaking AI tools, projects, and real-world use cases could make for an amazing guest edition of Elektor as well.

Now, at the end of 2025, I’m thrilled to be launching that very edition. The timing is perfect; this year has been a turning point for edge AI. We’re seeing it in meetings, conferences, and in the field. Major customers are shipping products made possible by Edge Impulse, while global powerhouses are quickly focusing on intelligence at the edge. (Qualcomm Technologies in particular has made big acquisitions in this space this year, including bringing both Edge Impulse and Arduino under its wing. Wow!)

Working on this edition of Elektor with C. J., Jens, and the rest of the team, I have repeatedly thought back to that initial meeting at Maker Faire. I love how it has come full circle. I hope you enjoy the edition, and invite you to try Edge Impulse (registration is quick at edgeimpulse.com/signup). We can’t wait to see what you build!

International Editor-in-Chief: Jens Nickel | Content Director: C. J. Abate | International Editorial Staff: Hans Adams, Asma Adhimi, Roberto Armani, Eric Bogers, Jan Buiting, Rolf Gerstendorf (RG), Ton Giesberts, Saad Imtiaz, Alina Neacsu, Dr. Thomas Scherer, Jörg Starkmuth, Clemens Valens, Brian Tristam Williams | Regular Contributors: David Ashton, Stuart Cording, Tam Hanna, Ilse Joostens, Prof. Dr. Martin Ossmann, Alfred Rosenkränzer | Graphic Design & Prepress: Harmen Heida, Sylvia Sopamena, Patrick Wielders | Publisher: Erik Jansen | Technical Questions: editor@elektor.com

C. J. Abate (Content Director, Elektor) Jens Nickel (Editor-in-Chief, Elektor)
Mike Senese (Head of Content, Edge Impulse)

Meet Edge Impulse Studio

Step-by-Step Tutorial

26 Smart Appliance Control Using Voice Commands with the Nordic Thingy:53

38 Crash Course: Getting Started with Edge Impulse — Learn to Collect, Train, and Deploy an ML Model with the Arduino Nano 33 BLE Sense

54 PCB Defect Detection — Computer Vision with Raspberry Pi

68 AI Toaster — When Edge AI Meets Breakfast

88 Project Update #5: ESP32-Based Energy Meter — Using Edge AI to Recognize Household Loads

104 Smart Ventilation System: Fusing Sound and Environmental Data — A Dual-MCU Machine Learning Approach for Automated Window and Louver Control


Industry

62 Scaling AI to the Smallest Devices

66 Optimizing Power Efficiency in Battery-Driven Edge AI Devices

76 Leadership, Embedded ML, and the Edge Revolution

115 Bringing Voice Control to Earbuds and Headsets

Next Edition

Elektor Magazine January & February 2026

As usual, we’ll have an exciting mix of projects, circuits, fundamentals, and tips and tricks for electronics engineers and makers. Our focus will be on Power & Energy.

> Low-Noise PSU

> Adjustable USB-C Power Supply

> Dynamic DC Load

> Mains Frequency Meter

> Lithium Battery Practice

Elektor Magazine’s January & February 2026 edition will be published around January 14, 2026.

Arrival of printed copies for Elektor Gold members is subject to transport times.

BONUS EDITION

Want more from Elektor and Edge Impulse?

Check out the Bonus Edition — guest-edited by Edge Impulse — featuring projects like real-time object counting, material classification, a smart gauge reader, and an interview with Qualcomm’s Manvinder “Manny” Singh. Subscribe to the Elektor E-Zine (elektormagazine.com/ezine) to get it delivered to your inbox!

What the Heck Is Edge AI Anyway?

Bringing Intelligence to the Device

The traditional AI workflow follows an off-site approach: data gets sent to and processed by cloud servers, which then return the results to the user. Edge AI flips this by keeping everything on-device. This opens new benefits and practical applications.

Picture this: You’re standing in your kitchen asking a smart speaker for the day’s temperature or to set a timer. Your voice command is processed instantly — no pause, no lag, no buffering. Meanwhile, halfway around the world, a factory robot spots a tiny defect on a production line and pulls the product in milliseconds. In both cases, something remarkable is happening: AI is making split-second decisions without data ever touching the cloud.

Welcome to the edge, where intelligence is right where the action happens.

What Is Edge AI?

Edge AI, the fusion of artificial intelligence and edge computing, is the practice of running machine learning and artificial intelligence models directly on endpoint devices such as smartphones, IoT sensors, cameras, industrial controllers, edge servers, or embedded systems, rather than in centralized cloud data centers.

Think of it as distributing intelligence to the “edge” of your network, where data is actually generated and where decisions need to be made.

The neural network runs locally on the device itself, processing data right where it’s captured. A security camera, for example, can analyze video feeds on-board. A smartphone can perform facial recognition using its own processor. A medical device can interpret readings without an internet connection.

Why Edge AI Matters

For developers building intelligent applications, edge AI solves real problems that cloud-based approaches can’t address or can only address at significant cost and compromise.

Latency — When Milliseconds Matter

Autonomous vehicles, for instance, can’t wait for a round trip to the cloud to decide whether an object ahead is a pedestrian or a paper bag. By the time the data travels to the data center and back, even in best-case latency scenarios, a car traveling at highway speed could already have moved several meters. Edge AI enables inference in single-digit milliseconds, making real-time applications genuinely real-time.
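A quick back-of-the-envelope calculation makes this concrete. The sketch below uses illustrative figures (an assumed highway speed and an assumed round-trip latency), not measured values:

```python
# Distance traveled during one cloud round trip.
# The speed and latency figures are illustrative assumptions.
speed_kmh = 110                  # assumed highway speed
rtt_ms = 150                     # assumed cloud round-trip latency

speed_m_per_s = speed_kmh * 1000 / 3600      # ~30.6 m/s
distance_m = speed_m_per_s * (rtt_ms / 1000)

print(f"{distance_m:.1f} m traveled before the answer arrives")  # ~4.6 m
```

At 110 km/h, even a 150 ms round trip means the car covers roughly a full car length before the cloud can answer.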

Privacy — Keeping Sensitive Data Local

When your smartphone’s camera unlocks your device, that’s deeply personal information, data that cannot leave the device. Edge AI addresses growing privacy concerns and regulatory requirements like the GDPR and the Health Insurance Portability and Accountability Act (HIPAA). Healthcare applications can analyze patient data on local devices without transmitting sensitive medical information, mitigating the risks associated with sending data over networks.

Bandwidth — Economics and Practicality

Streaming raw sensor data to the cloud is expensive and often impractical. Consider a smart factory with hundreds of cameras monitoring production lines 24/7. Uploading all that video would cost thousands in bandwidth alone, not to mention the infrastructure needed to process it. Edge AI lets you analyze video locally and only transmit relevant events or insights. In IoT deployments with thousands of sensors, the cost savings could be substantial.
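To see why, here is a rough cost sketch; every number in it (camera count, bitrate, price per gigabyte) is an illustrative assumption:

```python
# Rough upload volume and cost for a camera fleet streaming to the cloud.
# All figures are illustrative assumptions, not quoted prices.
cameras = 200
bitrate_mbps = 4                  # assumed per-camera video stream
hours_per_day = 24
cost_per_gb = 0.05                # assumed transfer price in USD

gb_per_day = cameras * bitrate_mbps / 8 * 3600 * hours_per_day / 1000
print(f"{gb_per_day:,.0f} GB uploaded per day")                # ~8,640 GB
print(f"~${gb_per_day * cost_per_gb * 30:,.0f}/month for transfer alone")
```

Shipping only detected events instead of raw video cuts that figure by orders of magnitude.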

Reliability — Intelligence That Works Offline

Cloud connectivity isn’t always available or reliable. Imagine first responders or industrial workers facing life-threatening situations beyond network coverage. Under normal circumstances, cloud-based systems can monitor their vital signs, but when these workers “go dark” and become unreachable via the cloud, real-time decision-making powered by edge AI keeps that protection in place.

Edge AI ensures that critical systems continue to work regardless of network conditions. This means a manufacturing line doesn’t halt production because the internet went down. For developers, it means designing systems that are resilient and autonomous, with cloud connectivity as an enhancement rather than a requirement.

Meet Edge Impulse Studio

Easily Build and Deploy Edge AI Models

Historically, creating an ML model and deploying it to an embedded device required considerable manual effort and the use of a variety of different platforms. Edge Impulse Studio was built to consolidate and standardize this entire workflow. The cloud-based platform makes it easy to build, train, and deploy machine learning models for edge devices all in one place.

Creating a machine learning (ML) model and deploying it to an embedded device involves many steps: collecting and storing data, using data processing tools, training on yet another platform, converting the model into a format for specific embedded hardware, and flashing firmware for deployment. Historically, this was neither a quick nor a reliable endeavor for early practitioners of machine learning.

Edge Impulse Studio was built to consolidate and standardize this entire workflow. Our cloud-based platform makes it easy to build, train, and deploy machine learning models for edge devices all in one place, while providing visibility and features that greatly enhance the MLOps process for the user. Designed for engineers and developers, whether they’re building a product or just picking up ML for the first time, it streamlines the entire workflow — from data collection to deployment — using an intuitive web interface.

Getting started in Edge Impulse is simple: just visit edgeimpulse.com/signup and register your free account. From there, you can jump straight into creating a new project, which serves as your workspace for a specific application. Projects keep your data, models, and results organized, making it easy to iterate and improve your use case.

Other articles will take you through building a project from end to end. Here’s an overview of the platform.

Key Features in Your Project at a Glance

Once you are registered and logged into your Edge Impulse Studio account, you’ll find your account overview in the top-right corner of each screen. Click on your user icon to find your project list and account settings. Before you start building your project, it’s worth setting up the hardware Target for your project, also in the top right.

Edge Impulse provides on-device performance estimation throughout the platform, allowing you to balance your model performance with device resources. There are a number of example targets to choose from, or you can build your own.

The column on the left side of Edge Impulse Studio is the main navigation, with numerous features to guide you through building your ML pipeline. Let’s take a look at what each one is.

Dashboard

This brings up an overview of your project (Figure 1), allowing you to manage collaborators, generate API keys, and manage project settings.

Devices

On the Devices screen (Figure 2), you can connect hardware to collect real-world data for your models, connect to your smartphone’s sensors, or use our command line tool to forward data from fully supported development kits. If you want to collect data remotely from devices in the field, you can also use our API to send data directly to a project. Connecting a device makes it easy to iterate on your model by collecting more data.
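As a sketch of that remote path, a gateway script could post a sample to the Edge Impulse ingestion service. The file name and label below are placeholders; the API key comes from your project’s Dashboard:

```python
# Minimal sketch: send one sample to an Edge Impulse project via the
# ingestion service. The key, label, and file name are placeholders.
import requests

EI_API_KEY = "ei_..."                 # project API key (Dashboard > Keys)

with open("sample.wav", "rb") as f:
    res = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={
            "x-api-key": EI_API_KEY,
            "x-label": "keyword",     # label stored with this sample
        },
        files={"data": ("sample.wav", f, "audio/wav")},
    )
res.raise_for_status()
print(res.status_code, res.text)
```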

Data Acquisition

This is where you collect, label, and organize sensor data (audio, image, or time-series). You can also upload data directly here — the CSV Wizard (in the top nav of this window) makes it easy to ingest time-series data in CSV format.
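As an illustration of the kind of file the wizard can map (the sensor names and values here are synthetic), a time-series CSV typically carries a timestamp column plus one column per sensor axis:

```python
# Generate a synthetic accelerometer CSV in a shape the CSV Wizard can map:
# one timestamp column plus one column per sensor axis.
import csv
import math

with open("accelerometer.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "accX", "accY", "accZ"])
    for i in range(200):
        t_ms = i * 10                        # 100 Hz -> 10 ms steps
        writer.writerow([t_ms,
                         round(math.sin(i / 10), 4),
                         round(math.cos(i / 10), 4),
                         9.81])
```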

Keyword Spotting with Edge Impulse

Collect, Train, and Deploy

We’re all familiar with the voice commands for smart devices, such as “Alexa,” “Hey Siri,” or “OK Google.” But how does this actually work? The process is called keyword spotting, or audio classification, and is a machine learning approach that can recognize audible events, particularly voice, even in the presence of other background noise or chatter.

Let’s learn how to build a keyword spotting model with Edge Impulse. We’ll collect audio data from microphones, use signal processing to extract the most important information, and train a deep neural network that can tell you whether your keyword was heard in a given clip of audio. Finally, we’ll deploy the system to an embedded device and evaluate how well it works. At the end of this tutorial, you’ll have a firm understanding of how to classify audio using Edge Impulse.

There is also a video version of this tutorial [1]. You can view the finished project, including all data, signal processing, and machine learning blocks, at [2].

Prerequisites

For this tutorial, you’ll need a supported device and the Edge Impulse CLI. If your device is connected under Devices in Edge Impulse Studio, as shown in Figure 1, you can proceed.

Collecting Your First Data

Your first job is to think of a great keyword. It can be your name, an action, or even a growl — it’s your party. Do keep in mind that some keywords are harder to distinguish from others, and especially keywords with only one syllable (like “one”) might lead to false positives (e.g., when you say “gone”). This is the reason that Apple, Google, and Amazon all use at least three-syllable keywords (“Hey Siri,” “OK Google,” “Alexa”). A good one would be “Hello world.”

To collect your first data in Edge Impulse, go to Data acquisition, set your keyword as the label, set your sample length to 10 s, your sensor to microphone, and your frequency to 16 kHz (Figure 2). Then click Start sampling and start saying your keyword over and over again (with some pause in between).

Note: Data collection from a development board might be slow; you can use your mobile phone as a sensor to make this much faster.
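If you prefer to record clips on a laptop and upload them via Data acquisition, a minimal sketch looks like this. It assumes the third-party sounddevice and soundfile packages (pip install sounddevice soundfile); the output file name is a placeholder:

```python
# Record a 10 s, 16 kHz mono clip, matching the sampling settings above.
# Assumes: pip install sounddevice soundfile
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 16_000        # 16 kHz, as set in Data acquisition
SECONDS = 10

audio = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
               channels=1, dtype="int16")
sd.wait()                   # block until the recording finishes
sf.write("hello_world.wav", audio, SAMPLE_RATE)
print("Saved hello_world.wav -- upload it with your keyword as the label")
```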

Figure 1: Devices tab with the device connected to the remote management interface.

A New Chapter for Arduino

From Hobby Board to Edge Computing Powerhouse

Arduino enters the AI era with the acquisition by Qualcomm Technologies, merging maker simplicity with cutting-edge intelligence. The new Arduino UNO Q and a new programming IDE named Arduino App Lab bring accessible AI and edge computing to millions of innovators.

Arduino Enters the AI Age

For two decades, Arduino has been the gateway to electronics, turning curious tinkerers and hobbyists into hardware developers with a simple board. Now, with over 33 million active users in a community that spans middle school classrooms to NASA labs, Arduino is taking its biggest leap yet.

In October 2025, Arduino entered into an agreement to join Qualcomm Technologies, marking a pivotal moment in the maker movement’s evolution. More than a tech acquisition, this is a strategic convergence that promises to democratize edge and AI development the same way Arduino democratized electronics prototyping.

The first fruit of this partnership is the Arduino UNO Q (Figure 1), a revolutionary board that features a dual-brain design. One brain is a Debian Linux-capable microprocessor powered by the Qualcomm Dragonwing™ QRB2210, which offers AI and graphics acceleration, quad-core performance, camera/audio/display support, and a lot more (Figure 2). The second brain is an on-board STM32U585 MCU, which handles real-time control with microsecond precision. Starting at just $44 for the 2 GB RAM/16 GB eMMC version, it’s a full Linux computer that maintains the simplicity that has made Arduino so impactful.

Arduino App Lab — Unifying Development from Sketch to AI

The Arduino IDE revolutionized embedded programming by making it accessible to everyone. Now, Arduino App Lab takes that philosophy into the AI era, creating a new software tool that bridges the gap between simple sensor readings and sophisticated machine learning applications.

Arduino App Lab is the all-in-one development environment that redefines how developers, educators, students, and innovators build applications across embedded systems, Linux, and edge AI.

Preloaded on next-generation Arduino platforms such as the UNO Q, Arduino App Lab empowers users to seamlessly combine Arduino sketches, Python scripts, and containerized AI models into fully integrated applications, all from a single interface (Figure 3).

Figure 1: The Arduino UNO Q board.
Figure 2: The Arduino UNO Q features a Qualcomm Dragonwing QRB2210 processor.

Getting Started with Object Detection on Edge Devices

Object detection is an AI technique that can identify and locate specific objects in an image, often coupled with a camera system that uses static images or a video feed. In this tutorial, we are going to use Edge Impulse to build an onboard machine learning system that can recognize and track multiple objects in your house through a camera.

Adding sight to your embedded devices can make them see the difference between poachers and elephants, count objects, find your Lego bricks, and detect dangerous situations. Follow along to learn how to collect images for a well-balanced dataset, apply transfer learning to train a neural network, and deploy the system to an edge device.

You can view the finished project, including all data, signal processing, and machine learning blocks, by visiting [1].

Prerequisites

To get started, you’ll need a supported device [2] to build the dataset (in this case, a series of images with annotations describing the objects they depict). If you don’t have any of these devices, you can also upload an existing dataset through the uploader [3], including annotations [4].

Building a Dataset

In this tutorial, we’ll build a model that can distinguish between two objects on your desk — we’ve used a lamp and a coffee cup, but feel free to pick two other objects. To make your machine learning model see, it’s important that you capture a lot of example images of these objects. During training, these example images teach the model to distinguish between them.

Capturing Data

Capture the following amounts of data — make sure you capture a wide variety of angles and zoom levels. It’s fine if both objects are in the same frame. We’ll be cropping the images later to be square, so make sure the objects are not on the far edges of the frame. Gather:

> 30 images of a lamp.

> 30 images of a coffee cup.

You can collect data from the following devices:

> Collecting image data using a device connected to Edge Impulse Studio [5].

> Collecting image data with your mobile phone [6].

Or you can capture your images using another camera and then upload them by going to Data acquisition and clicking the Upload icon.

Labeling the Data

With the data collected, we need to label this data. Go to Data acquisition, verify that you see your data, then click on Labeling queue to start labeling (see Figure 1).

If you don’t see the Labeling queue option, go to Dashboard, and under Project info → Labeling method, select Bounding boxes (object detection).

The labeling queue shows you all the unlabeled data in your dataset. Labeling your objects is as easy as dragging a box around the object and entering a label. To make your life a bit easier, Edge Impulse tries to automate this process by running an object-tracking algorithm in the background. If you have the same object in multiple photos, it can move the boxes for you, and you just need to confirm the new box. After dragging the boxes, click Save labels, and repeat this until your whole dataset is labeled, as shown in Figure 2.

Afterwards, you should have a well-balanced dataset listed under Data acquisition in your Edge Impulse project.

AI-Assisted Labeling

Use AI-Assisted Labeling for your object detection project! For more information, check out our blog post [7].
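Looking ahead to deployment: once the model is trained and exported (for example, as a Linux .eim file), a short inference sketch with the Edge Impulse Linux Python SDK could look like the following. The model path and image file are placeholders:

```python
# Sketch: run a deployed object-detection model on a single image using
# the Edge Impulse Linux Python SDK.
# Assumes: pip install edge_impulse_linux opencv-python
# "modelfile.eim" and "desk.jpg" are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("modelfile.eim") as runner:
    runner.init()                                  # load the model
    img = cv2.imread("desk.jpg")
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # OpenCV loads BGR; SDK expects RGB
    features, cropped = runner.get_features_from_image(img)
    res = runner.classify(features)
    for bb in res["result"]["bounding_boxes"]:
        print(f"{bb['label']} ({bb['value']:.2f}) at "
              f"x={bb['x']} y={bb['y']} w={bb['width']} h={bb['height']}")
```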

AI Toaster

When Edge AI Meets Breakfast

Many cheap consumer toasters still rely on simple open-loop timers to control the toasting process. This project tackles that annoying problem with an over-the-top and sufficiently over-engineered solution: adding a sensor suite and edge AI to “smell” when the toast is done. While the project is playful, it demonstrates the power of embedded machine learning to address various predictive-maintenance issues by identifying problems before they become worse.

Most modern toasters and toaster ovens use a timer to determine the crispiness of your bread slice. This open-loop control system suffers from a lack of feedback, as there is no way for the machine to know when the toast is done. The user must guess at a cook time for their desired level of doneness. Achieving the perfect toast is easily thwarted by different sizes and types of bread, and the timer must be manually adjusted by the user.

In 2021, Benjamin Cabé built an AI-powered “Artificial Nose” [1] using various gas sensors and a relatively simple neural network to classify various odors, such as coffee and whiskey. Benjamin’s sensor fusion project inspired me to create a lighthearted application: an artificial nose that could detect the relative doneness of toast and stop the toaster at the perfect level. If I could smell when toast was burnt, I figured that I could train a machine learning model to do the same.

As a side note, some toasters do have a feedback mechanism. Many of these rely on temperature sensors to measure the surface temperature of the bread and stop the toasting process at a particular threshold. Toasters from the 1940s can be found with bimetallic strips near the bread slots. This type of feedback mechanism is simple and effective.

Figure 1: Holding a piece of AI-produced toast. (Source: DigiKey, licensed under CC BY 4.0)

With Zach Shelby

Leadership, Embedded ML, and the Edge Revolution

Edge Impulse, a Qualcomm company

As edge devices grow more powerful, AI is shifting away from the cloud and closer to where data is generated. Zach Shelby (Cofounder and CEO, Edge Impulse) discusses the evolution of embedded ML, the challenges of scaling developer tools, and what’s next for intelligent hardware.

When Zach Shelby first tinkered with electronics and dial-up Internet connections as a student, he never could have imagined that his curiosity would one day help shape the Internet of Things and drive the global edge AI movement. From co-founding startups to serving in executive roles at Arm, Shelby has consistently been at the intersection of emerging technologies and industry-defining innovation. Today, as co-founder and CEO of Edge Impulse — now part of Qualcomm — he continues to champion the democratization of machine learning for developers worldwide. Shelby reflects on the evolution of embedded AI, the leadership lessons he’s learned from startups to corporate boardrooms, and his vision for how edge AI will redefine intelligent devices in the years ahead.

Abate: Let’s start with your background. What was your interest while studying at university? Was AI on your radar back then?

Shelby: Not at all! When I was in high school, I got into electronics and programming and was fascinated by the early Internet (when you had to dial in via a modem). When I went to university, I studied computer engineering in Michigan and later specialized in communications during graduate school in Finland. That combined my early interests in electronics and the Internet, which inspired me to help create the Internet of Things.

Abate: As someone who has transitioned from startups to executive roles at Arm and back to founding again, how do you adapt your leadership style to different stages of company growth?

Shelby: Founders have to grow with our companies to succeed, and each stage is different. I guess I learned that the hard way with my first startup — the key is not to hold on too tightly or try to micromanage things. Early stages are very hands-on. As a company grows, you need to trust in the right leaders for each stage of growth. This ability to rapidly adapt to different situations is equally useful in corporate leadership.

Abate: In a 2019 Medium post titled “Embedded ML for All Developers,” you said that we were then experiencing the “third wave of embedded compute.” How has the third wave evolved since 2019?

Shelby: It certainly has, more than I could have imagined. ML has not only enabled us to do so much more with embedded systems, but it has started to drive silicon roadmaps with NPU acceleration.

Zach Shelby, Cofounder & CEO

Get to Know

Questions from the Elektor Community

Elektor’s global community comprises talented engineers, creative makers, and electronics innovators exploring everything from AI-powered IoT devices to embedded systems and edge computing. We invited our friends at Edge Impulse to share insights into their groundbreaking platform, their approach to enabling developers with edge AI, and how they’re shaping the future of machine learning on the edge.

How has the Qualcomm acquisition impacted your roadmap?

Muhammed Söküt (Germany)

The Qualcomm Technologies acquisition has been a catalyst for growth and innovation. It’s given us the resources and focus to accelerate our most important initiatives, while staying true to our developer-first roots. In practice, this means we can move faster on big leaps forward, like bringing generative AI-powered labeling to life, advancing our EON Tuner, our unique take on AutoML, and launching new capabilities such as model monitoring directly in the platform.

While we remain fully committed to supporting all edge AI hardware, being part of Qualcomm Technologies lets us take things a step further. We can now deliver deeper integrations and advanced optimizations specifically for devices with Qualcomm® technology, giving developers a seamless path from prototype to production on some of the most capable edge AI devices.

Most importantly, it’s allowed us to grow our team and invest even more in delivering an outstanding product experience for every user, from students to enterprise customers, while continuing to push the boundaries of what’s possible at the edge.

And this is just the start. We’re laying the foundation to make edge AI development faster, more accessible, and more powerful than ever before.

Alessandro Grande

AI at the Edge: Powering the Next Generation of Devices

AI is moving from the cloud to the edge — reshaping how we interact with devices in our homes, factories, and daily lives. With powerful processors and foundation models running locally, intelligence is about to be embedded everywhere.

Over the past few years, it has been impossible to escape the hype around AI. The big AI research labs have been making headlines with huge fundraising deals, countless new data centers are under construction, and Internet personalities can’t stop debating how the future is going to look.

But the most interesting developments, the ones that will truly shape our day-to-day lives in the next couple of decades, are happening more quietly. In the past five years, machine learning (ML) and embedded systems have made an unlikely alliance. What started with experiments inside big tech companies has become a fully fledged industry, combining machine learning, advanced digital signal processing, and state-of-the-art AI with increasingly powerful and efficient embedded processors.

Source: Adobe Stock/Paul Studio
