Embedded Computing Design Spring 2025 with Embedded World Profiles


Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.

Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.

Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.

EMBEDDED COMPUTING BRAND DIRECTOR Rich Nass rich.nass@opensysmedia.com

EDITOR IN CHIEF Ken Briodagh ken.briodagh@opensysmedia.com

ASSISTANT MANAGING EDITOR Tiera Oliver tiera.oliver@opensysmedia.com

PRODUCTION EDITOR Chad Cox chad.cox@opensysmedia.com

TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com

CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com

WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com

EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

WEBCAST MANAGER Marvin Augustyn marvin.augustyn@opensysmedia.com

SALES/MARKETING

DIRECTOR OF SALES Tom Varcie tom.varcie@opensysmedia.com (734) 748-9660

STRATEGIC ACCOUNT MANAGER Rebecca Barker rebecca.barker@opensysmedia.com (281) 724-8021

STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838

EAST COAST SALES MANAGER Bill Baumann bill.baumann@opensysmedia.com (609) 610-5400

SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582

DIRECTOR OF SALES ENABLEMENT AND PRODUCT MARKETING Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818

INSIDE SALES Amy Russell amy.russell@opensysmedia.com

STRATEGIC ACCOUNT MANAGER Lesley Harmoning lesley.harmoning@opensysmedia.com

EUROPEAN ACCOUNT MANAGER Jill Thibert jill.thibert@opensysmedia.com

TAIWAN SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com

CHINA SALES ACCOUNT MANAGER Judy Wang judywang2000@vip.126.com

PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com

EXECUTIVE VICE PRESIDENT John M. McHale III john.mchale@opensysmedia.com

EXECUTIVE VICE PRESIDENT AND ECD BRAND DIRECTOR Rich Nass rich.nass@opensysmedia.com

DIRECTOR OF OPERATIONS AND CUSTOMER SUCCESS Gina Peter gina.peter@opensysmedia.com

GRAPHIC DESIGNER Kaitlyn Bellerson kaitlyn.bellerson@opensysmedia.com

FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com

SUBSCRIPTION MANAGER subscriptions@opensysmedia.com

OFFICE MAILING ADDRESS 3120 W Carefree Highway, Suite 1-640 • Phoenix AZ 85087 • Tel: (480) 967-5581

ADVERTISER

Acces I/O Products Inc. – M.2: The New, More Flexible Alternative to PCIe Mini Cards

AdaCore – SPARK and Rust for Critical Embedded Systems: A Conversation with José Ruiz


Discover how SCI Semiconductor’s ICENI™ MCU family, built on CHERI technology, is transforming embedded computing. With enhanced memory safety, fearless code reuse, and compliance with evolving global regulations, the ICENI™ devices are designed to set a new standard for critical infrastructure industries. Visit SCI Semiconductor at embedded world 2025 to explore the future of embedded security. Show profiles from embedded world 2025 begin on page 28.

Embedded Executive: Your Industrial Application Needs an Industrial MCU, Infineon
Tune In: https://embeddedcomputing.com/technology/iot/embedded-executive-your-industrial-application-needs-an-industrial-mcu-infineon

A Modular Future: Chiplets, AI, and Advanced Packaging
Tune In: https://embeddedcomputing.com/technology/processing/chips-and-socs/a-modular-future-chiplets-ai-and-advanced-packaging

Chipping In: Europe’s Role in the Semiconductor Industry
Tune In: https://embeddedcomputing.com/technology/processing/chips-and-socs/chipping-in-europes-role-in-the-semiconductor-industry

State of the Embedded Industry World

It’s 2025, and the industry’s signature event, embedded world, is going to be bigger than ever in Nuremberg, Germany. Coming off a very successful first-ever embedded world North America, the signs seem clear that the embedded industry is growing and becoming ever more integrated into every vertical from automotive to zoology.

But what technologies and factors are driving this growth? Well, based on what I’ve been reading and gleaning from conversations over the past year, here are one humble editor’s thoughts. One thing to keep in mind: these are not predictions (I don’t do those, having not been gifted with clairvoyance, sadly); rather, they are my subjective observations. So, with that sizeable grain of salt in mind, let’s press on.

AI

No surprise here, I’m sure. If you’ve been paying any attention at all, you’ll have seen for yourself that every technological discussion (and many that aren’t tech-focused) comes around to AI and ML at some point. What I think is most interesting right now, however, is a distinctly positive evolution: many of the discussions at the most practical engineering and developer levels are not about AGI, LLMs, or the generative AI that relates to either. Rather, these tactical software and hardware implementation and execution plans are being made around tightly controlled, customized AI models, trained only on the specific data relevant to the task at hand. This is the best possible future for AI use at the industrial, supply chain, or edge computing level as far as I’m concerned, and it’s great to see it.

HPC

Some of you I spoke to in 2024 will be surprised at my take here, because I’ve nearly reversed my position over the course of the year (but not entirely, of course). I began last year expecting a downturn in the HPC space, because of all the hype and spending headed toward the edge of the network rather than the cloud or the server. I seem to have been a bit … mistaken on that front. Instead, HPC has grown and expanded, driven in part by all the aforementioned AI deployments. What I still hold as likely, however, is that HPC, and data centers in general, are going to be investing in energy efficiency and renewable sources more than ever before. Look to GaN and SiC to gain big from this.

Sensor Fusion & IoT

I have written before about my grand unification theory that brings embedded systems, IoT, and AI all together into one horizontal layer with sensor fusion to enable fully integrated, powerful intelligent systems in almost every industry. I’m privately calling this horizontal layer the “EnableMantle” because I love a portmanteau and I’m bad at naming things. (Editor’s note: he does and is.) But I also think the name fits as this layer could act as an enablement layer in between hard- and software (talk about a fusion!) in any product, and 2025 seems like the year that we’ll start seeing these becoming more explicitly linked together.

Security

This one’s a perennial trend and always will be. What’s different now is the introduction of AI algorithms into the process. We don’t yet know what the capabilities of these AI-capable bad actors will be, and so security is (as always) more important than ever. I’m not going to outline the strategies here, but watch these pages for security coverage all year and forever, since the problems won’t get simpler or less serious as computing gets faster, more powerful, and more sophisticated.

embedded world 2025 #ew25

These topics and many more will be taking center stage in Nuremberg this year for embedded world 2025. The exhibit halls will be loaded with companies from startups to Fortune 100 behemoths, across the spectrum from hardware to M2M, from services to systems, and everything else.

I hope I’ll see you there.

The Benefits of Containerization for Embedded Systems

Containers are an evolutionary technology for standardized, portable software packaging. They were first introduced for developing and deploying web applications and microservices, gaining wide adoption in the IT industry.

Today, we also see them being adopted in the embedded industry, for example, in the development of automotive electronic control units (ECUs). This is for both adaptive AUTOSAR and classic AUTOSAR meant for deeply embedded safety- and security-critical systems where C and C++ are the dominant programming languages.

Containers offer quick reproduction of identical environments for embedded development, testing, staging, and production at any stage of software development, which increases overall productivity and code quality while reducing labor and cost.

With containers, organizations and their suppliers have found amazing levels of agility, flexibility, and reliability. Companies use containers to:

› Improve software development time to market.

› Improve code quality.

› Address challenges in managing the growing complexity of development ecosystems.

› Dynamically respond to software delivery challenges in a fast and continuously evolving market.

One example is how containers deploy directly into today’s modern Agile development workflows like DevOps/DevSecOps.

Before getting into the details and the benefits, let’s put this technology in context and answer the following questions.

› Why do containers exist and what are they?

› How do containers fit within the software development life cycle?

› How do they affect business outcomes?

Container Technology

The Open Container Initiative (OCI) is a Linux Foundation project established in 2015 by various companies for the purpose of creating an open industry standard around container formats and runtimes. The standard allows a compliant container to be seamlessly portable across all major operating systems, hardware, CPU architectures, public/private clouds, and more.

A container is an application bundled with other components or dependencies like binaries or specific language runtime libraries and configuration files. Containers have their own processes, network interfaces, and mounts. They’re isolated from each other and run on top of a “container engine” for easy portability and flexibility (Figure 1).

In addition, containers share an operating system kernel. They can run on any of the following:

› Linux, Windows, and Mac operating systems

› Virtual machines or physical servers

› A developer’s machine or in data centers on premises

› The public cloud

It’s important to understand the role of the container engine, since it provides crucial functionalities like:

› Operating system level virtualization.

› Container runtime to manage the container’s life cycle (execution, supervision, image transfer, storage, and network attachments).

› Kernel namespaces to isolate resources.

There are also various container engines, including Docker, runC, CoreOS rkt, LXD, CRI-O, Podman, containerd, Microsoft Hyper-V, LXC, Google Container Engine (GKE), Amazon Elastic Container Service (ECS), and more.

Other concepts worth mentioning at a glance are container images and container orchestration.

A container image is a static file with executable code and includes everything a container needs to run. Therefore, a container is a running instance of a container image. Additionally, in large and complex deployments, there can be many containers within a containerization architecture. Managing the life cycle of all the containers becomes especially important.
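To make the image-versus-instance distinction concrete, here is a minimal sketch in Rust that shells out to the Docker CLI (any OCI-compliant engine with a compatible CLI would behave similarly). It assumes a local container engine is installed; the public “alpine” image is just an example:

```rust
use std::process::Command;

fn main() {
    // `docker run` creates a container: a running instance of the named
    // image. `--rm` removes the container once it exits.
    let status = Command::new("docker")
        .args(["run", "--rm", "alpine", "echo", "hello from a container"])
        .status()
        .expect("failed to launch the docker CLI");

    println!("container exited with status: {status}");
}
```

Running the same image a second time produces a second, fully isolated container, which is exactly the reproducibility property the rest of this article builds on.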

Container orchestration manages workloads and services by provisioning, deploying, scaling up or down, and more. Popular container orchestration solutions are Kubernetes, Docker Swarm, and Marathon.

Let’s now consider the technical and business gains from the flexibility of having your applications packaged up with all their dependencies so that they run quickly and reliably from one computing environment to another.

Technical and Business Gains

The development ecosystem of embedded software systems can be overly complex, and having large teams all work within a common or identical environment compounds the complexity. For example, development team environments consist of compilers, SDKs, libraries, IDEs, and, in some cases, modern technologies like artificial intelligence (AI) and much more. All of these tools and solutions must work together, as must all of their dependencies, plus the ever-evolving release versions that fix discovered security vulnerabilities and identified flaws, handle licensing changes, and much more.

Additionally, organizations should have separate environments for development, testing/validation, production, and perhaps disaster recovery. With containers, organizations can effectively manage all these complex development environments by easily scaling application dependencies up or down, reverting the development environment to a specific state, and rolling out container images as needed, ensuring every team member gets a consistent development environment.

FIGURE 1: Containerization architecture

Today, many organizations replicate the development and test environments on each developer/tester machine, leaving room for human error.

I recall a time when I had developed and tested an embedded application on my desktop and it worked perfectly, so I committed the code. Much later, I was informed by the QA team during acceptance testing that the application did not work. I started debugging the problem reported but could not reproduce the issue described.

I brought other development team members into the fold to help identify and resolve the issue, but we just could not reproduce the problem. After days of investigating, collaborating with the QA team, and at times grasping at straws, we finally started to investigate the QA’s build environment.

Everything was identical except that the QA team had updated their version of the operating system (OS) and compiler on their machines. In their version of the OS, modifications to the task prioritization handling had been made. The code logic was sound, but the running task was being blocked by another task with the same priority. Decreasing either one of the competing task priorities by one solved the problem.

Because code logic was the first suspected culprit, we spent a significant amount of labor and time investigating and resolving the issue. Looking into the problem consumed more time from more development and QA engineers. We held additional meetings and status reports and pushed out other assignments. The ripple effect is not entirely known, but substantial costs were incurred.

If a centralized management and deployment of development environments with containers had been in place, it would have kept both the development and the QA team’s deployment environment in sync, and this problem could have been entirely avoided.

Embedded Deployment Strategies

Software development using containers can be configured and deployed in many ways. Organizations can determine and evolve their use of containers based on the tools already in use, the level of automation desired, and team organization.

A strategy could be one where a common container is used on the developer’s host machine to make, build, and run their applications. This ensures that every developer is using the exact same set and version of build tools and run environment.

FIGURE 2: Sample build and run on developer’s machine with no use of containers.
FIGURE 3: A common container is used on the developer’s host machine to make, build, and run their applications.
FIGURE 4: In this example there are two containers. One container makes and builds the application, while the other runs the application.

Many embedded teams employ Jenkins, GitHub, Azure, GitLab, and others for continuous integration and continuous delivery (CI/CD). In the example shown in Figure 4, there are two containers: one makes and builds the application, while the other runs it. This illustrates the flexibility that containers offer.

Organizations may also have geographically distributed teams. Having libraries of container images can facilitate sharing containers and eliminate reinventing the wheel. This promotes efficiency by reusing existing containers for different purposes. Shared containers ensure quality throughout the development supply chain.

Embedded Test Automation in the CI/CD Pipeline

Containers are also being used for software testing in the DevOps workflow. By integrating containerized testing solutions into the CI/CD pipeline, organizations can perform static analysis to ensure compliance with standards like MISRA C:2012, MISRA C++ 202x, AUTOSAR C++14, CERT, CWE, OWASP, and others.

Software test automation tools like Parasoft C/C++test offer a container that can be found in Docker Hub. In addition, unit testing can also be performed, including structural code coverage of statements, branches, and/or modified condition/decision coverage (MC/DC). Only upon successful completion of testing will the software be committed into the master branch.

This containerized deployment within the build process produces amazing efficiencies in code development and code quality. Having multiple engineers working in parallel in this quick and automated continuous integration cycle ensures that a solid software base is produced and maintained throughout the entire product life cycle.

Conclusion

Organizations building embedded real-time safety- and security-critical systems are adopting a DevOps workflow that includes containers. Others are in the process of adopting a containerized strategy. The ones that have constructed a CI/CD pipeline and have been using it for several years report that they have been able to better predict the delivery of software and easily accommodate changes in requirements and design.

Along with improved productivity and lower testing costs, development teams have reported improvements in product quality and time to market. Furthermore, embedded organizations have informed us of measurable drops in QA problem reports and customer tickets.

FIGURE 5: Having libraries of container images can facilitate sharing containers and eliminate reinventing the wheel.
FIGURE 6: Having multiple engineers working in parallel in this quick and automated continuous integration cycle ensures that a solid software base is produced and maintained throughout the entire product life cycle.
FIGURE 7: Organizations building embedded real-time safety- and security-critical systems are adopting a DevOps workflow that includes containers.

Consumers on AI: It’s About Engagement

Generative AI is everywhere. It’s on the phone you just looked at, the social media you consume, and that wearable that’s keeping you honest with your step count. As the technology becomes more ingrained in our daily lives, the new challenge facing companies isn’t simply innovation: It’s a matter of truly understanding consumers’ needs and delivering the products they want.

Consumer technology companies have gone all-in, promising innovation that’s “revolutionary,” “transformative,” and “customized.” And while AI’s potential is obvious, even undeniable, there’s a disconnect between those bold promises and their perceived value in the eyes of consumers.

Confronting the Engagement Gap

“How will this product improve my life?”

“Is it worth the price?”

Consumer skepticism around AI-integrated devices defines the Engagement Gap, a disconnect between the bold promises of AI-integrated devices and how consumers perceive them.

Only 13 percent of consumers are early adopters, creating a challenge for companies to communicate how their innovative new tech applies to, and can improve, consumers’ daily lives.

And let’s be candid, new tech isn’t cheap. Premium pricing further limits adoption for “smart” devices, especially when consumers don’t fully understand what’s in it for them. 20 percent of consumers take a “wait and see” approach, waiting until technology has been available for a while before purchasing.

So how do companies address the Engagement Gap?

It all starts with the customer.

True Understanding of Customer Desire Is a Competitive Edge

What does “good” look like for a consumer?

In broad terms, we can define “good” as something that feels uniquely tailored to each consumer while resonating widely. This might mean focusing on ease of use, sustainability, affordability or other factors that match the consumers’ daily routines.

Uber’s hyper-personalized ad platform [1] shows a strong example of this approach. Uber made experiences simple, intuitive, and personalized, matching advertisers with users at meaningful moments. Imagine heading to work in an Uber when you see an ad for a discount at your favorite coffee shop next door. On the way home, an ad reminds you that your favorite TV show kicks off a new season next week.

This strategy leveraged Uber’s ability to effectively integrate consumer data to reach them at precisely the right time. Uber was intentional in its consumer-first approach, knowing they could win ad revenue and maintain strong consumer satisfaction.

User-centricity

Devices are often positioned as futuristic and groundbreaking, but companies struggle to connect those products to practical, everyday use.

In the consumer technology industry, innovation is a necessity, a promise and a challenge. It’s also disruptive. The key is to make the disruptive feel natural, prioritizing intuitive design and matching real-world needs.

When companies steer innovation with empathy and a deep understanding of what the consumer wants [2], the results are loyalty, sustainable growth, and life-changing solutions.

Say Goodbye to Jargon

Hype, technical jargon, and specifications may be great attention-getters for industry and investors, but they’re white noise to consumers.

The terminology used to explain new tech often leaves consumers scratching their heads. A staggering 71% of consumers report being confused by the way features and benefits are explained.

The most effective marketing doesn’t just sell a product; it sparks an emotional connection. The language makes consumers think, “I need this in my life.”

Let’s get rid of the jargon and instead meet consumers where they are, marketing in a way that’s transparent, relatable and human.

Help Bridge the Gap by Building Trust

Think about all the connected devices a person can own: smartphones, computers, routers, medical devices, video game consoles, speakers, etc. The average US household owns around 25 connected devices. Not surprisingly, users have privacy and security concerns around the technology in those devices.

Only 39 percent of consumers say they trust companies to act with good intentions, and 43% trust companies to make honest claims. Given that information, it’s clear that offering robust and visible security measures is essential.

If companies don’t clearly communicate the privacy and security benefits of their devices, they risk having their innovative devices thought of as a “nice to have,” rather than a “need to have.”

Closing the Gap

Generative AI technology has rapidly transformed our lives and will continue to do so. As companies continue to innovate, it’s important to not let the “revolution” overshadow what the market is saying it wants.

Closing the Engagement Gap means creating technology that truly resonates, focusing on security and privacy, aligning innovations with consumer needs, and delivering products that are easier to use. In doing that work, we need to communicate the benefits without leaning on jargon and hype. The consumer wants to know how technology can enhance their daily lives.

It’s time to go back to basics and design and market technology for consumers, not at them.

References

1. https://www.accenture.com/us-en/case-studies/software-platforms/uber-new-era-advertising

2. https://www.accenture.com/us-en/insights/consulting/empowered-consumer

The most demanding applications require the world’s most reliable components. For over 50 years PICO Electronics has been providing innovative COTS and custom solutions for Military, Commercial, Aerospace and Industrial applications. Our innovative miniature and sub-miniature components are unsurpassed in any industry. PICO Electronics’ products are proudly manufactured in the USA and are AS9100D Certified.

To learn more about our products and how you can benefit from our expertise visit our website at picoelectronics.com or call us today at 800-431-1064.

TRANSFORMERS & INDUCTORS

Think Pico Small – Over 5000 std Ultra Miniature

• Ultra Miniature Designs
• MIL-PRF 27/MIL-PRF 21038

• DSCC Approved Manufacturing

• Audio/Pulse/Power/EMI Multiplex Models Available

• For Critical Applications, Pico Continues to Be the Industry Standard

DC-DC CONVERTERS

2V to 10,000 VDC Outputs — 1-300 Watt Modules

• MIL/COTS/Industrial Models

• Regulated/Isolated/Adjustable Programmable Standard Models

• New High Input Voltages to 900VDC

• AS9100D Facility/US Manufactured

• Military Upgrades and Custom Modules Available

How Microsoft Is Optimizing NLP Models With Dynamic Few-Shot Techniques

Natural language processing (NLP) is a powerful artificial intelligence (AI) application. It supports next-generation chatbots like ChatGPT, making advanced machine-learning capabilities accessible to the general public. However, training NLP models can be challenging.

Training an AI algorithm to be versatile enough for real-world use can take a lot of time and data. A technique called few-shot prompting offers a better solution, and Microsoft has recently unveiled a way to improve few-shot methods even further.

What Is Few-Shot Prompting?

Few-shot learning provides an AI model with examples, or “shots,” of an optimal output before asking it to provide its own. By including a small number of labeled data points or ideal answer formats, data scientists help the algorithm learn faster than it otherwise would. As a result, the technique consistently leads to higher accuracy across nearly all task types.

In addition to boosting accuracy, few-shot prompting makes models versatile. Because they learn to apply the examples to new tasks, they gain the ability to do the same in a broader sense, using existing knowledge to solve new problems. The method also reduces the amount of data necessary for training.

Despite these benefits, few-shot learning has some shortcomings. While accuracy tends to improve with additional examples, including too many shots leads to large prompts. When these prompts get too big, training slows down, and meta-learning (the model’s ability to apply what it learns to other scenarios) decreases.

Microsoft’s Approach to Optimal Few-Shot Prompting

Microsoft recently unveiled a dynamic few-shot prompting method to address these issues. This new approach provides the model with a database containing a vast number of examples. Whenever a user asks a question, the algorithm compares it to this store to pull the most relevant shots itself and applies them to its answer.

Creating such a database may take time, but it streamlines training and usage processes down the line. Instead of users having to provide multiple examples, the AI solution will find which of its existing shots best fits the scenario. As a result, there are no more lengthy, complex prompts to deal with, but accuracy and meta-learning potential remain high.
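As a rough illustration of the idea (this is not Microsoft’s implementation; the three-dimensional vectors below are toy stand-ins for the embeddings a real model would produce), a dynamic few-shot selector simply ranks the stored examples by similarity to the query and keeps the best few:

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    // The example database: (labeled shot, toy embedding).
    let shots = [
        ("Q: How do I reset the device? A: Hold power for 10 s.", [0.9_f32, 0.1, 0.0]),
        ("Q: How do I export a report? A: Use File > Export.", [0.1, 0.9, 0.0]),
        ("Q: How do I pair over Bluetooth? A: Enable pairing mode.", [0.8, 0.2, 0.1]),
    ];

    // Embedding of the incoming user question (also made up).
    let query = [0.85_f32, 0.15, 0.05];

    // Rank all stored shots by similarity to the query...
    let mut ranked: Vec<_> = shots
        .iter()
        .map(|(text, emb)| (cosine(&query, emb), *text))
        .collect();
    ranked.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());

    // ...and splice only the most relevant two into the prompt,
    // keeping it short while preserving accuracy.
    let selected: Vec<&str> = ranked.iter().take(2).map(|(_, t)| *t).collect();
    println!("shots selected for the prompt: {selected:#?}");
}
```

In production, the store would hold thousands of examples behind a vector index rather than a three-entry array, but the selection step is the same.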

On top of making the model accurate and efficient, dynamic few-shot prompting can reduce costs. The more data within the prompt an algorithm must analyze, the higher the processing expenses will be. Considering 63% of executives today cite costs as their largest barrier to generative AI adoption, reducing that figure through simpler prompts is a highly beneficial strategy.

Applications of Dynamic Few-Shot NLP Models

Dynamic few-shot prompting is most advantageous in situations where a model needs to complete multiple kinds of tasks. Some of AI’s biggest use cases today fall under that umbrella.

Business Intelligence

Business intelligence (BI), which often involves complex reporting, can gain much from optimized few-shot learning. While 81% of businesses trust their AI outputs, reliance on inaccurate or underperforming models leads to an average of $406 million in annual losses.

Because dynamic few-shot prompting improves AI accuracy and versatility, it prevents such outcomes. Leaders can ask it to analyze performance, summarize reports or chart future growth, and the model will adapt to each task without confusing examples between them. BI solutions become easier to use and trust for various purposes as a result.

Education

These algorithms’ flexibility also lends itself to personalization, which is particularly important in education. Almost one-third of students drop out before their sophomore year in a conventional, one-size-fits-all environment. Tailoring materials to individual students enables better learning outcomes, and this requires adaptable AI models.

Few-shot learning helps AI solutions adapt to different students’ needs. Higher meta-learning ability means the technology does a better job of applying past information to new scenarios, making it an ideal fit for a hyper-personalized environment.

Customer Support

Customer support, which 64% of modern businesses believe AI will improve, also benefits from dynamic few-shotting. Like in education, different customer service users have varying needs. Consequently, chatbots must be able to handle a wide range of queries and tasks.

Conventional training may result in chatbots that misunderstand prompts or apply the wrong example to the situation. The optimized few-shot approach resolves this problem by taking the burden of accurate prompting off the user. The chatbot itself will determine which shots best fit the individual situation, leading to higher satisfaction and ease of use.

Better Prompting Leads to Better AI Outcomes

Over-reliance on AI is a common issue. Organizations can avoid it by ensuring their NLP solutions better apply smaller datasets to a larger range of scenarios.

Dynamic few-shot prompting provides the flexibility and accuracy necessary to achieve AI’s full potential. As more businesses implement the practice in their AI training workflows, they’ll be able to capitalize on the technology’s benefits.

What to Know When Selecting a Hardware Platform for Your Next Project

Selecting the right hardware platform for your software project can feel overwhelming. Whether you’re developing a simple embedded application or a complex, resource-heavy system, the hardware you choose plays a crucial role in determining your product’s performance, features, and scalability. While many factors must be considered, the impact of these decisions often manifests in subtle ways that may not be immediately obvious.

Here’s a look at key considerations and how your choices can influence the behavior of your software.

The Core of Your Board: The System on Chip (SoC)

Unlike traditional desktop PCs, which feature modular components like the CPU, RAM, and storage spread across different parts of the motherboard, embedded systems generally use a System on Chip (SoC). An SoC combines essential components such as the CPU, GPU, and peripherals into a single chip.

Features of SoCs

SoCs can include various integrated features depending on the specific model. For instance, they may come with a Graphics Processing Unit (GPU), machine learning capabilities, video encoders/decoders (e.g., H.264, H.265), and peripheral interfaces like USB, UART/serial ports, Ethernet, Wi-Fi, Bluetooth, and even camera connections. The SoC may also include memory controllers, security modules (such as Trusted Platform Modules), watchdog timers, pulse-width modulation (PWM) controllers, and other specialized components.

While the variety of SoCs is vast, each one is unique, offering different combinations of capabilities and configurations. The SoC’s design will significantly impact your project’s final form and performance, so selecting the right one requires understanding both its specifications and the external components it will need to work with.

External Components: RAM and Storage

Although SoCs include many integrated features, they still require external components like RAM and non-volatile memory (e.g., eMMC or NAND). The selection of these components will depend on your application’s memory requirements and the SoC’s specifications. Understanding the interplay between the SoC and these external elements is crucial for ensuring optimal performance.

Pin Multiplexing: Unlocking the Full Potential of Your SoC

Another key consideration when selecting an SoC is its pin configuration. SoCs typically come with a set of pins that can be assigned to different functions. However, due to the limited number of pins, many SoCs rely on pin multiplexing, which allows a single physical pin to be used for multiple functions.

For example, a pin could serve as either a UART (serial port) or an I²C interface, depending on the configuration.

While this adds flexibility, it also introduces complexity. You may need to trade off some features to access others. For example, if you choose to use multiple UART interfaces, you might lose access to other peripherals like I²C. This is a critical consideration when designing embedded systems and choosing the right SoC.
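To see what the trade-off looks like in software, here is a hedged sketch in Rust; the 2-bit field layout and function codes are invented for illustration, since every SoC’s reference manual defines its own pad-mux registers:

```rust
// Invented mux-register layout: each pin owns a 2-bit function field.
#[derive(Clone, Copy)]
enum PinFunction {
    Gpio = 0b00,
    Uart = 0b01,
    I2c = 0b10,
    Pwm = 0b11,
}

/// Select a function for `pin` by rewriting its field in the register.
/// On real hardware this would be a volatile write to a documented
/// register address; here a plain `u32` stands in for that register.
fn select_function(mux_reg: &mut u32, pin: u32, func: PinFunction) {
    let shift = pin * 2;
    *mux_reg &= !(0b11 << shift); // clear the pin's current function
    *mux_reg |= (func as u32) << shift; // route the pin to the new one
}

fn main() {
    let mut mux_reg: u32 = 0;

    // Routing pin 3 to the UART means that same pin cannot simultaneously
    // serve as I2C: one physical pin, one function at a time.
    select_function(&mut mux_reg, 3, PinFunction::Uart);
    println!("mux register: {mux_reg:#034b}");
}
```

The same either/or constraint is what forces the design-time decisions described above.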

The SoC Landscape: Exploring NXP’s i.MX Series

To give you a sense of the variety in SoC options, let’s look at one popular vendor: NXP. NXP’s i.MX series of ARM processors offers several families, each designed for specific use cases: automotive, edge computing, and vision.

For instance, within the i.MX 8 series, you’ll find multiple variations that cater to different needs. If you’re building an industrial application with minimal user interface, the i.MX 8M Mini might be the best fit. On the other hand, if you need powerful machine learning capabilities and a robust GPU, the i.MX 8M Plus could be the right choice. Each variant of the i.MX 8M Plus has its own unique features, with differences in packaging and internal capabilities.

This example demonstrates the vast range of SoC options available, emphasizing the importance of carefully considering the features and configurations you need before making a decision.

Additional Components to Consider

Once you’ve selected the right SoC, you’ll need to choose several other critical components to complete your embedded system. Here are some key considerations:

› RAM: Choose RAM that meets your performance requirements and works seamlessly with your SoC.

› Non-Volatile Memory: While eMMC is common, other memory types might be more suitable for your project, depending on your needs.

› Oscillator: Ensuring your system runs at the correct speed and stability is essential for reliable performance.

› PCB Design: The layout of your Printed Circuit Board (PCB) plays a crucial role in connecting all these components. For instance, integrating complex components like video displays or RAM requires careful design to ensure signals are routed correctly.

In addition to these, your device might also need components like:

› Ethernet PHY: For network connectivity

› CAN Transceiver: If you plan to use Controller Area Network (CAN) communication

› UART to Serial Converter: For serial communication, if required

One more thing: SoCs typically operate at specific voltage levels (e.g., 1.8V, 3.3V, or 5V), while your peripherals may require different voltage levels. As such, additional circuitry for voltage level shifting may be necessary to ensure compatibility between the SoC and peripherals. This is an important part of the integration puzzle.

The Software Stack: Bootloaders, Kernels, and OS

Once your hardware is in place, you’ll need to address the software layer, which starts with the bootloader. The bootloader’s job is to initialize the system’s basic components (like the CPU and RAM) and load the operating system. It also manages device initialization through a configuration file known as the device tree, which describes the peripherals and their configurations.

AI in Consumer Technology

Whether they know it or not, consumers have been subjected to various levels of AI for years, a lot longer in some cases. As the processors get more refined and Edge AI can be deployed at the end point, even wearable devices can adapt to their humans.

In this session, we will look at some very low-power devices, how Edge AI is revolutionizing the design of new consumer products, and how Infineon is spearheading the effort to deploy AI-enabled microcontrollers at the edge with its new PSOC™ Edge ML-enabled microcontroller family.

Watch On Demand: https://resources.embeddedcomputing.com/ Embedded-Computing-Design/AI-in-Consumer-Technology

After the bootloader finishes its task, the system hands over control to the Linux kernel, which is responsible for managing the hardware and providing an interface for the application. The kernel also uses a device tree to identify and control the peripherals connected to the system.

But the kernel alone is not enough. You’ll also need all the applications you’d expect on a typical Linux setup, normally called “userland.” Together, this software stack makes up the Linux distribution (distro) that forms the foundation of your system.

Custom Design vs. System on Module (SoM)

When embarking on an embedded project, you might consider designing your own custom PCB from scratch. While this is possible, it’s often more practical and cost-effective to use a System on Module (SoM). SoMs are pre-assembled, ready-to-use solutions that integrate an SoC, RAM, non-volatile memory, and other essential components on a single board. They typically come with a carrier board to handle power supply and additional connections.

Benefits of Using SoMs

› Pre-integrated, Maintained Software: SoM vendors handle much of the software integration, providing up-to-date operating system stacks and keeping the software secure and stable.

› Development Kits and Schematics: SoM vendors often offer development kits with carrier boards and schematics, helping you get started quickly.

› Offloading the Integration Effort: SoMs take care of much of the hardware and software integration, reducing the risk and complexity of your project.

› Scaling Up Production: SoMs are great for projects with smaller production volumes (under 5,000 units per year), where custom hardware development may not be cost-effective.

› Limitations of SoMs: While SoMs offer many advantages, they may not be suitable for large-scale production or highly specialized projects. If your project requires a custom hardware configuration, designing your own PCB might be necessary.

Choosing the Right Hardware Platform is Essential to Project Success

The choice of an embedded hardware platform can significantly affect your project’s success. By carefully considering factors such as the SoC, peripherals, pin multiplexing, and external components like RAM and storage, you can ensure your hardware meets the performance and functionality requirements of your application.

When in doubt, leveraging a SoM can save you considerable time and effort, particularly in terms of software integration and development. Ultimately, your goal is to choose a platform that abstracts away hardware complexities so that your application developers can focus on creating innovative solutions.

SPARK AND RUST FOR CRITICAL EMBEDDED SYSTEMS: A Conversation with José Ruiz

As the demand for robust, verifiable, and high-performance software grows, AdaCore remains at the forefront of innovation, providing developers with the tools they need to build reliable embedded systems. José Ruiz, Product Manager at AdaCore, discusses the importance of SPARK and Rust for safety- and security-critical embedded systems.

Q: Why are safety and security so critical in embedded systems today?

A: Embedded systems are increasingly used in applications where failures can have serious consequences, whether in aerospace, automotive, medical devices, or industrial automation. As these systems become more complex and interconnected, ensuring their safety and security is paramount. A single vulnerability or failure can lead to catastrophic results, which is why robust software development practices are essential.

Q: How do SPARK and Rust help address these challenges?

A: SPARK and Rust provide strong guarantees that help developers avoid common programming errors. SPARK is a language with roots in Ada. It enables developers to mathematically prove the properties of their code via formal verification, eliminating entire classes of runtime errors such as buffer overflows and data races. On the other hand, Rust brings modern memory safety features through its ownership model, making it easier to write safe, concurrent systems without relying on garbage collection. Together, these languages offer strong foundations for developing high-assurance embedded software.

Q: When should a development team choose SPARK over Rust, or vice versa?

A: The choice depends on the project’s requirements. SPARK is the best fit for systems requiring strong formal verification and high-integrity assurance, such as avionics or railway control systems. Rust is well-suited for applications where memory safety is a primary concern, and developers want a modern systems programming language with strong tooling. Both languages can be used together for mixed-criticality projects, leveraging their respective strengths.

Q: What key trends are shaping the future of safety-critical embedded software?

A: Several key trends are influencing the future of safetycritical embedded software development:

1. Growing Demand for Formal Verification – As software complexity increases, traditional testing methods can no longer guarantee safety and security. There is a rising interest in formal methods, such as those enabled by SPARK, which allow developers to mathematically prove the absence of critical errors like buffer overflows, data races, and unhandled exceptions.

2. Increased Adoption of Rust – Rust is gaining traction in the embedded systems space due to its strong memory safety guarantees without the need for garbage collection. Its ownership model helps prevent common vulnerabilities such as null pointer dereferences and data races, making it an attractive choice for security-critical applications.

3. Increasing need for concurrent real-time paradigms – With the rise of multicore processors and the growing complexity of safety-critical software, industries are turning to architectures that handle multiple tasks concurrently while meeting strict timing constraints. Programming languages like Ada and SPARK, with built-in real-time concurrency support, are gaining traction for their ability to manage this complexity, statically analyze systems, eliminate race conditions, and ensure the correctness of task interactions.

4. Stronger Focus on Supply Chain Security – As embedded systems become more interconnected, the security of third-party components, compilers, and libraries is a growing concern. Organizations are placing greater emphasis on using verifiable, open-source toolchains and ensuring software provenance to mitigate risks from compromised dependencies.

5. Tighter Regulatory and Certification Requirements – Compliance with standards like DO-178C, ISO 26262, and IEC 62304 drives demand for tools and languages that provide high-assurance software development practices.

6. Rise of Mixed-Criticality Systems – The convergence of different criticality levels within a single system (e.g., infotainment and safety functions in automotive software) pushes developers to adopt architectures and tools that allow safe partitioning and rigorous verification. SPARK and Rust both play a role in ensuring that critical components remain isolated and robust.

The Shine on Rust

Although C and C++ continue to dominate embedded systems, the relatively new language, Rust, is showing promise as a language that provides much of the flexibility of older languages but with stronger guarantees of safe operation.

Elements of Rust draw on functional languages and other advanced concepts. But an important factor in the recent growth of Rust support is that Rust overcomes many of the memory-related problems encountered by C and C++ programmers and their users. These benefits are helping Rust gain recognition as a good choice for developing new software modules for critical systems.

Thanks to its emphasis on reliability and memory safety, several large software companies are already major users of Rust.

Risky Business

A significant difference between C/C++ and Rust lies in their treatment of pointers, which are a low-overhead way to manipulate data in memory. However, pointers can be easily created and modified by a C or C++ program, making them risky to use.

Rust users can avoid this and other memory-safety issues by taking advantage of its strict rules and built-in support for memory allocation. The memory model supported by Rust also ensures that temporary memory structures will be safely deleted once they are no longer required by the program. Importantly for real-time systems, there is no need to run a garbage-collection process in the background.
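A small sketch shows what those rules look like in practice; the commented-out line is exactly the kind of access the compiler rejects outright, with no runtime machinery required:

```rust
fn consume(m: String) {
    println!("{m}");
} // `m` goes out of scope here; its heap buffer is freed deterministically

fn sum(bytes: &[u8]) -> u32 {
    bytes.iter().map(|&b| u32::from(b)).sum()
}

fn main() {
    let message = String::from("sensor reading: 42");

    // Ownership of `message` moves into `consume`; no garbage collector
    // is needed to reclaim the memory when the function returns.
    consume(message);

    // println!("{message}"); // rejected at compile time: use after move

    // Borrowing lets code read data without taking ownership.
    let frame = vec![0u8, 1, 2, 3];
    let checksum = sum(&frame);
    println!("checksum = {checksum}, frame still usable: {frame:?}");
} // `frame` is dropped here, automatically and safely
```

Nothing in this program can dangle or leak, and none of the safety costs anything at runtime.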

The Need for Legacy

However, safety-critical development needs to be conservative. It is impractical and even undesirable to rewrite modules in a new language, even if its protection mechanisms offer significant advantages over legacy C or C++. Existing modules need to be verified once integrated into a target that includes large portions written in Rust or a similarly memory-safe language.

Caught in the Net

Errors caught by Rust at runtime can terminate a program completely, which is unacceptable in critical systems. Static tools can determine the likelihood of such an occurrence and warn the development team to fix any problems before product release.

There are also behaviors that are defined but not desired. For instance, some errors can trigger a “panic” state that can lead a system to crash. Detecting such unwanted behaviors is a key role for an advanced static analyzer for Rust.
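As a toy illustration: indexing a slice out of bounds is perfectly defined behavior in Rust, yet it panics, which on an unattended device can mean a crash. The checked accessor turns the same failure into a value the caller is forced to handle:

```rust
fn main() {
    let samples = [10u16, 20, 30];
    let requested = 7; // imagine an index computed from untrusted input

    // let s = samples[requested]; // defined, but panics at runtime:
    // "index out of bounds" would terminate the program.

    // `get` returns an Option instead of panicking, so the error path
    // is explicit and the system stays in control.
    match samples.get(requested) {
        Some(value) => println!("sample = {value}"),
        None => println!("index {requested} out of range; using fallback"),
    }
}
```

A static analyzer’s job is to find the places where the panicking form slipped through.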

Such tools can provide formal, verifiable proof of the absence of memory-safety vulnerabilities, both in mixed legacy C/C++ and in newly written Rust code that could cause safety-critical software to behave unpredictably and dangerously. To prevent developers from being overwhelmed with potential errors, some tools are designed to limit the number of false positives and only point to code that is likely to suffer from memory-safety issues.

As Rust continues to make further inroads into high-criticality systems development, there will always be a need to verify that external code modules and low-level functions do not have latent issues that will disrupt operations in the field. Using additional static testing and verification ensures that developers will catch and fix undefined and undesired behaviors early in the integration cycle, and long before deployment.

WHY EMBEDDED IS BROKEN (AND HOW TO FIX IT)

Complexity is the enemy of innovation. I see this playing out every day in the embedded firmware industry. Consider this: the gap between software complexity and productivity – one measure of how effective innovation is – has widened considerably. That means in the fastest-growing industries, like automotive, medical devices, and aerospace, embedded teams struggle to keep pace with the modern system needs of today’s connected environments.

Embedded systems development must be modernized to keep pace with a rapidly changing marketplace, tight product deadlines, and infinitely more complex software. From my perspective, this is not only possible, but there are tried-and-true processes we can implement to achieve the goal of modernization.

The Old Paradigm: Build it. Test it. Ship it.

I see firmware teams fighting a losing battle, and I hear the same refrains:

› “It works on my machine! Why won’t it work on yours?”

› “How am I going to reproduce a build from three months ago?”

› “I know the quality systems we should have, but we don’t have the time to build them.”

› “Bugs? But I ran the tests and it passed!”

› “Why are we so slow to get features out?”

These challenges aren’t just frustrating – they’re also expensive.

The old paradigm – when left unchecked – results in time-consuming manual development processes with skipped steps, slow release schedules, distracted developers, unfocused testing, and finding bugs late in the process. At the end of the day, it’s an ineffective process that weighs down the entire product development lifecycle.

Modernizing Embedded Firmware Development

What do we mean when we talk about modernizing embedded development? For me, it goes beyond simple continuous integration (CI) principles, though those are an important aspect of modern firmware development.

Automation

One critical piece of modernization is automation, ensuring that every build, test, and deployment is standardized and consistent throughout the pipeline, with predictable and reproducible outcomes.

By using standardized, automated tools rather than bespoke processes, embedded developers can achieve better traceability than would be possible using manual tools. The result? Better quality builds and a clear path to find and fix defects, track code commits and artifacts, or roll back to a known good state when necessary.

Optimized Toolchains

Choosing the right toolchain can have a significant impact on development efficiency and build quality. I am a big believer in containerized toolchains, which can be easily shared and version-controlled. Plus, pre-built containerized toolchains for different chips can reduce setup time and complexity for new projects.

Containerized build environments ensure that every developer and pipeline uses the same tools and configurations. This approach guarantees reproducibility of builds, reduces onboarding costs for new team members, and allows for synchronized tool updates across the entire team.
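As a rough sketch of that idea (the image name and tag below are hypothetical placeholders), pinning the toolchain image and funneling every build through it gives each developer and CI runner the same compiler, flags, and output:

```rust
use std::process::Command;

fn main() {
    // Hypothetical toolchain image, pinned to an exact tag so every
    // developer and CI runner builds with identical tools.
    let toolchain = "example.com/firmware-toolchain:1.4.2";

    // Bind-mount the source tree into the container and build inside it,
    // so nothing from the host's own toolchain leaks into the artifact.
    let src = std::env::current_dir().expect("no working directory");
    let mount = format!("{}:/src", src.display());

    let status = Command::new("docker")
        .args(["run", "--rm", "-v", mount.as_str(), "-w", "/src", toolchain, "make", "all"])
        .status()
        .expect("failed to launch the docker CLI");

    assert!(status.success(), "containerized build failed");
}
```

Swap the tag and the whole team moves to the new toolchain in one commit; check out an old revision and yesterday’s build is reproducible again.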

Get Your Next Embedded Project Done on Time

I started Dojo Five to drive the firmware industry forward through modern firmware – the tools, techniques, culture, and technologies we believe support a joyful experience when envisioning, developing, and using embedded devices.

If your team is stuck, we can help with your embedded firmware development project.

Open Source and Open Standard Synergy: The Tools for Network Innovation

Today, the ‘connected home’ is no longer a visionary pipe dream but a reality, with smart devices quickly becoming staples in our homes and making our lives simpler (hopefully). Standardization, well-defined Application Programming Interfaces (APIs), and seamless interoperability between different platforms and applications are the key pillars to achieving ubiquitous connectivity in the home.

Broadband Service Providers (BSPs) have the considerable task of upgrading and transforming their networks to cater to and provide a seamless experience for all users in the connected home. BSPs must enhance their management of end-user experiences, effectively measure and monetize new opportunities, and deliver services that extend beyond a connection point.

For the broadband network to thrive in the ever-evolving connected home landscape, industry-wide adoption of these capabilities is crucial. The key to this lies in the synergy of open source and open standards.

Protecting Existing Investment

Network transformation is no easy feat. While cloud technologies are introducing advanced automation opportunities for business operations, protecting existing network investment remains a key consideration for BSPs. Due diligence is needed to ensure that the network transformation facilitates seamless coexistence and interoperability with existing platforms and equipment, as well as with virtualized, disaggregated systems and future technologies.

Cloud technologies, such as software-defined networking (SDN) and network function virtualization (NFV), can aid in the process, providing flexibility based on user demand and allowing for the better allocation of resources. This avoids a complete ‘rip and replace’ strategy when BSPs are operating or upgrading their networks.

Equally, adopting automation will help streamline the management and operation of the network to truly power transformation. Driving a more agile and scalable infrastructure, automation will enable the faster deployment of services and greater responsiveness to customer demands.

Open source software is a crucial component for all of this as it provides a blueprint for BSPs to help implement and adopt these practices and achieve the objectives set out. But of course, that is not the only factor to consider.

Providing the Blueprint

When it comes to upgrading and future-proofing BSP networks, open source software cannot act alone. The adoption of open standards helps to define lasting, normative descriptions and the requirements of the systems, interfaces, or APIs needed.

Standards lay down a common and uniform set of rules and design principles for companies across the broadband ecosystem, large and small, to adhere to. They are the backbone that underpin giant leaps forward in innovation and the growth of many industry verticals. In the computing world, standards continue to play a crucial, albeit often understated, role. From the language used to create websites, to the most used format for playing audio files, they have all been developed using standards. As new technologies are constantly introduced, it is imperative that standards are updated and refined to remain relevant for companies in the modern age.

Not only do they help BSPs better manage connected devices, but open standards promise an implementation path and a best-of-breed deployment strategy. They help align the industry on common architecture and future migration approaches.

Similarly, open source software is no new phenomenon. Back in the late 1990s, the concept of creating software through open collaboration emerged. This culminated in the development of a variety of systems from web browsers to operating systems. It was viewed as a vehicle to accelerate development times and deliver a blueprint for the industry to follow.

Open source promises improved flexibility and innovation, while open standards bring greater efficiencies, discipline and global scale. By marrying those complementary technologies together, we will see cost-effective development approaches, sustained network transformations and the universal delivery of future broadband access technologies and services.

The Power of Open Source

Open source software provides device and equipment vendors with a code base that they can integrate into their devices or use as a reference implementation ahead of their own deployments. It gives software developers a “head start” on which to base their implementations. Vendors can use the published specification and existing standardized data models and integrate them within their existing systems, in turn enabling a faster time-to-market for their own solutions.

These open source implementations can also provide early, invaluable feedback to the standardization processes. An early open source implementation may uncover areas within a specification that require additional detail, changes to ease implementation, or adaptations to promote interoperability between implementations.

Open source delivers the foundations and can be the baseline platform for more services and applications, with updates to the code available as the implementation matures. Participating and collaborating in an open source community is a much more attractive proposition compared to costly development associated with completely proprietary or closed source solutions. Those involved can access the community developed and tested source code and be part of the development process.

Facilitating Innovation, Interoperability, and Integration Testing

In many cases where an innovative solution, such as Artificial Intelligence (AI) or energy efficiency, is being developed or researched, the activities are often enabled by access to open source. For example, if developers want to add AI to a network function as part of a research and development project, they can save time by using pre-built code rather than building a new network function from scratch. If the research and development prove out the concept being studied, researchers can focus on the AI topics and their innovation, even if the final intention is to integrate the new code or approach into a commercial network function.

Alongside this, BSPs are also identifying new tactics to streamline projects from development and test stages to real-world deployments. They can embrace principles from modern development and operations (DevOps) approaches for cohesive integration. Open source initiatives – such as Broadband Forum’s Open Broadband projects – deliver open source reference implementations and development kits to support greater network scalability.

Collaborative efforts bring many benefits, such as early trials and adoption. This has been demonstrated by collaboration between open broadband labs – such as the University of New Hampshire InterOperability Laboratory [1] – BSPs, and vendors, enabling rapid prototyping and demonstrations (e.g., Broadband Innovation Demos).

But there is still more work to be done industry-wide. As open source implementations are being created for more components of the network, including Linux Foundation Broadband [2], OpenDaylight [3], and Broadband Forum’s OB-BAA, OB-CAS, and OB-STEER projects, there is an increasing need for those individual projects to collaborate and perform integration testing.

That is a challenge the industry has not yet solved, largely because no model has yet been created that effectively resources those cross-project, cross-organization activities. So, while the individual projects are very good at testing their own work, they are not yet fully integrating or interoperating with other open source projects, or even with commercial implementations.

At the Broadband Forum, we see this even in requests from members for help with interoperability and integration testing between components such as Linux Foundation Broadband and SDN controllers for the TR-413 interfaces and YANG implementations.

Powering a Connected Future

By unifying the best of open standards with the latest software developments, BSPs have the tools to deliver a ubiquitous connected home experience. They can empower migration to automated networks with open management and control systems, based on well-documented, open, and tested API definitions.

It is critical that the whole broadband ecosystem continues collaborating to develop open standards that address the global and varying needs of BSPs.

Standards development organizations (SDOs) and open source communities must come together to share ideas and develop agile approaches to support future requirements of the connected home network. SDO collaboration brings many benefits, as evidenced by the prpl Foundation and the Broadband Forum, which are setting the benchmark for open source software stacks that inherently recognize, and even embed, industry standards in their approach.

By embracing the combined sharing and collaboration that open source principles promise, alongside the efficiency and interoperability of open standards, BSPs have a clear migration pathway and turnkey solutions to deliver an improved user experience.

References:
1. https://www.iol.unh.edu/
2. https://lfbroadband.org/
3. https://www.opendaylight.org/

With the European Union’s (EU) adoption of the Cyber Resilience Act (CRA), the 36-month clock for full compliance with the regulation has begun. Any company looking to sell a product within the EU must meet strict security-oriented design and reporting requirements or face steep fines for non-compliance. Significant engineering effort will be required to bring products into compliance: not only the work needed to meet a product’s security requirements, but also the work needed to meet its reporting obligations – notifying the appropriate agencies, customers, and providers; publishing corrective measures to customers; and creating and distributing updates.

https://embeddedcomputing.com/application/industrial/industrial-networking-connectivity/secure-your-platform-for-the-cyber-resilience-act

Check out our white papers at www.embedded-computing.com/white-paper-library

EDGE AI’S TIPPING POINT: Why Market Maturity Changes Everything

The era of edge AI is having a profound impact on embedded systems, and as analysts predict market maturity within one to two years, we’re witnessing a massive shift from experimental deployment to mission-critical implementations. With the global edge AI market poised for explosive growth, developers, product managers, and business executives find themselves at the forefront of this technology that is redefining what’s possible in industrial automation, healthcare devices, smart infrastructure, and beyond. The question is no longer whether to implement edge AI, but how to do so efficiently, profitably, and at scale.

In the race to bring AI to the edge, however, flexibility, speed, and efficiency are critical. As machine learning (ML) at the edge unlocks massive value across all industries, millions of developers will unleash data to make billions of devices smarter. And they need help to make that mission a reality: a platform that can seamlessly adapt to any hardware target while eliminating traditional bottlenecks in the ML pipeline.

At Edge Impulse, the premier platform for edge AI developers, we’re proud of the fact that we deliver the ultimate developer experience for embedded machine learning solutions at scale by offering an end-to-end, streamlined path to deployment. Our focus has always been on removing barriers between great ideas and successful deployments. When you empower developers with the right tools, innovation follows naturally.

Market Evolution

Organizations across virtually every industry recognize edge AI’s versatility and transformative power to improve operational efficiency and create new revenue streams as they move beyond proofs of concept to full-scale deployments. This rapid growth is also fueled by advances in hardware capabilities, more efficient AI models, and growing demand for real-time processing and data privacy.

The market numbers tell a compelling story. Some analysts project that the edge AI software market alone will reach $13.67 billion by 2032, growing at a CAGR of 29.58%.1 With adoption rates accelerating, the growth isn’t just about the tech; it’s about solving real business problems, gaining a competitive advantage, and growing revenue streams.

Industry Transformation

In manufacturing, we’re seeing AI-enabled predictive maintenance reduce downtime by up to 40%, while smart quality control can catch defects in real time. Production lines equipped with edge AI can adapt to changing conditions instantly, optimizing output and energy efficiency.

Healthcare is experiencing a similar revolution. Edge AI-powered medical devices are enabling continuous patient monitoring with fewer privacy concerns, while diagnostic tools help healthcare providers make faster, more accurate decisions. Remote patient monitoring systems extend care beyond hospital walls, improving patient outcomes while reducing costs.

Edge AI’s reach and adoption continue to accelerate beyond industrial and healthcare into sectors such as retail, transportation, agriculture, and smart cities, to name a few. These shifts share a common thread: improved efficiency, reduced costs, and enhanced decision-making.

Streamlining the Path from Data to Deployment

Rapid model development, testing, and optimization across the entire spectrum of edge devices – from resource-constrained microcontrollers to sophisticated neural accelerators – lets developers focus on innovation rather than infrastructure.

Modern solutions for performance prediction and model optimization take the guesswork out of field deployment, while automated data preparation and labeling capabilities slash development time. Additionally, a hardware-agnostic approach, combined with streamlined workflows, is transforming how teams bring AI to the edge, making previously complex deployments achievable and scalable.

Seizing the Edge AI Opportunity

As we stand at the edge AI tipping point, the convergence of market maturity, technological readiness, and pressing business needs creates an unprecedented opportunity. With adoption rates accelerating, the next two years will separate the winners from those left behind. The time for experimentation is over – organizations that move decisively now to implement edge AI will achieve lasting competitive advantages in this new era of distributed intelligence.

If you’re ready to accelerate your edge AI development, join our community of over 160,000 developers — start building smarter and faster today at studio.edgeimpulse.com/signup.

1. SNS Insider Strategy and Stats

How NIST SP800-193 Supports Resiliency

NIST SP800-193 “Platform Firmware Resiliency Guidelines” was designed to encourage and outline resiliency in the IoT.

The term “Internet of Things,” or IoT, emerged around two decades ago, during the boom of the Internet. At that time, experts envisioned that every system or appliance would be smart and have some internet connectivity serving advanced features. The term was coined mostly to differentiate stand-alone connected devices from more traditional computers and servers, where regular manual servicing was the norm. Back then, cybersecurity was considered only in the scope of computers and servers and was a rarely used or understood concept for other types of equipment. Usability and innovation were getting the focus.

As time passed, more and more types of equipment came online, including mobile devices, networking gear, industrial control systems, home appliances, and always-on, always-connected PCs. As there was no guidance, each vendor had its own way about security, or the lack thereof. With the rise of connectivity came cybersecurity risks: capable connected devices became targets for remote hackers seeking to exploit the capabilities of such devices for their own needs – whether simply to abuse the connected system, to use it as a vector in a more complicated attack, or to create bots that will one day unleash a large-scale cyber-attack.

As these connected devices evolve, the economy becomes more and more dependent on their functionality and operation. What used to be a novelty is now a major pillar of modern society: infrastructure, utilities, households, and government agencies all depend on the correct operation of connected equipment to maintain stable day-to-day living.

This led the US National Institute of Standards and Technology (NIST) to publish NIST SP800-193 “Platform Firmware Resiliency Guidelines” in May 2018. This publication targets every connected device that runs any sort of firmware, from large and complex servers to small embedded controllers. This publication and its principles are explored here.

NIST SP800-193 (abbreviated ‘193’ from here on) discusses the concept of “resiliency,” i.e., making a system or platform resistant to malfunction due to malicious attacks or spontaneous errors. Resiliency rests on three pillars:

› Protection is all about ensuring that platform code and data remain in a state of integrity. Integrity does not mean the code or data has not been modified, but rather that it is in a state that allows correct operation as required by the vendor, user, and infrastructure. For code, this means that a verified, trusted version of the code is running the system. For data, it means that data has only been modified by authorized entities and processes.

› Detection is the capability of the platform to identify corruption of code and critical data and raise an alert. Detection must be handled by a separate layer, since compromised code cannot be trusted to test itself or its relevant data.

› Recovery is the capability of the system to return to a correct working state in terms of code and critical data.

Recover from Attacks

When implementing protection, detection, and recovery, a system can be trusted to continue operation throughout cyber-attacks and various spontaneous errors (Figure 1).

SP800-193 only covers resiliency of firmware and critical data. It does not cover loss or corruption of other data, or any hardware or physical damage to the system.

Protection

Keeping the firmware in a state of integrity allows it to be trusted and used as a Root of Trust: the user and infrastructure can rely on the functionality of the firmware to correctly operate the system and to verify the authenticity and integrity of other parts of the firmware before executing them.

Cryptographic Write Protection

Cryptographic write protection allows the platform designer to protect sections of the flash device from erase or program operations. The only way to program or erase these sections is via a cryptographic authentication process that requires knowledge of cryptographic keys securely stored in an inaccessible part of the flash device. These keys are unknown to an attacker, as they are never present in cleartext in the system. They are also designed to be unique per system, preventing widespread compromise of systems should a single instance be attacked and broken.

FIGURE 1: The cybersecurity resilience cycle: Protect, Detect, and Recover
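To illustrate the concept – this is a generic sketch, not any specific flash vendor’s protocol, and every name in it is a placeholder – an authenticated erase could flow as follows, with the device recomputing the HMAC using its internally stored key and refusing the operation on a mismatch:

#include <cstdint>
#include <cstddef>
#include <cstring>

struct AuthCommand {
    uint8_t  opcode;      // hypothetical PROTECTED_ERASE opcode
    uint32_t address;     // start of the protected region
    uint8_t  nonce[16];   // fresh nonce issued by the flash device (anti-replay)
    uint8_t  mac[32];     // HMAC-SHA256 over opcode | address | nonce
};

// Placeholder primitives: a host-side crypto library and SPI driver.
extern void hmac_sha256(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len, uint8_t out[32]);
extern void spi_read_nonce(uint8_t nonce[16]);
extern bool spi_send_command(const AuthCommand &cmd);

// 'session_key' belongs to an authorized updater (e.g., derived through a
// secure element or provisioning flow); it is never stored in cleartext.
bool protected_erase(uint32_t address, const uint8_t session_key[32]) {
    AuthCommand cmd{};
    cmd.opcode  = 0xE5;            // illustrative value only
    cmd.address = address;
    spi_read_nonce(cmd.nonce);     // fresh per-command nonce from the device

    // Authenticate exactly the command the device will execute.
    uint8_t msg[1 + 4 + 16];
    msg[0] = cmd.opcode;
    std::memcpy(msg + 1, &cmd.address, 4);
    std::memcpy(msg + 5, cmd.nonce, 16);
    hmac_sha256(session_key, 32, msg, sizeof msg, cmd.mac);

    // The device verifies the MAC with its stored key before erasing.
    return spi_send_command(cmd);
}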

Authenticated Update Mechanism

SP800-193 requires that system firmware be updated in a timely manner to patch security vulnerabilities. ‘193 requires that the update be done via an authenticated update mechanism, ensuring that the update is authentic (true to source) and integral (complete and unmodified). Firmware updates must thus be provided with a signed digest, allowing the system to ensure that the update code is genuine, complete, and unmodified.
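A minimal sketch of such a check, assuming for illustration a SHA-256 digest and a hypothetical P-256 signature primitive (‘193 does not mandate particular algorithms):

#include <cstdint>
#include <cstddef>

extern void sha256(const uint8_t *data, size_t len, uint8_t digest[32]);
extern bool verify_p256(const uint8_t pubkey[64], const uint8_t digest[32],
                        const uint8_t signature[64]);
extern bool flash_write_staged_image(const uint8_t *image, size_t len);

// Vendor public key provisioned at manufacture into protected storage.
extern const uint8_t g_vendor_pubkey[64];

bool apply_firmware_update(const uint8_t *image, size_t len,
                           const uint8_t signature[64]) {
    uint8_t digest[32];
    sha256(image, len, digest);               // integrity: complete and unmodified
    if (!verify_p256(g_vendor_pubkey, digest, signature)) {
        return false;                         // authenticity: not true to source
    }
    return flash_write_staged_image(image, len);  // only then commit the update
}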

Detection

SP800-193 requires that the system detect unauthorized changes to firmware and critical data before the code is executed or the data is used. Upon detection, the system may initiate a recovery process.

Recovery

The recovery mechanism should restore the platform firmware and critical data to a valid and authorized state when they are detected to have been corrupted.
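Conceptually, detection and recovery combine into a short loop at boot. The sketch below assumes the check runs from an immutable Root of Trust and that a known-good “golden” image is kept in protected storage; all names are illustrative:

#include <cstdint>
#include <cstddef>

extern bool verify_image(const uint8_t *image, size_t len);  // signature check
extern void restore_golden_image(uint8_t *dst, size_t len);  // recovery source
extern void jump_to_firmware(const uint8_t *image);

void resilient_boot(uint8_t *active, size_t len) {
    // Detection: compromised code cannot be trusted to check itself,
    // so this runs in a separate, immutable layer before the firmware executes.
    if (!verify_image(active, len)) {
        restore_golden_image(active, len);  // Recovery: back to an authorized state
    }
    jump_to_firmware(active);
}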

Summary

NIST SP800-193 outlines the requirements a system must meet to be resilient to firmware and critical-data corruption, whether caused by malicious attack or malfunction. These requirements can only be fulfilled by a dedicated protection layer. Such a protection layer is rooted in immutable code and is very difficult to implement.

THE FUTURE OF EMBEDDED SECURITY?

SCI Semiconductor Delivers World’s 1st CHERI-based Microcontrollers

Welcome to the Future of Embedded Security

By 2027, it is estimated that the cost of global cybercrime will exceed $23 trillion USD (yes, trillion with a “T”) annually, equivalent to China’s annual GDP. While this is an astounding number, the bigger surprise is that we largely understand the technical issues that criminals use to impact systems, and have for over 50 years. Many reports, including analysis from Microsoft and Google, acknowledge that 70%+ of Common Vulnerabilities and Exposures (CVEs) are memory-safety oriented, and these are widely exploited to gain unauthorized access to systems, to escalate attacks, to steal credentials, and ultimately to hold large critical infrastructure to ransom.

A New Hope

SCI Semiconductor was launched by industry veterans from Arm and Microsoft to adapt CHERI (Capability Hardware Enhanced RISC Instructions) technology for a new era of embedded computing. Having traditionally focused on performance and low power, the industry is now entering a third epoch based on security requirements – both to support legislation and regulation and, critically, to embrace the fearless code reuse required to bring down development costs and ease the skills shortages impacting the industry.

At embedded world 2025, SCI Semiconductor is very pleased to demonstrate its new ICENI™ MCU family, utilizing the CHERIoT-Ibex RISC-V processor. The CHERIoT open source project is maintained by SCI alongside Microsoft and Google.

ICENI™ devices will be released across 2025 and are the first commercial CHERI-enabled devices available globally. They will ship with a variety of peripherals and are initially targeted at regulated industries where security and meeting legislative requirements are paramount: critical infrastructure markets including smart energy, aerospace, automotive, medical, and industrial applications, where inherent security regulation is coupled with a strong requirement to reduce the costs attributed to traditional formal methods. At embedded world, SCI will also highlight the flexible development platforms partners are already using to meet the challenges of memory-safe computing, resolving the confidentiality of systems plus high-integrity and availability requirements.

The “S” in IoT

The old joke that “the ‘S’ in IoT stands for security” is an unfortunate reality. Too many organizations consider security late, or never at all. Often security is perceived as a “necessary evil” rather than fulfilling consumer demand. The EU Cyber Resilience Act and RED legislation are challenging this assumption, but the industry still incorrectly views security as a costly insurance policy, because all code should “just be written correctly.” As an industry we know this is impossible, and yet we continue to falsely assume products ship with perfect code and zero flaws.

With ICENI™ devices, plus the underlying CHERI technology, it is now possible for security to transform into a business benefit that creates significant value along the supply chain for developers, integrators, and end users. The benefits can be attributed to five core concepts:

Software Supply Chain Benefits

› Fearless code reuse is enabled through robust fine-grained isolation, significantly lowering software development cycles and accelerating time to market

› Limiting the “blast radius” of any attack, providing a “run-flat tire” for applications

› Maintain system availability by automatically restarting compartments that are compromised

Cost Efficiency Benefits

› Simplified lower-cost development

› Fearless code reuse via simple recompilation; no lengthy code rewriting to support custom TEEs

› > 70% of bugs that would be vulnerabilities on other platforms mitigated, reducing the cost of the upgrade treadmill

Enterprise Mission Critical Benefits

› Enhanced integrity ensures exploits are trapped before data can be corrupted

› Improved availability ensures system stability isn’t impacted by compartment crashes

› Fine-granularity compartmentalization supports rapid and reliable restarts after an attack is detected

Application Security Benefits

› Multi-stage secure boot compartmentalization enables progressively reduced privileges for each stage, minimizing the attack surface, and reducing the costs associated with attacks

› Compartmentalization ensures that a compromised function cannot expose other software to attackers

› CVEs in third-party components are isolated within the system

Legislation & Regulation Benefits

› Delivering a standard hardware-enforced framework reduces the cost of meeting legislation

› Reduced application-specific security requirements lower costs and simplify solutions

› Simple purchasing requirements ensure organizations meet customer demands

SCI Semiconductor invites you to visit its booth (5-178) and the CHERI Alliance booth (5-160) at embedded world 2025 to learn more about ICENI™ and the emerging CHERI technology.

Haydn Povey is CEO of SCI Semiconductor. Previously, at Arm, he launched the Cortex-M family and subsequently owned security technology across the company.

CHERIoT & CHERI Resolving the Memory Safety Challenge

Memory safety is a property of computer systems that ensures programs only access memory locations they are permitted to, preventing unintended or malicious behaviour.

› Security: The majority of critical software vulnerabilities stem from memory safety violations.

› Reliability: Memory errors often lead to program crashes or unpredictable behaviour.

› Maintainability: Ensuring memory safety makes debugging easier and reduces technical debt.

› Compliance: Many industries now require memory-safe programming to meet regulatory standards.

Core Concepts of CHERI

CHERI is an advanced architectural extension designed to enhance memory safety and software security at the hardware level. Developed over more than a decade by the University of Cambridge, in collaboration with SRI International and funded by DARPA, it is now the leading solution for securing computing systems.

At its heart, CHERI introduces capabilities: hardware-enforced pointers that integrate bounds checking, permissions, and provenance to mitigate common security vulnerabilities. This fundamentally changes the way memory management and access control are handled at the processor level.

What are Capabilities?

› Base and Bounds: Pointers can only access a designated memory region, preventing buffer overflows.

› Permissions: Defines allowed operations (e.g., read, write, execute) to prevent unauthorized memory modifications.

› Sealing: Locks capabilities so they cannot be arbitrarily modified, preventing certain types of attacks.

› Tagging: Hardware tags detect and prevent unauthorized pointer manipulations.

› Unforgeable: Capabilities cannot be forged, making attacks such as stack smashing or return-oriented programming (ROP) significantly harder.

CHERI additionally provides fine-grained memory protection ensuring that every memory access is checked at the hardware level.
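To make the difference concrete, the sketch below shows a classic overflow as it behaves on a conventional machine versus a CHERI target; it is illustrative only, not vendor code:

#include <cstdio>

int main() {
    char secret[16] = "credential";
    char buf[8];

    for (int i = 0; i < 12; i++) {
        // On a conventional machine this out-of-bounds write silently
        // corrupts whatever sits next to 'buf' (possibly 'secret').
        // On a CHERI target, 'buf' is a capability bounded to 8 bytes,
        // so the write at i == 8 raises a hardware bounds fault instead.
        buf[i] = 'A';
    }

    std::printf("%s\n", secret);  // never reached on a CHERI target
    return 0;
}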

The CHERIoT Platform

The CHERIoT platform is a hardware/software co-design project and the smallest supported implementation of CHERI, optimized for small, low-power devices.

Key innovations in CHERIoT include:

› Secure, compartmentalized, real-time performance with low power consumption, suitable for a wide array of embedded and IoT applications.

› Efficient Capability-Bound Memory Protection for preventing common vulnerabilities such as buffer overflows, use-after-free, and privilege escalation.

› Hardware-Enforced Software Compartmentalization to securely isolate different system components, preventing one compromised module from affecting others.

› Privilege-separated RTOS with a Trusted Compute Base of only around 300 instructions.

› Compartmentalization Model designed for ease of use from higher-level languages.
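To give a flavor of that model, the sketch below loosely follows the conventions of the open source CHERIoT RTOS; treat the attribute and names as indicative rather than a verbatim API reference:

// parser.h: the only surface the 'parser' compartment exposes.
int __cheri_compartment("parser") parse_packet(const char *data, int len);

// parser.cc: if a bug here is exploited, the blast radius is this
// compartment; a fault can be detected and the compartment restarted
// without taking down the rest of the system.
int parse_packet(const char *data, int len) {
    // ... parse untrusted input, bounded by this compartment's capabilities ...
    return 0;
}

// caller.cc: a cross-compartment call reads like an ordinary function call,
// but the hardware switches capability state on the way in and out, so the
// caller can never touch the parser's memory (and vice versa).
void handle_rx(const char *data, int len) {
    if (parse_packet(data, len) != 0) {
        // handle the error; a compartment fault can unwind to here
    }
}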

Table 1: Comparison of ICENI security vs traditional models
Figure 1: Traditional Pointers replaced with powerful Capabilities

Edge AI Developer Platform

Edge Impulse’s platform empowers embedded engineers and ML teams to efficiently run AI at peak performance, enabling AI for any edge device: MCUs, NPUs, CPUs, GPUs, gateways with the latest sophisticated neural accelerators, sensors, and cameras. With Edge Impulse, developers can access a comprehensive MLOps workflow for building datasets, training models, optimizing features, and deploying models to any type of hardware.

Moving intelligence to the edge allows companies to create innovative, next-generation smart products. The Edge Impulse platform helps developers unlock sensor data, reduce bill of materials (BOM) costs, speed up time to market and commercialization, and de-risk model development with hardware-agnostic and scalable edge AI tools.

Power your ML workflow with Edge Impulse to:

• Fine-tune your models by analyzing on-device performance and leveraging state-of-the-art optimization techniques

• Test your models with real-world data to catch bottlenecks and make the right tradeoffs before deploying to your device

• Export trained models to an optimized C++ library built for interoperability with any edge device (see the sketch after this list)

• Deploy your models with ease while leveraging the ability to continuously monitor and improve their performance
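As an illustration of that exported flow – buffer handling is simplified here, and the generated constants and headers come from your own exported project – on-device inference reduces to a few calls:

#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw sensor window, filled by your own acquisition code.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull slices of the input signal.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int run_inference() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_IMPULSE_OK) {
        return err;
    }
    // Print each label's confidence, as trained in Edge Impulse Studio.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.3f\n", result.classification[i].label,
                  result.classification[i].value);
    }
    return 0;
}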

The Edge Impulse platform powers a broad range of use cases from companies such as Ultrahuman and Halma, including digital health, wearable devices, supply chain optimization, computer vision, manufacturing, industrial applications, utility monitoring, environmental monitoring, fall detection, smart cities, automotive, asset tracking, and predictive maintenance.

https://www.edgeimpulse.com/product

Edge Impulse www.edgeimpulse.com

With Fortune 500 customers, partnerships with top silicon vendors, and over 160,000 developers, Edge Impulse has become the trusted ML platform for enterprises and developers alike.

FEATURES

› Optimized for Low-Power Devices – Run efficient AI models on microcontrollers and constrained hardware without compromising performance.

› Automated Data Pipeline – Simplify data collection, labeling, and pre-processing with built-in tools optimized for edge devices.

› Device-Agnostic Deployment – Build once, deploy anywhere, from MCUs and MPUs to FPGAs and custom hardware.

› Edge-Optimized Neural Architectures – Leverage state-of-the-art models tailored for edge applications, like FOMO™ object detection.

SR-Series High-Performance Adaptive Edge AI MCUs

The Synaptics SR-Series high-performance adaptive MCUs provide scalable context-aware AI computing at the IoT Edge. Part of the Synaptics Astra™ AI-Native compute platform, the series comprises the SR110, SR105, and SR102 MCUs supported by the Astra Machina Micro development kit and open-source SDK.

The MCUs are optimized for multimodal consumer, enterprise, and industrial IoT workloads with accelerators and adaptive algorithms for vision, audio, and voice. They support three tiers of operation: performance (100 GOPS), efficiency, and low-power sensing.

Based on Arm® Cortex®-M55 cores running at up to 400 MHz with Arm® Helium™ technology, plus a Cortex-M4 and an Arm® Ethos™-U55 NPU, these small-form-factor MCUs have a rich set of peripherals, including multiple camera and audio interfaces.

FEATURES

› SR110: Arm Cortex-M55 and Arm Cortex-M4 MCU with Ethos-U55 NPU; SR105: Arm Cortex-M55 MCU with Ethos-U55 NPU; SR102: Arm Cortex-M55

› Up to 4 MB of system memory, including ULP 16 kB always-on (AON) memory

› Low-power autonomous image and audio capture (600 kB)

› Streaming vision and audio

› MIPI-CSI camera input and passthrough; digital camera input

› Low-power image signal processing

› Secure OTP, TRNG, AES-256, RSA-4096, SHA-512

Synaptics Incorporated www.synaptics.com

 press@synaptics.com  www.linkedin.com/company/synaptics

UDE® – Multicore Debugger for MCUs / Embedded MPUs

UDE® Universal Debug Engine for Multicore Debugging is a powerful development platform for debugging, testing and system analysis of microcontroller software applications. UDE® enables efficient and convenient control and monitoring of multiple cores for a wide range of multicore architectures within a single common user interface. This makes it easy to manage even complex heterogeneous multicore systems.

UDE® supports a variety of multicore microcontrollers and embedded multicore processors including Infineon AURIX, TRAVEO, NXP S32 Automotive Platform, STMicroelectronics Stellar, Renesas RH850, R-Car, Synopsys ARC, RISC-V and others.

The UAD2pro, UAD2next and UAD3+ devices of the Universal Access Device family provide the hardware basis for the powerful functions of UDE® and enable efficient and robust communication with the supported architectures and controllers. With its flexible adapter concept, the Universal Access Device family supports all commonly used debug interfaces.

FEATURES

› Debugging of homogeneous and heterogeneous 32-bit and 64-bit multicore MCUs and embedded MPUs

› Synchronous break, single step, and restart for multicore systems

› One common debug session for the complete system / all cores

› Convenient debugger user interfaces supporting multi-screen operation and perspectives

› Support for special cores including GTM, HSM, ICU, PPU, SCR

› Software API for tool automation and scripting

› AUTOSAR support and RTOS awareness

UDE® – Trace and Test for MCUs / Embedded MPUs

The UDE® Universal Debug Engine is the perfect tool for runtime analysis and testing of embedded multicore applications. With support for on-chip tracing, it offers comprehensive features for non-intrusive debugging, runtime observation, and runtime measurement. This helps developers investigate, for example, timing problems or misbehavior caused by parallel execution.

Used in conjunction with the UAD2next and UAD3+ devices from the Universal Access Device family, the UDE® enables trace data to be captured from various trace sources and via external trace interfaces. Trace modules for the UAD2next or trace pods for the UAD3+ are provided for this purpose.

UDE®'s debugging and tracing capabilities, coupled with a flexible and open API for scripting and integrating with third-party tools, make UDE® an ideal choice for automated testing on real target hardware. During the execution of automated tests, UDE® can also determine code coverage to validate the quality of the test cases being used.

FEATURES

› Multicore trace support for various on-chip trace systems (incl. MCDS/miniMCDS for Infineon AURIX / TriCore, IEEE-ISTO 5001 Nexus for NXP MPC5xxx and STMicroelectronics SPC5, Arm CoreSight for Arm Cortex-A/R/M based devices, Renesas RH850 on-chip trace)

› Visualization and analysis of recorded trace information (execution sequence chart, call graph visualization, profiling)

› Trace-based, non-intrusive statement coverage (C0 coverage) and branch coverage (C1 coverage) for proving test quality

› UDE SimplyTrace® function for easy and context-sensitive trace configuration

› Open and flexible software API for scripting and test automation

GNAT Pro for Rust delivers stability, security, and dependability for your critical, embedded Rust applications - all with the best-in-class support you’ve come to expect from AdaCore.

GNAT Pro for Rust is built using AdaCore’s fully reproducible build system – the same build system we use for all of our products.

GNAT Pro Assurance for Rust gives you a sustained branch of the complete toolchain with support for critical updates and known problem reports for as long as you need. We really mean that. We’re prepared to support you for decades if needed.

AdaCore is an ISO 9001-compliant and NIST SP 800-171-compliant organization targeting SLSA Build Level 3 compliance. That gives you confidence that our software hasn’t been tampered with and can be traced securely back to its sources. When we deliver GNAT Pro for Rust to you, you will have full access to relevant security-related documents, including a Software Bill of Materials.

AdaCore www.adacore.com

Embedded Software, including OS and IP

FEATURES

› Stability for decades to come.

› Best-in-class support.

› GNAT Pro for Rust offers support for popular embedded processors and RTOSs.

› GNAT Pro for Rust is ready to integrate your Rust code into your existing Ada, C, and C++ projects.

› Rust’s syntax is easy to understand for people with a C or C++ background.

https://www.adacore.com/gnatpro-rust

GNAT Pro for Rust

Chassis managers, enclosures, & rugged NI SDRs

Pixus offers a vast array of specialty boards & accessories, enclosure cases, chassis hardware managers, & ruggedized NI SDRs. This includes card guides, handle/panel sets for plug-in cards, subracks, & more. The designs are used for various custom enclosures as well as open-standard architectures including OpenVPX/SOSA, CompactPCI/Serial, VME/64x, and ATCA/MTCA.

Pixus’ modular instrumentation cases, subracks and Eurocard enclosures come in various heights, depths, and widths. This includes 19" rackmountable, desktop, 1/2 19" size, and a wide variety of other options.

The ruggedized NI SDRs include versions for EW/SIG-INT, wideband spectrum monitoring, drone deterrence, advanced wireless prototyping, & more. Pixus offers IP67 outdoor units and MIL rugged types.

The chassis managers come in 3U/6U pluggable and mezzanine-based versions, including SOSA aligned configurations. Consult Pixus for customized options & specialty boards.

FEATURES

› Modular enclosures for all types of devices. Includes EMC shielded, semi-rugged, IP67, and more.

› Chassis managers & specialty boards for power/fan monitoring, VITA 46.11 / IPMI, serial MUX, etc.

› Ruggedized versions of NI SDRs including the X310, X410, B210, N310, and others available upon request.

› Handle/panel sets and other accessories for OpenVPX, cPCI, VME, and more.

› Various components for enclosures and boards – card guides, sidewalls, panels, rails, etc.

Pixus Technologies www.pixustechnologies.com

Embedded Hardware, including Boards and Systems

TS-7250-V3 Single Board Computer

The TS-7250-V3 from embeddedTS is a high-performance, low-power single-board computer (SBC) designed for industrial and embedded applications. Featuring an NXP i.MX 6UltraLite (ARM Cortex-A7) processor, this rugged SBC delivers high efficiency and robust connectivity options for IoT, automation, and industrial control systems.

With 512 MB RAM (expandable to 1 GB), 4 GB MLC eMMC, and bootable microSD support, the TS-7250-V3 ensures reliable storage and fast data access. The board boasts dual Ethernet ports, USB, RS-232, RS-485, CAN, I2C, and SPI for versatile communication.

Its low-power design (as low as 0.5 W), temperature tolerance of -40°C to +85°C, and rugged DIN-mount enclosure options make it ideal for harsh environments. embeddedTS provides long-term support, open-source software, and a quick-start Debian Linux environment for rapid deployment.

FEATURES

› Powered by NXP i.MX 6UL w/ ARM® Cortex®-A7 core operating up to 696 MHz

› 4 GB MLC eMMC

› mikroBUS Socket and XBee Socket

› Typical power usage is about 1 W

› Fanless -40°C to +85°C temp range

sales@embeddedts.com

@ts_embedded

https://www.embeddedts.com/products/TS-7250-V3

R1 Edge

Get the edge with no-compromise industrial computing. The Relio™ R1 Edge Industrial Computer redefines reliability at the edge, combining rugged durability with next-generation processing power. Featuring versatile connectivity including dual 2.5 Gigabit Ethernet, Cellular 4G LTE, and Wi-Fi 6E, this future-ready platform brings decision-making computing power to the edge.

Built for uncompromising performance in harsh industrial environments, its fanless, solid-state design and strategic thermal management system enable continuous operation in temperatures from -40°C to +71°C.

The R1 Edge's anodized aluminum enclosure and innovative SeaLATCH locking connectors ensure dependable operation even under extreme shock and vibration. Its COM Express architecture enables processor upgrades without system redesign and offers comprehensive software compatibility for accelerated deployment.

The R1 Edge delivers exceptional performance where conventional computers fail.

Sealevel Systems, Inc. www.sealevel.com

FEATURES

› (1) M.2 4G LTE Cellular slot – 2 Antennas (optional)

› (1) M.2 Wi-Fi 6E slot – 2 Antennas (optional)

› (2) USB 3.1 SeaLATCH Charging Ports, (2) USB 2.0 SeaLATCH Ports, (1) USB-C SeaLATCH Port

› (2) DisplayPort Video Connectors

› (2) Full RS-232/422/485 Ports

› (2) 2.5 Gigabit (10/100/1000/2500 BaseT) Ethernet Ports

› -40ºC to +71ºC wide operating temperature range

 sales@sealevel.com

864-843-4343  www.linkedin.com/company/sealevel-systems-inc/

Cutting-Edge Touch Embedded Computing Solutions

MACTRON GROUP – MTG from Taiwan specializes in touch-embedded systems, following the "TRANSFORMER" philosophy to adapt diverse hardware for different industries. At Embedded World, we showcase our touch panel PCs and mobile tablet PCs for Medical, Automation, and Commercial applications:

• WCP & WMP Series: Aluminum die-casting touch PCs with 8th, 12th, & 15th-gen Core-i, supporting AI & Edge computing for industrial & medical use.

• MAS & MAA Series: Rugged Windows/Android tablets with customizable performance, from entry-level to high-end, in various screen sizes.

• WAM Series: Panel Mount Windows/Android Touch PCs, supporting Intel x86 & ARM.

• WMR Series: Rechargeable medical PCs with dual hot-swappable batteries for uninterrupted operation.

• MMS Series: Medical-grade Windows tablets for data collection, diagnosis and record management.

SP2-IMX8 Panel PC

The SP2-IMX8 Panel PC is a high-performance, energy-efficient solution specifically designed for industrial automation and human-machine interface (HMI) applications. Powered by the advanced NXP i.MX 8M Plus processor, this ARM-based panel PC delivers AI capabilities and real-time edge computing, making it an essential component for modern smart factories and industrial operations.

Available in compact, customizable sizes, the SP2-IMX8 features multi-OS compatibility (Linux, Windows, Android), exceptional power efficiency (as low as 1.7 W), and advanced multimedia support. Its integrated Neural Processing Unit (NPU), capable of 2.3 TOPS, enables precise AI-driven quality inspections and predictive maintenance, improving productivity and reducing downtime.

With its rugged construction, modular edge I/O options, and seamless scalability, the SP2-IMX8 is an ideal choice for industries seeking innovative edge computing solutions to optimize performance and ensure long-term growth.

FEATURES

› NXP i.MX8M Plus processor (with 2.3 TOPS NPU)

› 10-point PCAP touch screen (7"/10")

› 32GB eMMC and external microSD slot

› 12V DC or 12-24V DC input

› Flexible expansion I/O design

› Supports 1GbE LAN with TSN

› Supports multi-OS: Linux, Windows, Android

https://www.adlinktech.com/products/panel_pcs_monitors/panel_pcs_monitors/sp2-imx8_series?lang=en

ev.mkt@adlinktech.com

www.linkedin.com/company/adlink-technology

TROPIC01

TROPIC01 is an open architecture secure element that serves as a foundational security component for embedded systems, providing a Hardware Root of Trust to ensure security. Within its secure perimeter, it enables cryptographic key management, digital identity, and secure data storage for critical applications.

The chip integrates with any microcontroller (MCU) via the SPI interface, coupled with the publicly available SDK for easy integration. The secure channel protocol ensures runtime confidentiality and integrity between TROPIC01 and the MCU.
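A conceptual host-side sketch of that integration – the function names are placeholders, not the published SDK API – shows the documented pattern: establish the secure channel first, then let keys do their work without ever leaving the chip:

#include <cstdint>
#include <cstddef>

// Placeholder wrappers around the SPI driver and SDK.
extern bool se_establish_secure_channel();  // NOISE-style handshake with pairing keys
extern bool se_ecc_sign(uint8_t key_slot, const uint8_t *msg, size_t len,
                        uint8_t signature[64]);  // key never leaves the chip

bool sign_boot_manifest(const uint8_t *manifest, size_t len, uint8_t sig[64]) {
    // 1. Authenticate host <-> TROPIC01 and derive session keys, giving
    //    SPI traffic runtime confidentiality and integrity.
    if (!se_establish_secure_channel()) {
        return false;
    }
    // 2. Ask the secure element to sign with a key held inside its secure
    //    perimeter; the MCU only ever sees the resulting signature.
    return se_ecc_sign(/*key_slot=*/0, manifest, len, sig);
}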

TROPIC01 features dedicated hardware engines for cryptographic algorithms, anti-tampering sensors, and protection from a wide range of digital and physical attacks.

Tropic Square enables transparency by publishing security design details, source files, and documentation for independent security audits at github.com/tropicsquare

TROPIC01 is ideal for use cases such as secure boot, firmware updates, key management, and device identity.

https://www.tropicsquare.com/tropic01

Tropic Square

www.tropicsquare.com

FEATURES

› Protection against various physical and side-channel attacks, including laser, electromagnetic fault injection, focused ion beam, and temperature glitching.

› 238 kB of non-volatile memory for general-purpose data storage, secured by the ISAP encryption scheme using PUF-derived keys.

› Encrypted secure channel communication using the NOISE framework protocol, supporting authentication of up to 4 hosts.

› Upgradeable Elliptic Curve Cryptography engine with support for ECDSA, EdDSA, and ECDH curves.

› Physical Unclonable Function (PUF) on-chip, offering strong protection against reverse engineering.

› True Random Number Generator (TRNG) compliant with NIST SP 800-90B and AIS31 for secure random number generation.

› Firmware customization and white-labeling options to tailor TROPIC01 for specific application needs.

 support@tropicsquare.com

 www.linkedin.com/company/tropic-square-s-r-o @tropicsquare
