Embedded Computing Design Fall 2025 with Resource Guide



Q&A with Raj Jain, VP, Engineering and Head of QNX India

EDITOR IN CHIEF Ken Briodagh ken.briodagh@opensysmedia.com

ASSISTANT MANAGING EDITOR Tiera Oliver tiera.oliver@opensysmedia.com

PRODUCTION EDITOR Chad Cox chad.cox@opensysmedia.com

CONTRIBUTING EDITOR Rich Nass rich.nass@opensysmedia.com

TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com

CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com

WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com

EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

WEBCAST MANAGER Marvin Augustyn marvin.augustyn@opensysmedia.com

SALES/MARKETING

DIRECTOR OF SALES Tom Varcie tom.varcie@opensysmedia.com (734) 748-9660

STRATEGIC ACCOUNT MANAGER Rebecca Barker rebecca.barker@opensysmedia.com (281) 724-8021

STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838

EAST COAST SALES MANAGER Bill Baumann bill.baumann@opensysmedia.com (609) 610-5400

SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582

DIRECTOR OF SALES ENABLEMENT AND PRODUCT MARKETING Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818

INSIDE SALES Amy Russell amy.russell@opensysmedia.com

STRATEGIC ACCOUNT MANAGER Lesley Harmoning lesley.harmoning@opensysmedia.com

EUROPEAN ACCOUNT MANAGER Jill Thibert jill.thibert@opensysmedia.com

TAIWAN SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com

CHINA SALES ACCOUNT MANAGER Judy Wang judywang2000@vip.126.com

CO-PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com

CO-PRESIDENT John M. McHale III john.mchale@opensysmedia.com

DIRECTOR OF OPERATIONS AND CUSTOMER SUCCESS Gina Peter gina.peter@opensysmedia.com

GRAPHIC DESIGNER Kaitlyn Bellerson kaitlyn.bellerson@opensysmedia.com

FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com

SUBSCRIPTION MANAGER subscriptions@opensysmedia.com

OFFICE MAILING ADDRESS 3120 W Carefree Highway, Suite 1-640 • Phoenix AZ 85087 • Tel: (480) 967-5581

REPRINTS

MEDIA REPRINT COORDINATOR Marcia Brewer mbrewer@wrightsmedia.com (281) 419-5725

17 ACCES I/O Products, Inc. – The new, more flexible alternative to PCIe Mini Cards

6 BlackBerry QNX – How QNX Everywhere is Building the Future of Embedded Innovation

1 Kontron – K3881-C µATX: Secure, scalable server-class motherboard built for diverse industries

21 PICO Electronics Inc – The big name in miniature components

1 Sealevel Systems, Inc. – Relio™ R1 Edge – Edge Computing Without Compromise

10 Sealevel Systems, Inc. – Built to be Forgotten. Engineered to Endure

40 Tadiran – IIoT devices run longer on Tadiran batteries

1 Tria Technologies – TRIA Vision AI-Kit 6490

2 Vector Elect – Vector power backplanes

11 Vision Components GmbH – MIPI Vision Components: The Future of Embedded Vision is Modular

opsy.st/ECDLinkedIn

bit.ly/ECDYouTubeChannel

Today’s industries are shifting toward software-defined architectures, demanding talent that is capable of building safe, reliable, and real-time systems. Discover how QNX Everywhere is helping shape the future of embedded systems by investing in education, community, and accessibility.

Profiles for the 2025 Resource Guide begin on page 24.

Embedded Insiders: Designing for Harsh Environments & Quantum Cryptography

Tune In: https://embeddedcomputing.com/technology/security/designing-for-harshenvironments-quantum-cryptography

ICYMI: Embedded Insights Ep 35 Editor Report from Silicon Labs Works With! Watch Now: https://embeddedcomputing.com/application/tech-news-roundup/icymiembedded-insights-ep-35-editor-report-fromsilicon-labs-works-with

Embedded Executive: Manual vs. Automated Code Generation, TeleCANesis

Tune In: https://embeddedcomputing.com/technology/software-and-os/ides-applicationprogramming/embedded-executive-manual-vsautomated-code-generation-telecanesis

Customer Experience Centers are Driving Innovation in Complex Embedded Systems

There's a growing movement in the embedded space, especially in the most complex industries, where regulatory compliance is mission critical and standards are particularly stringent. In these industries, such as medical and industrial robotics, customer experience centers (CECs) can help engineers solve tricky challenges and lead to deeper engagements with OEMs and suppliers.

Because of this greater need for support, and the desire to develop lasting partnerships, companies are opening CECs to showcase new solutions and products while helping their clients address custom challenges.

I had the opportunity to tour a CEC in Sharon, Massachusetts, run by Advanced Energy, a company specializing in highly engineered, precision power conversion, measurement, and control solutions, where the company's top engineers are working on power solutions for medical devices. At the facility, product experts and engineering leads work with medical device customers on product launches, emissions testing, ESD testing, design verification, and much more.

The testing spans firmware to hardware and everything in between, and is freely available to customers. Advanced Energy has outlined a complete workflow for the CEC process, and customers who take advantage of it end up with a more complete product design and are better positioned to meet standards and regulatory inspection requirements.

The company certainly believes in the CEC strategy, since it's already opening more of these facilities. According to an announcement, Advanced Energy opened a new design and service CEC in Wilmington, MA. This new state-of-the-art facility will focus on the development of advanced power technologies for semiconductor, industrial, and medical applications.

Combining laboratory and office space, the new center reportedly will be the workspace for up to 50 employees.

“Advanced Energy is a leader in developing precision power technologies that enable semiconductor plasma applications, high-voltage industrial instruments and advanced medical equipment,” said Steve Kelley, Advanced Energy’s president and CEO. “Our new Wilmington facility, strategically located in Boston’s tech corridor, allows us to tap into local talent and strengthen our leadership in these key areas.”


AE isn't the only company involved in this CEC strategy, of course. In another announcement, Bota Systems also opened a new CEC at HEIDENHAIN's CONNECT Manufacturing Innovation Hub in Fremont, California to showcase the company's complete portfolio of force-torque sensors along with a Mecademic robot demonstrating key applications. Bota said that visitors will experience firsthand how the sensors can optimize robotic performance in a range of industrial settings.

I would expect to see many other companies opening CECs to enhance customer relationships and drive long-standing partnerships. In a maturing industry like the fusion of embedded and edge, these partnerships are increasingly important.

How QNX Everywhere is Building the Future of Embedded Innovation

Q: What is QNX Everywhere, and why is it significant for the embedded systems community?

QNX Everywhere is a global initiative launched by QNX to make its QNX Software Development Platform (SDP) 8.0 more accessible to developers, educators, and innovators. At its core, the program is about removing barriers such as financial, educational, and technical hurdles that have traditionally limited access to professional-grade real-time operating systems. By offering free access to QNX SDP8 for non-commercial use, often bundled with affordable hardware like Raspberry Pi, the initiative empowers a broader audience to explore, experiment, and build with the same tools used in mission-critical systems across industries.

The significance of QNX Everywhere lies in its dual mission: to democratize embedded software development and to address the growing global shortage of skilled embedded engineers. As industries shift toward software-defined architectures, the demand for talent capable of building safe, reliable, and real-time systems is outpacing supply. QNX Everywhere is designed to close that gap.

Q: How does QNX Everywhere support the development of future engineering talent?

One of the most impactful aspects of QNX Everywhere is its focus on continuous education. Beyond the non-commercial access of QNX Everywhere, QNX also provides free online training courses designed exclusively for QNX OS developers and available to anyone within the developer community, covering topics such as real-time programming, developing and debugging, and system profiling and analysis.

The initiative also actively partners with universities and training institutions to integrate QNX tools into engineering curriculum. In India, for example, QNX has collaborated with Pi Square Technologies to reach thousands of students at some of the country’s most prominent universities and colleges. This partnership ensures that students are not only learning theoretical concepts but also gaining hands-on experience with industry-standard tools so that they are equipped to make a significant impact from the very first day of their working careers, whether at an automaker or aerospace company.

By embedding QNX into the classroom, the initiative helps students graduate with practical skills that are immediately applicable in the workplace and is helping efforts at many prestigious universities around the globe. The program is not just about training individuals; it's about building global ecosystems of innovation around technology.

Q: Why is this initiative especially relevant now?

The timing of QNX Everywhere is no coincidence. The embedded systems landscape is undergoing a transformation. From autonomous vehicles to smart factories and medical devices, embedded software is the backbone of modern innovation. Yet, as software demands escalate across edge-based compute, embedded AI, software-defined solutions, autonomous systems, and human-interacting robots, the talent needed to build and maintain these systems is in short supply. At the same time, the traditional pathways into embedded development, often gated by expensive licenses, proprietary tools, and steep learning curves, are no longer sustainable. QNX Everywhere addresses this challenge by opening access and creating opportunities for a more diverse and distributed developer base. It's a strategic move that aligns with the broader industry shift toward open collaboration and community-driven innovation.

Q: How does QNX Everywhere balance accessibility with the demands of safety-critical development?

While QNX Everywhere is designed to be accessible, it doesn’t compromise on the rigor required for safety-critical applications. The initiative provides access to the same foundational technologies used in certified systems across automotive, industrial, and medical sectors. And while access is for non-commercial development and anything built on it cannot be safety-certified, developers and students are still learning with tools that meet the highest standards of reliability and compliance.

Q: What kind of impact is QNX Everywhere having on the developer community?

The response from the developer community has been extremely enthusiastic. By lowering the cost of entry and providing robust documentation, sample projects, and community support, QNX Everywhere has enabled a new wave of experimentation and learning. Developers who might previously have been limited to freely available hobbyist platforms can now work with a commercial-grade, mission-critical RTOS.

This shift is particularly important for independent developers, startups, and academic researchers who are often at the forefront of innovation but lack the resources of large enterprises. QNX Everywhere gives them the tools to prototype, test, and refine their ideas in a professional environment, increasing the likelihood that those ideas can ultimately be commercialized and scaled into real-world applications.

Q: How does QNX Everywhere interact with the open-source community?

As a POSIX-compliant OS, QNX supports software developed for POSIX operating systems, meaning that many open-source projects that run on Linux or other systems can be compiled for QNX with little or no modification. Sometimes those changes are upstreamed into their respective projects, while other times we maintain a separate fork, depending on the release cadence, resource commitments, and our community needs.
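As a rough illustration of what "little or no modification" can look like in practice, the sketch below keeps the logic in portable standard-library code and isolates platform-specific behavior behind a conditional-compilation guard; the `nto` target name used for QNX Neutrino is an assumption for illustration rather than something taken from QNX documentation.

```rust
// Sketch: portable logic recompiles unchanged; only platform-specific
// behavior needs a dedicated branch when retargeting to a POSIX RTOS.
use std::fs::File;
use std::io::{BufRead, BufReader};

fn count_lines(path: &str) -> std::io::Result<usize> {
    let reader = BufReader::new(File::open(path)?);
    Ok(reader.lines().count())
}

fn main() -> std::io::Result<()> {
    #[cfg(target_os = "nto")] // assumed QNX Neutrino target name, for illustration
    println!("running on QNX Neutrino");

    #[cfg(not(target_os = "nto"))]
    println!("running on another POSIX-style host");

    // Reads a file that exists on typical POSIX hosts.
    println!("lines in /etc/hosts: {}", count_lines("/etc/hosts")?);
    Ok(())
}
```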

We want the open-source community to make QNX-related contributions, and we encourage broader participation. We track all QNX-related open-source porting work at oss.qnx.com, so that developers can find the projects they need, pull them down, and contribute as they see fit, and we publish source code for open-source ports (under their original licenses) to give developers full access to build and innovate.

Q: How does QNX Everywhere support non-commercial and academic development?

QNX recognizes that some of the most meaningful innovation happens outside of traditional commercial settings. Academic institutions, open-source communities, and independent researchers often drive breakthroughs that reshape entire industries. By supporting these groups with free, noncommercial access to its platform, QNX is fostering a culture of experimentation and collaboration. This approach not only accelerates technological progress but also aligns with broader goals around ethical development, sustainability, and social impact. Whether it’s a student building a medical device prototype or a researcher exploring new real-time algorithms, QNX Everywhere provides the foundation they need to succeed.

But it’s not just the non-commercial space that can benefit. Lean teams with smaller IT budgets can also utilize pre-commercial prototyping with QNX Everywhere before moving onto a commercial license when they’re ready.

Q: What are the long-term goals of the QNX Everywhere initiative?

Looking ahead, QNX Everywhere aims to deepen its engagement with the global developer community. This includes expanding partnerships with educational institutions, increasing participation in developer events, and enhancing the availability of reference materials and source code. The initiative also plans to continue introducing more pre-built software packages and optimized open-source ports, making it even easier for developers to get started and scale their projects.

Ultimately, the goal is to create a self-sustaining ecosystem where knowledge, tools, and talent flow freely. By investing in education, community, and accessibility, QNX Everywhere is not just responding to current industry needs, it’s helping to shape the future of embedded systems.

Q: Why should developers and educators pay attention to QNX Everywhere?

QNX Everywhere is helping strengthen the developer community in areas that are central to embedded software: creating mission-critical systems, building for reliability, developing for security, maximizing performance on embedded hardware, and real-time responsiveness. These core skills are in increasing demand across the ecosystem at a great many companies, yet finding talent with them is getting harder, not easier. For developers, it offers the chance to build with tools that are trusted in some of the world's most demanding environments and to develop skills that are in high demand at some of the world's biggest and best companies. For educators, it provides a bridge between academic theory and industry practice, letting them teach courses that use the exact same technology as real-world mission-critical applications. The offering is also constantly being updated, with additional support, new features, and Board Support Packages (BSPs), evolving to meet the needs of the current technological landscape.

In a world where embedded systems are becoming more complex and more critical, initiatives like QNX Everywhere are essential. They ensure that the next generation of engineers are not only well-trained but also well-equipped to lead the embedded revolution.

To join the QNX developer community and get a free QNX SDP 8.0 license for your personal non-commercial use, visit https://www.qnx.com/products/everywhere/. For faculty at academic institutions wishing to license QNX software for free on a multiuser basis, please visit https://blackberry.qnx.com/en/company/qnx-in-education.

www.qnx.com

Rust Embedded Community Roundtable: Will Rust Replace C?

The code battle between Rust and C feels like it's been heating up all year, and some folks are starting to pick a winner, at least in particular cases. Rust and C can both offer good performance and control for embedded hardware and systems, but the similarities seem to end there.

C has all the advantages of maturity and experience (never underestimate an old gladiator). Its long history and broad ecosystem make C the reigning monarch of embedded code with frameworks and libraries for almost any extant use case readily available.

Don't count Rust out so soon, though. The newer language carries the advantage in memory safety, especially in new systems without a ton of legacy code. Rust is also simpler and easier to use, so many users report that it's speeding up development times on projects, too.

We wanted to get to the bottom of this, so we opened the floor to the Embedded community to see what they would say when asked: Will Rust replace C?

Brendan Bogan-Ware, Founder and Lead Engineer for Bloxide

Short answer: Yes. But the real driver isn't language features; it's how embedded teams will work in the AI era.

Can Rust completely replace C?

Not yet. There are some legacy targets that may never support Rust. But most new designs can choose hardware and toolchains that fit Rust. From a technical perspective Rust can do everything C can do since it compiles to the same machine instructions. The limits today are in toolchain maturity, vendor inertia, and workforce training. Luckily, the Rust ecosystem is becoming more mature every day, and workforce training can be accelerated with AI tools.

Will Rust improve end products?

Yes, by changing failure modes. Ownership and lifetimes remove entire classes of memory errors, cutting late-stage defects, warranty costs, and security risks. However, there will still be some niche projects, with ultra-tight SRAM requirements or specific SDKs, where C remains the practical choice.

The European Cyber Resilience Act (CRA) will hold manufacturers liable for connected-product security flaws. Paired with the boom in manufacturing automation, IoT, medical devices, and smart homes, we’re entering a golden era that demands far more embedded software. Memory-safe languages like Rust reduce the opportunities for security problems, which helps the bottom line over the life of a product.

AI is pushing us into the future faster than any single technology shift in the past. If we assume code output quality continues to improve, the only objection left for today's software developers is that AI-written code is "less fun" to write. For business, that's like craftsmen resisting automation: irrelevant to market survival. The reality is that AI is already writing embedded software and will write the vast majority of embedded code moving forward. In C, that's a minefield without heavy oversight. In Rust, the language itself enforces a baseline of safety and reliability. The winning teams and companies will be those that combine AI codegen speed with disciplined, automated verification pipelines to deliver secure, production-ready firmware at scale.

Rust will dominate new embedded projects in an AI-driven, increasingly regulated future. The real question isn’t if Rust replaces C, it’s whether our teams and processes can keep up with how fast AI is changing what “embedded development” means.

Tim Reed, CEO of Lynx

C has been the foundation of systems programming for decades. Its simplicity, predictability, and ability to interact directly and efficiently with hardware with minimal abstraction or overhead make it ideal for kernels, embedded systems, and real-time applications. It is deeply entrenched, supported by mature toolchains and well-established certification workflows. Replacing C completely will be hindered by return-on-investment (ROI) considerations. For some systems, the benefits Rust offers in terms of safety and reliability may justify the cost. And if generative AI can dramatically reduce the cost of translating C code to Rust, while simultaneously resolving all of the data design and memory bugs, adoption could accelerate.

Rust provides memory safety and enforces strict static typing without using run-time garbage collection. These qualities make it particularly well-suited for developing embedded safety-critical applications. Rust also eliminates common bugs found in C, such as buffer overflows and use-after-free memory access. By addressing these issues at compile time, Rust can reduce the cost and effort of verification and validation, which is a major hurdle in certifying real-time operating systems.
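To make the compile-time point concrete, here is a minimal sketch (not from the roundtable) of the kind of use-after-free the Rust borrow checker rejects before a binary is ever produced, where the equivalent C would compile and fail only at run time:

```rust
fn main() {
    let data = vec![1u8, 2, 3, 4];
    let view = &data[..]; // immutable borrow of `data`

    // drop(data); // rejected at compile time: `data` cannot be moved or freed
    //             // while `view` still borrows it, so no use-after-free exists

    // Indexing is bounds-checked: `data[10]` would panic deterministically
    // instead of silently corrupting memory, as an out-of-bounds read can in C.
    println!("first element: {}", view[0]);
}
```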

The language is seeing growing adoption across these sectors, particularly for new projects that demand stronger safety guarantees, better long-term maintainability, and higher developer productivity. That said, Rust isn't a full-on replacement.

For example, the ongoing effort to integrate Rust into the Linux kernel reflects both the promise and complexity of shifting away from C in foundational software. Around 70 percent of high-severity security bugs in Linux are caused by memory safety issues, many of which Rust can eliminate at compile time.

But challenges such as interoperability with C, kernel API wrapping, and tooling gaps remain, so the community is focusing on layering Rust in safely to avoid regressions while modernizing selectively. This initiative serves as a powerful bellwether for broader industry transformation, showing how even the most mature and performance-critical systems are beginning to embrace the change.

Rust isn’t here to erase C, but to evolve what’s possible in systems programming. Replacing legacy assured C code can be costly, especially in safety- or certification-critical environments.

But for greenfield projects, Rust is increasingly the smarter choice. It offers stronger guarantees around safety, concurrency, and maintainability, even in low-level embedded environments where C once reigned alone. The coexistence of C and Rust will persist in the near term, particularly in mature, proven-in-use codebases. But the long-term trajectory is clear: where ROI permits, Rust is poised to become the dominant choice.

Rolland Dudemaine, Director, Field Engineering at TrustInSoft

Rust has taken lessons from many languages, including C/C++. Its adoption no longer needs to be demonstrated. Interestingly, this adoption comes from two culturally separate ends of the world, with web browsers and desktop environments on one side and embedded on the other.

Several projects are moving forward with the adoption of this new language, but different strategies are applied:

› Innovation projects start fresh with a pure Rust implementation, without legacy code.

› Generational projects have new code written in Rust, but legacy, stable C/C++ code is left untouched, under the wise rule of "if it's not broken, don't fix it".

› Some critical projects, in all verticals, still consider the move to a full renovation or a multi-language setup to be a significant risk and keep their current practices.

Ultimately, what will probably happen is a balance: in the history of computing, there has never been a situation where a single language has taken over the world. But migration is happening, indeed!

Stephen Hedrick, Technical Product Manager for Rust at AdaCore

The short answer is no. Rust will not completely replace C, nor should we expect it to. Just as assembly language persists decades after higher-level alternatives emerged, C will remain relevant for novel hardware platforms and severely constrained environments where its simplicity and universal toolchain support outweigh Rust’s safety advantages.

However, this misses the more important question: should Rust replace C everywhere it’s practicable? Here, the answer is yes.

The fundamental issue isn’t about language preference but rather risk management in an increasingly connected world. Memory safety vulnerabilities represent a significant portion of security issues in major software systems, a point that becomes particularly alarming when applied to critical infrastructure. While C’s manual memory management offers theoretical control, the practical reality is that even experienced developers struggle to consistently avoid the pitfalls that Rust’s ownership model eliminates by design.

Explicit government endorsement is accelerating the transition from C to Rust. The US Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Office of the National Cyber Director (ONCD) have all issued guidance favoring memory-safe languages like Ada, Rust, or SPARK for new development. This isn’t ideological; instead, it’s a response to decades of preventable vulnerabilities that have compromised critical systems. Similar regulatory momentum is building globally, suggesting that memory safety will increasingly become a compliance requirement rather than a best practice.

The ecosystem supporting this transition is maturing rapidly. Rust’s vibrant community has produced an extensive crate ecosystem that rivals established C libraries in many domains.

Vendors are providing and advancing enterprise-grade toolchains with the certification support that safety-critical industries require. The development of required tooling, like MC/DC coverage analysis, static analysis checkers, coding standards enforcement, safety-qualified compiler implementations, and certified runtime libraries, addresses the gaps that previously made Rust unsuitable for regulated environments.

The realistic path forward involves strategic replacement rather than wholesale migration. New projects and technology refreshes represent natural inflection points where the benefits of memory safety outweigh migration costs. Safety-critical industries are beginning to recognize that the additional upfront investment in Rust adoption pays dividends through reduced testing overhead, fewer field failures, and improved security posture.

Instead of asking whether Rust will replace C, we should ask whether we can afford not to adopt memory-safe alternatives where technically feasible. The answer increasingly appears to be no, making Rust adoption less a question of preference and more one of professional pragmatism.

Jonathan Pallant, Senior Embedded Systems Engineer at Ferrous Systems

C is a programming language with over 50 years of history. It was designed for writing tools and components for Unix, running on the minicomputers of the early 1970s. For that specific role – as the standard language of Unix and POSIX - C will remain supreme.

But for many other areas of software development where resource usage and performance are paramount, Rust's ability to match the speed of C code while massively reducing development times will see it become the default systems programming language.

For safety-critical systems, using Rust will come to be seen as the default – the lowest-risk option. This is because much of the verification and validation can be left-shifted into the toolchain (and the IDE) rather than left to the test department or expensive static analysis add-ons.

For CLI tools and cloud services, Rust is already replacing C and C++ – especially where performance really matters. The productivity benefits of having a standard build system and package manager without the run-time pain of programming in a language like Python are too good for most to ignore, not to mention achieving C-like levels of performance (or better) on top.

I think desktop applications are one area where Rust might struggle. Desktop applications need to tie closely with the operating system-supplied GUI toolkits – otherwise, you end up with Java Swing.

These toolkits generally date back to the object-oriented programming boom of the mid-1990s and are not currently well-suited for use with Rust. But for embedded systems where integrating with pre-existing applications isn’t a requirement, Rust offers many advantages, especially on resource-constrained systems.

Finally, in education, I think Rust will become the default language for teaching programming at the university level. We have seen that the language around Rust and its ownership and borrowing model makes it easier to explain how C and C++ programs work, so learning Rust first makes a lot of sense.

Several universities are doing this already, especially in China, and I think we will see many more follow.

Built to be Forgotten. Engineered to Endure.

Bury it in a factory floor. Mount it on a drilling platform. Deploy it in the Arctic. The Relio™ R1 family of computers thrives where other hardware fails. While you're solving the big problems, Relio™ handles the basics. 24/7/365 operation. Three industrial computers. One promise: Set it. Forget it. Count on it.

When failure isn't an option, forgettable is unforgettable. Sealevel Relio™ Computers

MIPI VISION COMPONENTS: The Future of Embedded Vision is Modular

MIPI camera modules are ultra-compact, price-optimized, and compatible with most embedded processor platforms through the MIPI CSI-2 interface. They have thus become a standard for embedded vision projects, from mobile devices to professional and industrial applications. At embedded world North America, Vision Components will show its VC MIPI Bricks system of perfectly matching MIPI vision components that enables flexible plug-and-play integration. It covers 50+ cameras, cable options up to 10 meters in length, and MIPI-based vision systems for turnkey integration. Brand new is the VC MIPI Multiview Cam, a camera array with nine image sensors for light field measurement, multi-view, and multi-spectral imaging.

MIPI CSI-2: Standard Interface for Embedded Vision

Embedded vision is a key technology to integrate cameras and image processing into devices and machines. With modern embedded systems and processors from NVIDIA, the i.MX series from NXP, the Raspberry Pi family, and other ARM-Linux-based platforms, the MIPI CSI-2 interface has emerged as the de facto standard for connecting cameras to these platforms. German manufacturer Vision Components began developing industrial-grade, long-term-available MIPI camera modules around seven years ago, based on over 30 years of experience in the development of embedded vision systems. VC MIPI Cameras are today available with over 50 different image sensors, in color and monochrome versions, with 0.3 MPixel to over 20 MPixel resolution, global shutter, rolling shutter, and global reset shutter.

Faster to Market Success with VC MIPI Bricks

With its VC MIPI Bricks system, Vision Components provides a modular system of perfectly matching MIPI cameras and accessories that enables fast and easy integration. It comprises shielded FPC cables in various lengths up to 20 cm, coax cables for cable lengths of up to 100 cm, and a GMSL2 option for cables up to 10 meters long, as well as additional boards for external triggers and lighting control, lens holders, and optics. The cameras are also available fully assembled and calibrated. In this case, the ready-to-use cameras can be installed directly in the end product without any further adjustments.

Cameras with Onboard Pre-processing

In order to support companies even further, Vision Components has developed the FPGA accelerators VC Power SoM, for easy integration in embedded designs, and VC Power SoC, which is fully integrated in the tiny cameras on request. The FPGA accelerators take over the pre-processing of image data in the MIPI data stream and transfer the results to a processor board. This means that the main processor requires significantly less computing power, because complex pre-processing tasks such as color space conversion, barcode identification, or edge detection have already been carried out. This gives developers greater freedom when selecting the processor board, whose resources are then primarily available for the main application.

The Vision Components VC MIPI Bricks system comprises perfectly matching camera modules, accessories, and services, right through to ready-to-use MIPI cameras and MIPI-based embedded vision systems.

Brand new: VC MIPI Multiview Cam

For the first time ever, VC will showcase its new VC MIPI Multiview Cam at embedded world NA. The embedded vision system consists of an array of nine image sensors and enables light field measurement, multi-view, and multi-spectral imaging. It outputs all data through a single MIPI CSI-2 interface, enabling easy integration with standard processor boards. The VC Multiview Cam is an ideal basis for developing individual image sensors and smart devices, where the optical setup and data transmission to a processor board have already been taken care of.

Why Standardized SerDes Interfaces Are Essential for In-Vehicle Networks

Today, automotive technology is advancing faster than ever, with the latest electronic components now central to the design and success of new vehicles.

Rapidly evolving advanced driver-assistance systems (ADAS) and in-vehicle infotainment (IVI) are the stars of many new models, while autonomous driving systems (ADS) are a major focus of development. These innovations require the integration of more cameras, sensors, displays and computing resources from a growing ecosystem of suppliers. The high-speed data interfaces linking these components are also core to the success of these new onboard systems.

SerDes (serializer/deserializer) interfaces convert parallel data into serial data for high-speed, long-distance communication, using simple low-cost cables. In automotive in-vehicle networks, high-speed SerDes interfaces that leverage advanced digital signal processing techniques are used to connect cameras, lidars, and in-vehicle displays to their corresponding electronic control units (ECUs).

Use of these SerDes interfaces is essential because they enable:

› High-speed, low-latency data transfer: A SerDes interface allows for the efficient transmission of large amounts of data at high speeds (multiple gigabits per second) with low latency (microseconds), which is crucial for safety-critical applications.

› Longer cable lengths: SerDes interfaces can enable longer cable lengths compared with parallel communication interfaces, especially in harsh electromagnetic environments where cables are susceptible to the effects of electromagnetic interference (EMI).

› Reduced wiring: By requiring fewer wires for data transmission, enabling use of low-cost coax or shielded differential pair (SDP) cables, SerDes interfaces simplify wiring harnesses, reduce weight and lower cost.

› Reliability: Link layer protocols can enable ultra-low bit error rates, functional safety and security, enabling OEMs to meet the latest safety and cybersecurity regulations.

With this need to connect an ever-greater number of components, the benefits of an industry-standardized SerDes solution become even more pronounced. Standardization offers greater supply chain flexibility and vendor choice, enhanced interoperability, simplified design complexity, reduced development costs, and improved quality and reliability.

MIPI A-PHY: A Standardized Solution Purpose-Built for Automotive

MIPI A-PHY is the first industry-standard asymmetric SerDes interface designed specifically for the automotive market. A-PHY enables proven higher-layer camera and display protocols, such as MIPI CSI-2 for cameras and MIPI DSI-2 for displays, to operate at high speed over low-cost, long-reach cables throughout a vehicle, eliminating the need to use proprietary SerDes "bridges" and PHYs. For automotive OEMs and system integrators, this equates to simplified in-vehicle networks and reduced costs, weight, and development time.

First released in 2020, MIPI A-PHY was purpose-built to provide the high performance, high EMI immunity and stringent near-zero latency requirements needed for automotive. The latest version, A-PHY v2.0, features forward-looking enhancements in recognition of increasing bandwidth demands and performance requirements of software-defined vehicles (SDVs), zonal and other emerging vehicle architectures. A-PHY v1.0 was adopted as an IEEE standard and is available as IEEE 2977-2021.

The specification’s key features include:

› Downlink data rates of up to 32 Gbps per channel and uplink data rates of up to 1.6 Gbps per channel

› High reliability, with an ultra-low packet error rate of <10⁻¹⁹ over the lifetime of a vehicle

› High resiliency, with ultra-high immunity to automotive EMI effects

› Bounded low latency (maximum 6 microseconds)

› Functional safety, meeting the requirements of ISO 26262

› Support for multiple cable types – coaxial, shielded differential pair (SDP) and star quad (STQ) – enabling dual-downlink 64 Gbps operation

› Long reach – up to 15 meters in length with four inline connectors

› Multiple power over cable options, including support for 48-volt operation

› Protocol adaptation layers (PALs) to support the transport of CSI-2, DSI-2 and VESA DisplayPort/embedded DisplayPort application layer protocols, plus Ethernet, GPIO, SPI and I2C protocols for peripheral device command and control.

› Support for higher-layer end-to-end MIPI CSI-2 camera service extensions (MIPI CSE™) cybersecurity protocols and the supporting MIPI Camera Security Framework.

Growing Ecosystem

There is a  growing automotive ecosystem designing products and services around the A-PHY interface, including multiple camera, radar and lidar sensor vendors, platform vendors, silicon vendors, and test and development tool vendors.

To support A-PHY implementers, MIPI is developing a compliance program designed to ensure A-PHY implementations meet the industry’s need for multi-vendor interoperability, exacting requirements around functional safety and challenging design environments in terms of EMI. The initial phase of the program is focused on testing A-PHY physical and link layers, using the recently published A-PHY compliance test suite. Later phases will extend the program to cover the A-PHY protocol adaptation layer specifications, which enable higher-layer protocols to operate seamlessly over the MIPI A-PHY physical link.

MIPI Alliance Enables Royalty-Free Implementation

As with all MIPI specifications, MIPI membership provides royalty-free licenses to implement the applicable MIPI specification, and there are no additional essential patent licenses required for implementation. Beyond A-PHY, MIPI members also have licenses to the other specifications for use in ADS/ADAS applications, such as MIPI CSI-2 for cameras, all the A-PHY protocol adaptation layer specifications, the MIPI Camera Security Framework for protection of camera data streams, and MIPI DSI-2 for displays.

The Rise of Software-Defined Audio in Automotive

As vehicles become increasingly reliant on centralized computing systems, traditional hardware-dependent audio architectures face challenges in scalability, flexibility, and development speed. Enter software-defined audio (SDA), a groundbreaking approach that shifts audio processing and control from hardware-centric systems to software-driven solutions within system-on-chip (SoC) platforms.

By leveraging SDA, automakers can address modern demands for sophisticated, customizable, and interconnected in-car audio systems. Conventional automotive audio systems rely heavily on dedicated hardware, such as digital signal processors (DSPs) and amplifier components. While effective, this model tends to increase costs, extend development cycles, and limit overall flexibility in post-purchase feature upgrades. SDA redefines this approach by embedding audio functionalities into centralized vehicle computing platforms.

In this architecture, audio processing occurs in software running on SoC hardware, enabling seamless updates and integration with other vehicle systems. The result is a simplified, flexible, and scalable audio infrastructure that reduces dependency on specialized hardware. This shift empowers automakers to accelerate innovation while maintaining cost efficiency.

Key Benefits of SDA

SDA seamlessly integrates with voice assistants, streaming platforms, and connected devices, aligning in-car audio systems with drivers' digital lifestyles. This connectivity elevates the overall user experience by enabling cohesive and personalized features such as individual seat sound zoning and adaptive soundscapes tailored to driver preferences. It also enables tighter synchronization between audio output and other vehicle systems, such as navigation, safety alerts, and environmental sensors. For example, with SDA, navigation prompts can be contextually blended with music or dynamically adjusted to minimize driver distraction. This centralization of audio processing eliminates the need for multiple DSP-enabled amplifiers and bespoke hardware components.
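As a toy illustration of that kind of context-aware blending, the sketch below ducks a music stream whenever a navigation prompt is active; the gain value and function name are hypothetical and not drawn from any production SDA stack.

```rust
/// Mix one frame of music and navigation audio, ducking the music
/// while a prompt is active (illustrative only; the 0.25 gain is arbitrary).
fn mix_frame(music: &[f32], prompt: Option<&[f32]>, out: &mut [f32]) {
    let music_gain = if prompt.is_some() { 0.25 } else { 1.0 };
    for (i, sample) in out.iter_mut().enumerate() {
        let nav = prompt.map_or(0.0, |p| p[i]);
        *sample = (music[i] * music_gain + nav).clamp(-1.0, 1.0);
    }
}

fn main() {
    let music = [0.5_f32; 4];
    let prompt = [0.3_f32; 4];
    let mut out = [0.0_f32; 4];

    mix_frame(&music[..], Some(&prompt[..]), &mut out); // prompt active: music ducked
    println!("{out:?}");
}
```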

This overall consolidation in turn reduces the bill of materials (BOM) and manufacturing complexities, ultimately lowering production costs. Automakers can also achieve greater uniformity across audio systems by using the same software base with minimal hardware adjustments to support different features, or even different audio brands, across vehicle trim levels or vehicle nameplates within an OEM.

SDA also allows for the dynamic reconfiguration of audio systems. This means that manufacturers can upgrade or modify audio features through over-the-air (OTA) updates rather than physical hardware changes. This capability supports rapid prototyping, iterative development, and quicker responses to market demands.

Another advantage is that development cycles are shortened significantly as software updates replace labor-intensive hardware integrations. SDA also opens new possibilities for personalized audio experiences: consumers can upgrade standard audio systems to premium configurations on demand, accessing immersive 3D soundscapes or advanced equalizer settings via OTA updates. This model not only has the potential to enhance user satisfaction but also introduces subscription-based or pay-as-you-go revenue opportunities for automakers.

Redefining the In-Car Soundscape

The role of automotive audio extends beyond entertainment or convenience. Modern systems must manage a diverse range of audio signals, including phone calls, virtual assistant feedback, the synthesis of propulsion and pedestrian warning sounds, and road and engine noise mitigation. SDA provides a unified platform to harmonize these audio elements, offering a cohesive auditory experience.

By integrating audio processing with vehicle-wide SoC systems, automakers can more easily achieve advanced features such as active noise cancellation tuned to specific driving conditions, multichannel audio zones for individualized passenger experiences, and adaptive sound profiles based on driver and passenger preferences.

While SDA offers numerous benefits, its implementation presents challenges. Integrating audio processing into centralized platforms requires robust computational resources and meticulous system optimization. Ensuring seamless OTA updates and cybersecurity measures is critical to maintaining consumer trust. The generation and playback of mixed-criticality audio signals, including chimes and ADAS warnings, require a safety-certified software environment. Despite these hurdles, the potential of SDA to transform automotive audio is undeniable.

As computing technologies evolve, SDA will likely become a cornerstone of next-generation vehicles, fostering innovation in areas like immersive sound design, real-time audio customization, and cross-platform digital integration. With this technology comes a paradigm shift in automotive audio design. By decoupling audio functionality from dedicated hardware, SDA offers a flexible, scalable, and cost-effective solution for delivering cutting-edge audio experiences. As automakers embrace this technology, the in-car soundscape will evolve into a dynamic and personalized domain, redefining how drivers and passengers engage with audio systems in the cabins of the future.

Design Your Own Dev Board With Raspberry Pi Microcontrollers

Since 2021, Raspberry Pi has made its own microcontrollers, the RP2040 and now the RP2350. While Raspberry Pi’s Pico and Pico 2 boards provide a standardized way to use these chips, designing your own development and production boards unlocks a wide range of new possibilities. In this online training session, presenter Jeremy Cook will show how to integrate Raspberry Pi microcontrollers into your design using the KiCad EDA package.

Watch On Demand: https://resources.embeddedcomputing.com/EmbeddedComputing-Design/design-your-own-dev?utm_bmcr_source=cal

WATCH MORE WEBCASTS: www.embedded-computing.com/virtual-events

Smart Farming Revolution: How Inertial Sensing Is Driving Precision and Productivity

The pressure to sustainably feed a growing population worldwide is leading to the adoption of more technology and automation in modern smart farming. Inertial sensors have a role to play in several different applications. Precision inertial measurement units are being used for the navigation and stability of the industry's increasingly robotic equipment, including self-steering tractors, picking robots, and drones. Furthermore, wideband inertial sensors can be used for predictive maintenance of all this complex machinery. Lastly, inertial sensors help enable various edge sensing modalities like animal tracking, detecting heat in dairy animals, and vital sign monitoring.

Introduction

The world’s population is projected to reach nearly 10 billion by 2050, necessitating a 70% increase in food production worldwide as standards of living increase across the globe.1 Yet, the agri-market faces unprecedented challenges. Many developed and developing nations face a shrinking agricultural workforce. Younger generations are moving away from traditional farming, and labor costs continue to rise. Compounding the challenge is our changing climate, where unpredictable weather patterns, soil degradation, and water scarcity present daunting challenges to farmers across the globe. Agricultural businesses must maximize yields, reduce waste, and optimize costs to keep up with demand and remain competitive. This is where technology has a strong role to play. The rise of artificial intelligence (AI), machine learning (ML), robotics, and Internet of Things (IoT) has made automation in smart farming more feasible and cost-effective. Farmers now have access to data-driven insights that improve decision-making.

Automated systems, such as robotic harvesters and drone-assisted monitoring, allow for faster, more efficient farming operations and reduce dependence on manual labor. Precision farming techniques improve soil health, seed placement, and crop growth, leading to higher yields per acre. Smart irrigation and fertilization systems minimize water and fertilizer waste, leading to cost savings and resource conservation.

The Role of Inertial Sensors in Smart Farming

Inertial sensors provide real-time data on acceleration, orientation, and position, improving the efficiency of autonomous and semi-autonomous (autosteer) farming vehicles. Inertial measurement units (IMUs), aided by GPS, are used to navigate and steer land and air vehicles such as tractors, robots, and drones, monitor their attitude and other inertial states, and enable these vehicles to follow precision paths for seeding, tilling, and spraying that ultimately reduce cost and improve the sustainability of farming.

Second, in livestock management, inertial sensors can be used to track animal movement and behavior, allowing farmers to monitor herd health and detect anomalies in activity patterns. Lastly, the integration of inertial sensors with AI-driven analytics further improves the predictive maintenance of farm equipment, reducing downtime and maintenance costs.

FIGURE 1: The ADIS16576 inertial reference frame.

Advancements in micro-electromechanical systems (MEMS) technology have led to enhanced performance, making MEMS IMUs pivotal for scalable autonomous vehicle (AV) platforms. MEMS IMUs often serve as feedback sensing elements in motion control systems, such as guidance navigation control (GNC) in autonomous vehicles or in pointing control for smart implements (sprayers, seeders, scoops, blades). When used as a feedback sensing element, MEMS IMU performance has a direct impact on a system’s accuracy. The ADIS16576 is a recent example of a MEMS IMU that delivers advancement in both functional integration and core sensor performance (Figure 1). This device offers a substantial leap forward, with the most impactful behavior coming from a 10× improvement in gyroscope vibration rectification error (VRE) and a 50× improvement in accelerometer VRE.

On the most basic level, MEMS IMUs provide triaxial angular rate sensing around three mutually orthogonal axes (roll, pitch, yaw) while also providing triaxial linear acceleration sensing along the same three axes. The accelerometers provide mean (or static) angle estimation, while integrating gyroscope measurements provides real-time angular displacement. System processors combine these two angle estimation sources to produce credible feedback control information for GNC or pointing control systems.
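The combination described above, static angles from the accelerometers corrected by integrated gyroscope rate, is commonly realized as a complementary filter; the single-axis sketch below is a generic illustration with an assumed blend factor, not ADIS16576-specific code.

```rust
/// One update step of a single-axis complementary filter: integrate the gyro
/// rate for short-term accuracy and pull the estimate toward the accelerometer
/// angle to cancel long-term gyro drift.
fn complementary_update(
    prev_angle_deg: f32,
    gyro_rate_dps: f32,   // angular rate from the gyroscope, in deg/s
    accel_angle_deg: f32, // static angle derived from the accelerometers
    dt_s: f32,            // sample period, e.g. 1.0 / 200.0 at a 200 Hz output rate
) -> f32 {
    let alpha = 0.98; // assumed blend factor: mostly gyro, slowly corrected by accel
    alpha * (prev_angle_deg + gyro_rate_dps * dt_s) + (1.0 - alpha) * accel_angle_deg
}

fn main() {
    let mut angle = 0.0_f32;
    // Simulated samples: constant 10 deg/s rotation, accelerometer in agreement.
    for step in 1..=5 {
        angle = complementary_update(angle, 10.0, 10.0 * step as f32 * 0.005, 0.005);
    }
    println!("estimated angle after 5 steps: {angle:.3} deg");
}
```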

When operating in this way, having an accelerometer VRE of 1.3 mg, under 4 g rms of vibration, means that the GNC platform can preserve an attitude angle of better than 0.1° without requiring assistance from any other sensing function. This can be very useful for UAVs that may experience substantial changes in vibration, depending on thrust levels. In gyroscopes, VRE can create quick and persistent changes in bias, which can result in erroneous motion correction and, in the worst cases, can lead to instability in the platform.
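For readers who want the arithmetic behind the 0.1° attitude figure above, the worked step below (added for illustration, not from the original text) converts a 1.3 mg accelerometer error into a tilt-angle error:

```latex
\theta_{err} \approx \arcsin\!\left(\frac{1.3\ \mathrm{mg}}{1000\ \mathrm{mg}}\right)
            = \arcsin(0.0013) \approx 0.0013\ \mathrm{rad} \approx 0.075^{\circ}
```

which sits comfortably inside the 0.1° bound quoted above.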

In prior generation devices, VRE responses could exceed 300°/h, under 8 g rms, while the ADIS16576 offers a response of 12°/h, which greatly reduces the burden of estimation/correction by other system sensing modalities. One of the most important functional improvements of this MEMS IMU is in the scalable external synchronization.

By including a user-programmable clock-scaling function, system developers can now drive 4000 Hz IMU data sampling with slower system-level references, such as a GPS or a video sync. This offers tight coupling with pulse-per-second (PPS) or perception-sensing references while preserving all the digital processing options that higher-rate data sampling provides.

Figure 2 and Figure 3 illustrate an example where an autonomous vehicle platform uses a 20 Hz GPS reference and a 200× scale factor to produce an internal sample rate of 4000 Hz. In addition, this system illustrates the use of an on-board decimation filter to reduce the output data rate by a factor of 20, to 200 Hz. In more dynamic situations, such as a crop inspection drone operating in windy conditions, the system processor may need to read and process the data at the maximum sample rate to assure stability and maneuverability.
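A minimal sketch of that rate chain (generic code, not the ADIS16576 register interface): the 20 Hz reference scaled by 200 gives 4000 Hz internal sampling, and a simple boxcar decimator by 20 yields the 200 Hz output stream.

```rust
/// Average blocks of `factor` high-rate samples into one output sample,
/// mimicking an on-board decimation filter (boxcar average for simplicity).
fn decimate(samples: &[f32], factor: usize) -> Vec<f32> {
    samples
        .chunks_exact(factor)
        .map(|block| block.iter().sum::<f32>() / factor as f32)
        .collect()
}

fn main() {
    let gps_ref_hz = 20_usize;          // external synchronization reference
    let internal_hz = gps_ref_hz * 200; // 200x scale factor -> 4000 Hz sampling
    let high_rate: Vec<f32> = (0..internal_hz).map(|n| (n % 20) as f32).collect();

    let low_rate = decimate(&high_rate, 20); // 4000 Hz / 20 -> 200 Hz output
    println!("{} samples in -> {} samples out per second", high_rate.len(), low_rate.len());
}
```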

Inertial Sensing and IoT Systems

Another area where inertial sensors are providing critical capability is in IoT systems used to continuously monitor animal location and physiological conditions. Typical embodiments include either tags, attached to the ear, tail, or body, or smart collars worn around the neck. These tags can help manage herd location and, more importantly, give continual insights into animal welfare, such as activity, feeding time, and respiration rate, with newer capabilities offering the ability to track heart rate and other vital signs. Neck-mounted collars have become invaluable tools for detecting estrus (heat) in cattle, rumination, lameness, and other conditions. A core requirement in these IoT systems is low power consumption, because maintaining batteries (rechargeable or primary) in large herd populations is an intractable chore. The ADXL366 offers unprecedented capability in this respect.

FIGURE 2: The ADIS16576 signal chain and external synchronization inputs.
FIGURE 3: Timing diagram of the ADIS16576 in scale sync mode.
FIGURE 4: ADXL366 configured as a motion switch within IoT systems.

This triaxial accelerometer can directly connect to a battery because it is internally regulated, can operate down to 1.1 V, and can deliver motion data at 100 Hz using approximately 1 µW of power. At this level, energy consumption is lower than the self-discharge of a coin cell battery. When used in a neck-worn collar, the accelerometer can scale between low power and low noise modes, providing a minimum signal in the range of 3 mg to 8 mg rms – enough to distinguish between chewing, rumination, and respiration rate (R-R). An enhanced vital-sign monitoring capability is offered by the ADXL380, which operates with noise levels that are almost two orders of magnitude lower over a 4 kHz bandwidth. For a fair comparison at 200 Hz bandwidth, the equivalent noise for this accelerometer would be 0.4 mg rms.

Such a signal-to-noise ratio (SNR), coupled with the wide bandwidth, can turn this tri-axial accelerometer into a stethoscope that can collect heart rate information through a ballistocardiogram or various noises associated with breathing, digestion, and other physiological functions. A comparison between the two accelerometers can be found in Table 1. Another core capability offered by ultralow power inertial sensors is in the system-level power management of IoT nodes.

The ADXL366 offers a dedicated wake-up mode that can be used to power-cycle electronic systems by issuing interrupts based on detected motion profiles. A typical configuration can be found in Figure 4. The accelerometers offer a rich set of programmable parameters to configure the desired motion profile and, most importantly, wake and sample at full bandwidth. This capability is important to avoid aliasing and false detections. In wake-up mode, the ADXL366 consumes an astonishingly low 180 nA. By leveraging this capability, energy-hungry sensors, radios, and other components can be powered down when not needed to increase the sensor node's useful lifetime.
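A minimal sketch of that motion-switch pattern, assuming a hypothetical driver API (`wait_for_motion_interrupt`, plus `power_up`/`power_down` on the radio); it illustrates only the control flow, not the ADXL366 register interface.

```rust
// Hypothetical peripheral handle, for illustration only.
struct Radio;
impl Radio {
    fn power_up(&self) { println!("radio powered up"); }
    fn transmit(&self, msg: &str) { println!("tx: {msg}"); }
    fn power_down(&self) { println!("radio powered down"); }
}

/// Pretend driver call: blocks until the accelerometer's wake-up interrupt
/// fires on a programmed motion profile (assumed API, not a real driver).
fn wait_for_motion_interrupt() { /* node idles at nA-level current here */ }

fn main() {
    let radio = Radio;
    for _ in 0..3 {
        wait_for_motion_interrupt(); // node sleeps; only the accelerometer watches for motion
        radio.power_up();            // motion detected: wake the energy-hungry parts
        radio.transmit("activity event");
        radio.power_down();          // return to the low-power state
    }
}
```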

For predictive maintenance, accelerometers need to deliver on three important parameters. They need low noise (for earlier prediction), high bandwidth (to detect all spectral content and aid in the classification of the fault), and a sufficiently high measurement range. The last is often overlooked; however, the magnitude of acceleration is proportional to the square of the frequency (ω²), and high-frequency spectral content can saturate the sensor if not considered. The new ADXL382 triaxial digital accelerometer meets all three requirements in a compact package. The product has a full-scale range of up to 60 g, 8 kHz bandwidth, and ultralow noise of <55 µg/√Hz.
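To see why the range requirement is easy to underestimate, consider sinusoidal vibration with displacement amplitude X at frequency f; the peak acceleration grows with the square of the frequency (illustrative numbers, not taken from the article):

```latex
a_{peak} = (2\pi f)^{2} X
\qquad
X = 10\ \mu\mathrm{m},\ f = 1\ \mathrm{kHz}
\;\Rightarrow\;
a_{peak} \approx 3.95\times10^{7}\ \mathrm{s^{-2}} \times 10^{-5}\ \mathrm{m}
\approx 395\ \mathrm{m/s^{2}} \approx 40\,g
```

A displacement of only 10 µm at 1 kHz therefore already consumes two-thirds of a 60 g full-scale range.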

Predictive Maintenance in Smart Farming

The last topic is the integration of inertial sensing and AI analytics for predictive maintenance in smart farming. As the scale of modern farms increases, they are having to rely on high-capital-expense machinery for production. This type of equipment must deliver precision operation while undergoing strenuous conditions and the rigors of seasonal farm life. A breakdown during the short planting or harvesting season can have a serious financial impact. For example, precision-controlled instruments, such as seeders or harvesters, often have to operate through rain, wind, dust, mud, rock fragments, and many other environmental hazards. In these environments, changes in key vibration artifacts can offer advanced prediction of problems, which can be addressed through maintenance at times that have minimal impact on peak-demand productivity.

Vibration analysis in machinery (analogous to vital-sign monitoring in livestock) can pinpoint the failure mode and timing of different problems in mechanical elements, such as faulty bearings, axle misalignment, imbalance, looseness, gear faults, and other issues. Consider a bearing defect, such as a chip or any other physical deviation from a perfect spherical shape. This defect will create a bump in the platform every time it contacts the mating surface, resulting in a complex vibration profile that contains both fundamental and broadband content. See Figure 5 for an illustration of a complex vibration profile in spectral terms.
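
To connect the bearing geometry to the spectral peaks sketched in Figure 5, the snippet below evaluates the textbook bearing-defect frequencies (BPFO, BPFI, BSF, FTF) from shaft speed and geometry; the bearing dimensions and speed are made-up example values.

```python
import math

def bearing_defect_frequencies(shaft_hz, n_balls, ball_dia, pitch_dia, contact_deg=0.0):
    """Classic bearing fault frequencies (Hz) from geometry and shaft speed."""
    r = (ball_dia / pitch_dia) * math.cos(math.radians(contact_deg))
    return {
        "BPFO": (n_balls / 2) * shaft_hz * (1 - r),                      # outer-race defect
        "BPFI": (n_balls / 2) * shaft_hz * (1 + r),                      # inner-race defect
        "BSF":  (pitch_dia / (2 * ball_dia)) * shaft_hz * (1 - r ** 2),  # ball spin
        "FTF":  (shaft_hz / 2) * (1 - r),                                # cage (train) frequency
    }

# Illustrative bearing: 9 balls, 7.9 mm ball diameter, 39 mm pitch diameter,
# shaft turning at 1,800 RPM (30 Hz).
for name, freq in bearing_defect_frequencies(30.0, 9, 7.9, 39.0).items():
    print(f"{name}: {freq:6.1f} Hz")
```

In practice a defect shows up at one of these frequencies plus its harmonics and sidebands, which is why the accelerometer needs bandwidth wide enough to capture that content.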

Conclusion

Automation and technology in agriculture address critical global challenges, including food security, labor shortages, and environmental sustainability. By embracing innovations such as AI, robotics, and precision farming, the agricultural sector can enhance efficiency, reduce costs, and ensure a more sustainable future for food production. Inertial sensors have a key role to play in this ecosystem since they provide enabling sense capabilities. However, care must be taken to choose sensors with appropriate fit and function.


TABLE 1: Side-by-Side Comparison Between Ultralow Power and Ultralow Noise Accelerometers
FIGURE 5: Wide bandwidth, spectral representation of common machine faults.

Designing Resilient Edge Systems for the AI Era

As AI adoption accelerates, organizations increasingly need robust, resilient edge systems to support complex workloads. Powering real-time data processing and enabling AI inferencing that improves customer experiences and operational efficiency, edge systems are foundational to digital transformation.

However, designing these systems requires more than traditional IT knowledge. It demands a blend of skills, foresight, and lessons learned from years of IoT evolution.

Specialized Skills for Edge System Design

Successful edge systems must constantly operate at peak performance within constrained, often unpredictable environments. Designing and building edge infrastructure requires a combination of hardware, software, and networking skills with expertise in hardware availability and fault-tolerance, workload virtualization, data governance, and cybersecurity.

While these skills draw upon common IT practices, they are applied differently in edge and AI environments. The edge is typically a unique, heterogeneous environment where limited connectivity, security needs, and scarce on-site IT pose greater challenges. Additionally, although hardware availability and fault-tolerance – meaning 99.999% availability or better – are valued, this is not an area of wide expertise. As a result, there is increasing demand for OEMs, systems integrators, and service providers who are well-versed in these technology nuances and can rapidly deploy and simply manage edge infrastructure at scale.

Preparing For an Edge-Driven Future

Technology and business professionals must prepare for the edge. For most companies, the edge is where products are made and customers are served, making it critical to both customer satisfaction and competitive advantage.

To stay competitive, tech and business professionals need to run highly efficient edge operations that support “always-on” applications and continuous data availability. These low-latency, secure, and value-added operations rely on evolving IT and OT capabilities at the edge that professionals need to track, understand, and implement when appropriate.

For example, AI inference will fuel the need for reliable, powerful edge systems that bolster the accessibility and reliability of AI-generated recommendations and enable widespread automation. Edge AI will become a key piece of tech and business leaders’ digital-first strategy. Learning, understanding, and preparing for the edge is essential.

IoT Lesson Learned: Resilience is Non-Negotiable

To understand how to build these resilient systems, we can draw valuable lessons from years of IoT deployment experience. Businesses and customers expect an IoT experience that offers uninterrupted system and application availability. Let’s face it, we are information-hungry and impatient! All kidding aside, operations teams often have real-time requirements in ensuring safety and efficiency. Yet distributed systems involve thousands of devices operating in unpredictable environments with unreliable connections.

The need to design resilience into the system has been a major IoT lesson. We’ve learned that recovering systems after an outage is not enough. Systems should ensure uninterrupted operations. IoT systems must continue to provide essential services even if operating at reduced capacity due to hardware issues, compromised communications, or other problems. Self-diagnosing, self-repairing systems with automated maintenance capabilities that manage workloads gracefully are key to IoT success.

A related IoT lesson is the constant and real threat of cyberattacks. Real-time monitoring and protection have proven to be essential to mitigating cybersecurity risks. This requires IoT infrastructure purpose-built for these environments that allows IT and OT teams to easily detect, isolate, and eliminate risks before data, operations, or the business is affected. We’ve seen that IT and OT teams can more easily manage these challenges by standardizing on a robust computing infrastructure designed for IoT’s diverse environments.

We’ve also learned that maintaining high-performance distributed systems at scale is a challenge. Remote device management, firmware updates, and troubleshooting across thousands of geographically dispersed edge nodes present logistical challenges. Edge architectures that incorporate automated management, simplified maintenance, and seamless orchestration capabilities alongside predictive failure detection help resource-constrained IT and OT teams maintain large-scale deployments.

These lessons collectively point toward the value of a fault-tolerant edge within an IoT system architecture, which, of course, affects how engineers approach distributed system design.

Business Benefits of IoT: Efficiency, Insight, and Expertise

Operational efficiency and customer satisfaction have been IoT’s most noteworthy impacts. When done right, customers benefit at the same time as businesses streamline processes, increase product quality, and improve service delivery. Business leaders benefit from enhanced insights and actionable intelligence that help them drive better outcomes and open new revenue streams.

IT and operations teams gain real-time monitoring and control of processes, along with automation that augments human operators, allowing them to focus on higher-value tasks. This leads to higher productivity at a lower total cost of ownership, and it adds up to better customer experiences that ultimately help the company win in highly competitive and dynamic markets.

We see these benefits across industries, including manufacturing, retail, and healthcare. In manufacturing, for example, Industrial IoT (IIoT) has been central to predictive maintenance, quality control, and supply chain optimization deployments. Retail has adopted IoT for inventory management, customer analytics, and point of sale. Within healthcare, IoT supports back-office functions for customer service and shows tremendous potential in patient treatment via medical devices.

These IoT benefits underscore why designing and investing in resilient edge systems is critical and why organizations need systems that deliver these advantages at scale.

Build For What’s Next

It’s clear that the edge is more important than ever. It’s a foundation for intelligent, distributed systems. Designing resilient edge systems means applying lessons learned from IoT trends and evolution, embracing the unique demands of the edge environment, and preparing for an increasingly edge-driven AI future.

Organizations that invest in the right skills, strategic partnerships, and infrastructure today will be the ones well-positioned to capture long-term value.

The most demanding applications require the world’s most reliable components. For over 50 years PICO Electronics has been providing innovative COTS and custom solutions for Military, Commercial, Aerospace and Industrial applications. Our innovative miniature and sub-miniature components are unsurpassed in any industry. PICO Electronics’ products are proudly manufactured in the USA and are AS9100D Certified.

To learn more about our products and how you can benefit from our expertise visit our website at picoelectronics.com or call us today at 800-431-1064.

Miniature Designs

• MIL-PRF 27/MIL-PRF 21038 DSCC Approved Manufacturing Facility/US Manufactured

• Audio/Pulse/Power/EMI Multiplex Models Available

• For Critical Applications, Pico Continues to Be the Industry Standard

• Military Upgrades and Custom Modules Available

Feeding the Future: Edge AI in the Field

The agricultural sector is undergoing a major technological overhaul that some are calling the era of Farming 4.0. This agricultural revolution is marked by autonomous machines that carry a raft of sensing and processing devices. These machines collect and analyze data to make decisions in real time that improve productivity, efficiency, sustainability, and cost effectiveness.

Agriculture is increasingly being shaped by AI-powered edge computing systems. Traditional farming equipment such as tractors, combine harvesters, and irrigation systems now carry sensors and processors capable of collecting data, processing it at the edge, and turning those decisions into appropriate and timely interventions. AI-enabled systems check whether crops need more water, if the soil has the right nutrients, or if plants or livestock are being attacked by pests or diseases. Not only do these systems keep farmers informed – they are also capable of finding the right solutions with minimum human input.

AI can also lower the cost and burden of maintaining machines. Predictive maintenance uses machine learning techniques such as anomaly detection to anticipate equipment failures before they occur, based on vibration and audio data collected on the machine. This reduces maintenance costs and minimizes downtime.
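
As a minimal sketch of how such anomaly detection can work on vibration data (assuming only NumPy, and simplifying heavily compared with a production model), one can learn the spectral signature of healthy operation and score new windows against it:

```python
import numpy as np

def spectral_features(window, fs):
    """Normalized magnitude spectrum of one vibration window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def fit_baseline(healthy_windows, fs):
    """Learn mean/std of spectral features from known-healthy machine data."""
    feats = np.array([spectral_features(w, fs) for w in healthy_windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def anomaly_score(window, baseline, fs):
    """Mean absolute z-score of a new window against the healthy baseline."""
    mean, std = baseline
    return float(np.mean(np.abs((spectral_features(window, fs) - mean) / std)))

# Demo with synthetic data: healthy = 50 Hz tone, faulty adds a 720 Hz component.
fs, n = 4000, 1024
t = np.arange(n) / fs
healthy = [np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(n) for _ in range(50)]
baseline = fit_baseline(healthy, fs)
faulty = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 720 * t)
print("healthy score:", anomaly_score(healthy[0], baseline, fs))
print("faulty score: ", anomaly_score(faulty, baseline, fs))
```

A score well above the healthy range would trigger a maintenance alert, with the model small enough to run on the machine itself rather than in the cloud.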

Closer to the Source

In industrial systems, we see data collected from a multitude of sensors being sent to the cloud for processing and analysis, to improve insights and develop longer-term strategies. The principle is the same in agricultural systems, but here, the remoteness of fields and farms makes uploading large volumes of data to the cloud unreliable. Local processing, enabled by Edge AI, solves this issue. On-chip AI capabilities allow for low-latency smart decisions, reducing the need for large volumes of data to be sent to the cloud for analysis. Devices in the form of CPUs, GPUs, dedicated ASICs, and NPUs, many with AI capabilities built in, handle this data locally. Market analysis house Grand View Research forecasts the global market size for edge AI chips to reach USD 120 billion by 2030, up from 16 billion in 2023, growing at a compound rate of 33.9% over that period.

Edge AI applications are powered by embedded compute modules that harbor these AI-enabled processors. Tria Technologies provides a wide range of computer-on-modules (COMs), designed in partnership with various CPU vendors such as AMD, Intel, NXP, Renesas, and more. One notable cooperation is with Qualcomm, allowing Tria to create a new generation of compute modules around Qualcomm’s high-performance Dragonwing and Snapdragon processors, based on the Arm architecture. The latest Tria SMARC modules are applicable to a wide range of applications that meet the needs of smart agricultural systems, providing machine vision, anomaly detection, sensor data gathering and analysis, audio classification, and more.

Embedded Compute Modules for Agriculture

AI-enabled embedded compute boards are highly advantageous for smart agriculture applications, with different options for compact size, ruggedness, flexibility, and powerful computing. These small boards can be embedded in tractors and machinery, running ML and AI models locally. Tria’s portfolio of boards supports multiple cameras, easily adapted for use in agricultural autonomous robots and drones. They also support computationally intense AI applications such as Large Language Models (LLMs) for applications that require natural language processing. This will soon enable machines to respond to verbal communication.

The hardware on the embedded compute boards is built to handle parallel processing to speed up models such as CNNs (convolutional neural networks). The specialized processors can handle these tasks at great speed and at very low power, allowing them to run off batteries or solar power. Tria’s low-power, AI-enabled boards have been used in anomaly detection applications that use a combination of audio and accelerometer data to predict irrigation leaks.

The farming community is currently testing several projects relying on machine vision and ML to detect diseases in plants and livestock. One such program determines which disease a plant is suffering from based on photographs of its leaves. A CNN is trained on an existing dataset of leaf images to identify the disease, resulting in an accuracy of over 96%. The plant disease can be quickly determined, with appropriate measures then taken before the disease spreads.
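
For illustration, a leaf classifier like the one described would typically run on the edge board as a TensorFlow Lite model; the sketch below shows the inference path, with the model file, input size, and label names as placeholders, since the article does not identify the specific network or dataset.

```python
import numpy as np
from PIL import Image

try:
    from tflite_runtime.interpreter import Interpreter   # typical on edge boards
except ImportError:
    import tensorflow as tf                               # fallback on a workstation
    Interpreter = tf.lite.Interpreter

MODEL_PATH = "leaf_disease.tflite"        # placeholder model trained offline
LABELS = ["healthy", "early_blight", "late_blight", "leaf_mold"]  # illustrative labels

def classify_leaf(image_path):
    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Resize to the model's expected input and scale pixels to [0, 1]
    # (assumes a float32-input model).
    _, height, width, _ = inp["shape"]
    img = Image.open(image_path).convert("RGB").resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(probs))], float(np.max(probs))

if __name__ == "__main__":
    label, confidence = classify_leaf("leaf.jpg")   # placeholder image path
    print(f"Predicted: {label} ({confidence:.1%})")
```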

Machine vision is found in agricultural ‘spraying’ projects, too, which use robots and drones to water plants and selectively spray herbicide. These robots function either autonomously, navigating through the fields using sensors, or are manually controlled via apps. One benefit of autonomous farm machinery is that driverless machines can be smaller and lighter, reducing soil compaction for healthier soils that require less tilling. Using machine vision, these systems can accurately pinpoint weeds to spray with herbicide, so that considerably less herbicide is used. Less compacted soil and lower herbicide usage save farmers money, provide more nutritious food, and maintain a healthy environment.

Edge AI Enabled by Tria

Tria prides itself on the benefits it offers its partners and customers. The company designs its boards for applications specified by its partnering companies, based around the capabilities and specifications of their devices. In addition, Tria’s partnership with Avnet (its parent company) ensures stock is available for its customers over a 15-year life cycle, so items don’t become obsolete within that time.

Tria develops and integrates boards and devices into customized systems on behalf of its customers, thus eliminating technical difficulties and lengthy development stages that design engineers undertake when introducing new products – especially now that AI is a key part of these products.

Choosing a processor for an AI-enabled application can be a difficult task. Tria offers a wide array of system-on-modules (SOMs) built around AI-capable processors. It also offers compatible baseboards based on popular standards such as SMARC, as well as reference designs that provide example code for machine learning applications. These enable customers to hit the ground running with AI-on-the-edge projects and to react to new requirements quickly, without the involved, time-consuming, and expensive development stages that increasingly determine the success of their creations.

Discover the Future of AI Inference and Performance

As AI adoption accelerates, IT leaders must deliver infrastructure that meets evolving demands. From chatbots to multimodal apps, AI inference is reshaping performance and cost. This e-book shows how organizations use the NVIDIA AI Inference Platform, Triton Inference Server, TensorRT-LLM, and Grace Blackwell to improve latency, throughput, and cost per token, unlocking new levels of efficiency and experience.

2025 RESOURCE GUIDE

ENET-AIO16-16F Ethernet Multifunction Module

The eNET-AIO16-16F is an ideal solution for adding portable, easy-to-install, high-speed analog and digital I/O capabilities to any computer or Ethernet network. The board is plug-and-play and auto-detecting.

The eNET-AIO16-16F is a 16-bit resolution A/D board capable of sampling speeds up to 1 MHz for its 16 single-ended or 8 differential analog inputs. Each channel can be independently software-configured to accept one of 8 different input ranges. A unique, real-time internal calibration system allows the card to continually compensate for offset/gain errors, giving more accurate readings. Additional features include 16 digital I/O lines and 4 (optional) analog outputs.

The board is driven by embedded Linux running on a dual-core Arm Cortex-A53 @ 1.2 GHz, the TI Sitara AM6422 SoC. It includes two Cortex-R5F cores for critical real-time operations. Full root access is provided via unique model/serial-number-based credentials – you have full control over the system.

FEATURES

Ą Ethernet 10/100/1000 Multifunction DAQ with sustained sampling speeds up to 1MHz

Ą Flexible, software configured functionality for 16 single-ended or 8 differential analog inputs

Ą 8 input ranges, 4 unipolar and 4 bipolar, channel-by-channel programmable, with autocalibration and filtering onboard for accurate data

Ą On Board Intelligence built on IEC 61508-certified TI Sitara AM6422 SoC

Ą Dual-core Cortex-A53 @ 1.2 GHz + 2 Cortex-R5F real-time cores (800 MHz) + Cortex-M4F

Ą 2× PRU-ICSSG subsystems supporting Profinet IRT/RT, EtherNet/IP, TSN, plus dedicated PROFIBUS UART

Ą 1 GB DDR3 RAM, 8 GB eMMC, microSD for expansion and more!

USB3C-104-HUB4C Hub

Rugged, 4-Port SuperSpeed10 USB 3.2 Gen 2 Type C Hub with Locking Connectors

The USB3C-104-HUB4C is an industrial-grade 4-port USB hub optimized for harsh and rugged environments. This hub has latching/locking connectors on upstream and downstream ports as well as power, preventing accidental disconnects – making it perfect for applications that require vibration proofing. The rugged steel enclosure, positive retention connections, and -40°C to +85°C operation make the USB3C-104-HUB4C stand out compared to commercially available hubs – and it’s 100% Made in the USA. Each connection has been designed for rugged use without loose or intermittent cables disrupting your application. The input power is secured via screw terminals or a threaded DC jack. Type C connections utilize USB-standard latching cables. In addition to secure retention connectors, signal integrity is further protected by ESD protection diodes rated to the IEC 61000-4-2 maximum (15 kV contact and air-gap).

https://accesio.com/product/usb3c-104-hub4c/

FEATURES

Ą 4-port USB 3.2 Gen 2 hub with data transfers up to 10 Gbps

Ą One upstream USB C and Four downstream USB C

Ą ESD protection (+/-15kV IEC 61000-4-2 Level 4) on all data lines

Ą Rugged (-40°C to 85°C) operation

Ą Locking connectors prevent accidental disconnects

Ą SuperSpeed+ (10Gbps), SuperSpeed (5Gbps), Hi-Speed (480Mbps), Full-Speed (12Mbps), and Low-Speed (1.5Mbps) transfers supported on all ports

Ą Compact, steel, low-profile enclosure and Made in the USA

 contactus@accesio.com  858-550-9559  linkedin.com/company/acces-i-o-products-inc. @accesio

Hardware

ADL-AI2500

The ADL-AI2500 is an AI edge embedded system built on NVIDIA Orin NX, delivering up to 157 TOPS of performance for mission-critical applications. Its compact chassis ensures reliable operation in demanding environments, while dual NVMe storage and 5G connectivity make it adaptable across industries.

From industrial automation and machine vision, to robotics, smart agriculture, defense & security, transportation, energy, and smart cities, the ADL-AI2500 brings scalable AI compute to the edge. With full customization options for MIL-STD compliance, it is the ideal platform for deploying AI in environments where reliability matters most.

FEATURES

Ą Rich I/O & Expansion

Ą Wide-temp Operation

Ą Rugged, Fanless, Conduction-cooled

support@adl-usa.com

www.adl-usa.com/

 www.linkedin.com/company/adl-embedded-solutions/

Industrial Displays and AD Boards

At Chefree, we provide high-performance industrial display and touch solutions, including embedded PCBs, designed for professionals who demand quality and reliability.

Our core expertise lies in creating highly integrated solutions that simplify complex display systems. We offer a comprehensive range of all-in-one controller boards that integrate an on-board power circuit, an LED driver, and the industry-leading EETI touch controller IC. We also extend this specialized expertise to dedicated driver boards for E-Ink displays.

To further simplify system integration, we've developed a versatile series of scaler boards. These boards effortlessly convert video signals from HDMI, VGA, or DP interfaces into LVDS to power and drive your display panels.

With our intelligent, integrated solutions and versatile connectivity, our customers can achieve sleeker, more streamlined final products, all while enjoying a faster and more efficient development cycle.

Chefree Technology Corp. www.chefree.com

FEATURES

Ą High-Brightness & Wide-Operating-Temperature Industrial Panels with Touch

Ą Customized AD Board Solutions

Ą Open Frame / Type C Monitor Solutions

Ą E-Paper Display Solution

Ą In-House Design to Ensure Superior Quality and Reliability

 sales@chefree.com

 www.linkedin.com/company/chefree-technology-corp/

The Relio™ R1 Edge redefines what edge computing should be: seamlessly powerful, uncompromisingly reliable, and effortlessly dependable. Featuring next-generation Intel processors and versatile connectivity, including dual 2.5 Gigabit Ethernet, 4G/5G LTE, and Wi-Fi 6E, the R1 Edge delivers advanced computing capabilities directly where you need them most.

Built for uncompromising performance in harsh environments, its fanless, solid-state design and strategic thermal management system enable continuous operation in temperatures from -40°C to +71°C.

The R1 Edge's anodized aluminum enclosure and innovative SeaLATCH locking connectors ensure dependable operation even under extreme shock and vibration conditions. Its COM Express architecture enables processor upgrades without system redesign and comprehensive software compatibility supports accelerated deployment.

Relio™ R1 Edge Industrial Computers thrive where other hardware fails.

FEATURES

Ą (1) M.2 4G LTE Cellular slot – 2 Antennas (optional)

Ą (1) M.2 4G/5G LTE Wi-Fi 6E – 2 Antennas (optional)

Ą (2) USB 3.1 SeaLATCH Charging Ports, (2) USB 2.0 SeaLATCH Ports, (1) USB C Port

Ą (2) Video DisplayPort connectors

Ą (2) Full RS-232/422/485 Ports

Ą (2) 2.5 Gigabit (10/100/1000/2500 BaseT) Ethernet Ports

Ą -40°C to +71°C wide operating temperature range

Relio™ R1 Edge

Electron E1

The Electron E1© is the world’s most energy-efficient, general-purpose processor. The Electron E1 enables the next generation of battery-powered devices, achieving accelerator-like efficiency with up to 100x better energy efficiency vs. low-power processors on the market today. The Electron E1 achieves this without software modification, while offering a familiar programming interface and support for a wide variety of applications (e.g., neural networks, digital signal processing, sparse algorithms, graph analytics, compression, and cryptography).

The Electron E1's best-in-class efficiency enables unprecedented capabilities on-device, e.g., sophisticated, local interpretation of sensed data to avoid expensive off-device communication. Moreover, Electron E1’s general-purpose support means that the entire application benefits, unlike an accelerator which targets a portion of the application.

The key to the Electron E1's efficiency is Efficient’s proprietary Fabric© architecture. The Fabric is a general-purpose, ultra-low-power and ultra-efficient dataflow processor with a standard software development flow. Efficient offers a complete compiler stack that is a drop-in replacement for existing compiler toolchains (e.g., GCC/Clang) with support for high-level languages (e.g., C/C++) and frameworks (e.g., TFLite).

Efficient has developed the effcc Compiler in tandem with the Electron E1 processor. The effcc Compiler analyzes program structure – including loops, dependencies, and data reuse – automatically maps tasks onto the Fabric mesh for maximum parallelism and locality, and generates a binary that runs out-of-the-box with no manual tuning required (while still providing hooks for those who want to tinker).

https://www.efficient.computer/electron-e1

FEATURES

• Low voltage: 5.4GOPS (50MHz system clock)

• High voltage: 21.6GOPS (200MHz system clock)

• 4 MB of NVM with DMA support

• 3 MB ultra-low-power SRAM

• 128KB (8KB/bank) cache

• 6x QSPI, 6x UART, 6x SPI, 6x I2C, 72x GPIO, RTC, 1x WDT

• Supply voltage: 1.8V; internal logic voltage: 0.55V-0.8V

• Temperature range: -40°C to 125°C

• Standard BGA package

K3881-C μATX

The New Benchmark in Industrial Server Motherboards

Kontron introduces the K3881-C μATX, the first industrial server motherboard designed and produced in Europe. Purpose-built for small businesses and entry-level cloud services, it delivers the performance, security, and scalability needed for professional-grade server solutions.

Designed for secure remote management, this platform ensures maximum uptime, reliability, and long-term availability – with 24/7 continuous operation and 5+ years guaranteed lifecycle support.

Why Kontron Motherboards?

• Strict lifecycle management & reliable product maintenance

• Excellent technical support

• Beneficial total-cost-of-ownership

Industrial Server Motherboard. Engineered for the Future.

FEATURES

• Industrial Server Class Motherboard supporting Intel® Processors

• Latest DDR5 Memory Technology with ECC Support

• 4x Intel® LAN: 2x 10 GbE + 2x 1 GbE

• 3x PCI Express Expansion Slots

• 5G Kontron Wireless Solutions Ready

Development Module with AMD-Xilinx Artix UltraScale+ FPGA and USB 3.0

The XEM8310-AU25P development module with AMD-Xilinx Artix™ UltraScale+ FPGA offers integrators a turnkey solution with a fast (340+ MB/s) USB 3.0 host interface using the Opal Kelly FrontPanel® SDK. Designed for both prototype/proof-of-concept and production deployment, the highly integrated device includes on-board power supplies, DDR4 memory, and support circuitry in a compact form factor with commercial-off-the-shelf (COTS) availability.

The FrontPanel SDK greatly simplifies hardware / software communication with a comprehensive and easy-to-use API and broad operating system and language support. Opal Kelly SOMs reduce time-to-market, allow teams to focus on their core competencies, and simplify supply chains. Opal Kelly has been ISO 9001:2015 certified since 2019.

Typical applications include:

• Data acquisition

• Test & measurement, instrumentation, and control

• Machine vision / machine learning / AI

• Software-defined radio (SDR)

• Digital communications and networking

• Data security

FEATURES

• AMD-Xilinx Artix UltraScale+ XCAU25P-2FFVB676E

• SuperSpeed USB 3.0 port for high-bandwidth data transfer

• 2 GiB DDR4 (32-bit data interface), 32 MiB QSPI FPGA flash

• 149 FPGA fabric I/O and 12 gigabit transceiver lanes (16.375 Gbps)

• Compact form factor (100mm x 70mm)

• Full FrontPanel SDK and API support

Qualcomm Dragonwing™ QCS6490 Development Kit

The Vision AI-KIT 6490 features an energy-efficient, multicamera, SMARC 2.2 compute module, based on the Qualcomm QCS6490 SOC device.

High-performance cores on the Vision AI-KIT 6490 (Qualcomm Dragonwing™ QCS6490 Development Kit) include an 8-core Kryo™ 670 CPU, Adreno 643 GPU, Hexagon DSP with 6th-gen AI Engine (12 TOPS), Spectra 570L ISP (64 MP/30 fps capability), and Adreno 633 VPU (4K30/4K60 encode/decode rates), ensuring that exceptional concurrent video I/O processing performance is delivered.

A useful subset of the SMARC interfaces is pinned out on the carrier board to support four cameras, two displays, five USB interfaces, CAN-FD, Gigabit Ethernet, and optional high-speed Wi-Fi networking. The audio subsystem includes two PDM microphones, a stereo audio codec, a digital audio interface, and analog audio jack I/O.

A TPM is fitted by default on these SMARC modules, and Wi-Fi/BT modules are offered as assembly options of the QCS6490 SMARC compute module. The carrier board has M.2 slots for NVMe storage and advanced wireless options. Compact 100 mm x 79 mm carrier board dimensions and mounting-hole alignment with similar AI developer boards enable a drop-in capability with many enclosures.

A Qualcomm Linux BSP (Yocto, 6.6 kernel), plus a range of AI/ML and multimedia open-source example applications, is provided. Windows 11 IoT Enterprise and Android support will release in 1H25. (Ubuntu Linux for QCS6490 may be supported at a later date.)

FEATURES

• Based on the Qualcomm QCS6490 SOC

• 4x Arm Cortex-A78 (@ up to 2.7 GHz)

• 4x Arm Cortex-A55 (@ 1.9 GHz)

• GPU: Adreno 643 (@ 812 MHz)

• DSP/NPU: 6th gen Qualcomm AI Engine (12 TOPS)

• VPU: Adreno 633, video enc/dec to 4k30 / 4K60

• 2x USB 3.1, 2x USB 2.0, 1x USB-C OTG

• 2x PCIe Gen3 (1L), 1x PCIe Gen3 (2L) interface

• SMARC 2.2 edge connector (314 pin)

• Operating Temperature: -25°C to +85°C

• Full customization available

https://www.tria-technologies.com/product/vision-ai-kit-6490/

TRIA Vision AI-KIT 6490

TRIA C6C-RYZ8

The COM Express® Type 6 module TRIA C6C-RYZ8 is ideal for applications like medical imaging, gaming platforms, AI-enhanced solutions at the edge, and security appliances. With the on-chip XDNA Neural Processing Unit, it is the right fit for enabling AI applications that run on the local system, independently of a cloud-based approach. Applications also gain outstanding compute performance from up to eight CPU cores and six RDNA3 work group processors integrated into the AMD Ryzen™ Embedded 8000 Series Processor. The module provides up to four independent display pipes with high-resolution, highest-level graphics and video encoding/decoding acceleration. The Type 6 pinout allows direct access to high-bandwidth display output from interfaces like eDP, LVDS, DisplayPort, and HDMI. Further high-bandwidth interfaces include USB 3.2/2.0, PCIe Gen 3/4, and 1/2.5Gb Ethernet.

https://www.tria-technologies.com/product/tria-c6c-ryz8/

FEATURES

Ą COM Express® Type 6, Compact module

Ą AMD Ryzen™ Embedded 8000 Series Processor

Ą Up to eight cores/sixteen threads

Ą Up to 96GB DDR5-5600 SDRAM, dual channel, ECC

Ą Up to six RDNA3 work group processors (WGP)

Ą Four independent display streams

Ą Long-term product availability

www.linkedin.com/company/triatechnologies/

The TRIA HMM-RLP module, featuring the COM-HPC Mini form factor, is ideal for system designs that require outstanding performance and flexible I/O connectivity in the smallest possible space. The module is perfect for applications such as automation solutions, drone and robot controllers, rugged HMI platforms, compact medical units, measurement equipment, and transportation.

Based on the 13th Gen Intel® Core™ processor, system designers can pick from a variety of power-efficient and performant module options. The architecture scales up to fourteen cores and twenty threads at 35W thermal design power (TDP). For applications that need lower power dissipation, selected processor variants can be operated down to 12W TDP.

https://www.tria-technologies.com/product/tria-hmm-rlp/

FEATURES

Ą COM-HPC Mini module

Ą 13th Gen Intel® Core™ processors

Ą Scalable CPU performance, up to fourteen cores, twenty threads

Ą Industrial grading options (Intel® TCC, TSN, IBECC, ext Temp., 24/7)

Ą LPDDR5-6000 main memory, memory-down, up to 64GB, In-band ECC option

Ą Intel® Iris® Xe architecture Graphics, up to 96 EUs

TRIA HMM-RLP

MaaXBoard OSM93 features an NXP® i.MX 93 System on Chip compute module, with an integrated AI/ML NPU accelerator, EdgeLock security enclave, and Energy Flex architecture that supports separated processing domains: the application domain with two Arm® Cortex®-A55 (1.7 GHz) cores, the real-time domain with an Arm® Cortex®-M33 (250 MHz) core, and the Flex domain with an Arm® Ethos-U65 NPU (1 GHz). Other resources on the fitted MSC OSM-SF-IMX93 solder-down module include eMMC (16GB) memory, LPDDR4 (2GB, 3.7 GT/s) with inline ECC support, an RTC clock, and an NXP PCA9451 PMIC.

The Raspberry Pi form-factor carrier SBC board adds QSPI flash memory (16Mbit) plus connectivity and UI interfaces. High speed interfaces include four USB 2.0 interfaces (2x host type A, 1x host type-C, 1x device type-C), MIPI DSI display and MIPI CSI camera interfaces, two 1 Gbps Ethernet ports and two high-speed CAN interfaces.

https://www.tria-technologies.com/product/maaxboard-osm93/

Tria Technologies www.tria-technologies.com

Tria@avnet.com

Ą Develop with SBC board, use same OSM on custom board

Ą Compact size, versatile connectivity, multiple power domains

Ą 4x USB interfaces: 2x type-A host, 1x type-C host, 1x type-C device

Ą 2x 1 Gbps Ethernet, 2x CAN interface

Ą M.2 module connector for optional tri-radio module

Ą MIPI CSI and MIPI DSI interfaces, 2x PDM microphones

Ą HAT 40-pin, ADC 6-pin and audio 6-pin expansion headers

Ą -40°C to +85°C industrial temperature rating

800-408-8353  www.linkedin.com/company/triatechnologies/

The TRIA SM2S-G3E SMARC 2.2 module family is equipped with the latest cost-effective and power-efficient RZ/G3E processors, manufactured by Renesas. The processor integrates up to four Arm® Cortex®-A55 cores, a dedicated Arm Cortex-M33 real-time core, a powerful Arm Mali™ GPU, and a 4K-capable VPU. The RZ/G3E is the first processor in the RZ/G family with an integrated Arm Ethos™-U55 NPU, helping developers build powerful, cost-effective, and energy-efficient machine learning (ML) applications at the edge. The typical design power ranges from 3 to 5 W.

TRIA SM2S-G3E provides fast LPDDR4 memory technology with inline ECC support, combined with up to 256GB eMMC Flash memory and high speed interfaces such as Dual Gigabit Ethernet, USB 3.2 Gen 2 and PCI Express Gen 3. Various standard interfaces for embedded applications such as dual CAN-FD, dual-channel LVDS or MIPI DSI, HDMI and MIPI CSI for connecting a camera are available.

Tria Technologies

www.tria-technologies.com/

 tria@avnet.com

FEATURES

Ą Quad/Dual Core Arm Cortex-A55 up to 1.8GHz

Ą Arm Cortex-M33 Real Time Processor up to 200MHz

Ą Arm Ethos™-U55 Neural Processing Unit (NPU) up to 0.5TOPS

Ą Arm Mali™-G52 Graphics Processing Unit (GPU) up to 30GFLOPS

Ą Video Processing Unit up to 4k decode/encode

Ą Up to 8GB LPDDR4 SDRAM with inline ECC

Ą Up to 256GB eMMC Flash

Ą Dual Independent Display support

https://www.tria-technologies.com/product/tria-sm2s-g3e/

800-408-8353

 https://www.linkedin.com/company/triatechnologies/

TRIA SM2S-G3E
TRIA MaaXBoard OSM93

TRIA SM2S-IMX95

The TRIA SM2S-IMX95 SMARC module family is powered by the latest i.MX 95 Applications Processors, manufactured by NXP. The processors integrate up to six Arm Cortex-A55 cores, dedicated Arm Cortex-M7 and M33 real-time processors, an immersive Arm Mali™ GPU, and a 4K-capable VPU, combined with EdgeLock® secure enclave security. The i.MX 95 family is the first i.MX applications processor family to integrate NXP’s eIQ® Neutron neural processing unit (NPU) and a new image signal processor (ISP) developed by NXP, helping developers to build powerful, next-generation edge platforms.

TRIA SM2S-IMX95 provides fast LPDDR5 memory technology with inline ECC support, combined with up to 256GB eMMC Flash memory and high speed interfaces such as 10 Gigabit Ethernet, Dual Gigabit Ethernet, USB 3.0 and PCI Express Gen. 3. Various standard interfaces for embedded applications such as CAN-FD, dual-channel LVDS or MIPI DSI, HDMI and MIPI CSI for connecting a camera are available. An onboard Wireless Module is provided as assembly option.

The module is compliant with the new SMARC 2.2 standard, allowing easy integration with SMARC baseboards. For evaluation and design-in of the SM2S-IMX95 module, Tria provides a development platform and a starter kit. Support for Linux is available (Android support is available on request).

FEATURES

• NXP® i.MX 95 Applications Processor

• Hexa core Arm® Cortex®-A55 up to 2.0GHz

• Dual Arm® Cortex-M7 / M33

• NXP® eIQ® Neutron Neural Processing Unit.

• NXP® Image Signal Processor

• Arm Mali™ Graphics Processing Unit

• Video Processing Unit up to 4k decode/encode

• Up to 16GB LPDDR5 and 256GB eMMC Flash

• LVDS, MIPI, HDMI Interface

• PCIe Gen. 3, USB 3.0

• Full customization available

https://www.tria-technologies.com/product/tria-sm2s-imx95/

TRIA XRF16 RFSoC Gen3 SOM

The Tria XRF16™ RFSoC System-on-Module is designed for integration into deployed RF systems demanding small footprint, low power, and real-time processing. The XRF16 features the AMD Zynq™ UltraScale+™ RFSoC Gen3 ZU49DR, with 16 RF-ADC, 16 RF-DAC channels, and 6GHz RF bandwidth.

Combine the production-ready XRF16 module with the XRF16 Carrier Card and Avalon™ software suite to jumpstart proof-of-concept and application development. Then deploy your system with the same XRF16 module used for proof-of-concept. Example code and tutorials demonstrate AMD RFSoC multi-tile sync (multi-converter sync) and multi-board synchronized analog capture.

Target applications include:

1. Phased Array Radar

2. 5G Massive MIMO

3. Hybrid Beamforming

4. Signal Detection & Jamming

5. Quantum Computing

6. Multi-Channel RF Instrumentation

FEATURES

Ą AMD Zynq UltraScale+ RFSoC ZU49DR

Ą 16x ADCs, 14-bit up to 2.5 GSPS

Ą 16x DACs, 14-bit up to 9.85 GSPS (10 GSPS available)

Ą Ultra-low jitter programmable sampling clocks

Ą High-Speed Data Transfer – 16x ultra-fast AMD GTY serial transceivers

Ą 4” x 5” footprint

Ą Industrial temperature rated

https://www.tria-technologies.com/product/xrf16-xilinx-rfsoc-gen3-system-on-module/

Tria Technologies www.tria-technologies.com

Tria@avnet.com

www.linkedin.com/company/triatechnologies/

800-408-8353

Viking Technology’s BGA SSD is an embedded solid state drive (eSSD) solution designed and optimized for a wide range of embedded/industrial applications. This BGA SSD leverages advanced NAND technologies and NAND geometry (iTLC) to deliver high capacity drives in the densest BGA package (20mm x 16mm).

This SSD-On-Chip is built with embedded OEMs in mind, supporting standard PCIe and SATA as well as legacy PATA interfaces, with ultra-reliability and security features such as Opal 2.0, Opalite 1.0, and Pyrite 1.0. The drive is optimized to enable high-throughput transfer rates, with optional embedded DRAM to enhance data storage efficiency and high random read/write IOPS.

Viking’s BGA SSD solution incorporates full data error detection with recovery engines to provide enhanced data integrity throughout the entire Host-to-NAND-to-Host data path.

FEATURES

Ą Ultra high NAND storage density per cu. in.

Ą Security features: Opal 2.0, Opalite 1.0, and Pyrite 1.0

Ą Supports PCIe, SATA & legacy PATA interfaces

Ą Very small footprint: Saves up to 80% board space vs. Standard 2.5in. SSD

Ą Rugged: Soldered-down BGA eSSD

Ą Commercial & Industrial (-40°C to +85°C) temperature range

Ą Full End-to-End data path protection with recovery algorithms

www.vikingtechnology.com/military-memory-storage-solutions/bga-ssd/

BGA SSD

Solutions

VME and VME64x, CompactPCI, PXI, or OpenVPX chassis are available in many configurations from 1U to 12U and 2 to 21 slots, with power options up to 1,200 watts. Dual hot-swap is available in AC or DC versions, and Vector’s redundant power supply design supports N+1 configurations for high-reliability applications. All chassis support a wide range of backplane types and are built in-house in the USA, ensuring tight process control, high quality, and fast turnaround.

Our platforms are designed with flexibility to accommodate high-speed signaling requirements, rear I/O capability, and thermal strategies suitable for both air- and conduction-cooled environments. Solutions can be tailored to meet demanding electrical, mechanical, and thermal constraints depending on system requirements. Chassis designs support hot-swappable fans, plug-in power modules, and filtered airflow, and meet UL, FCC, and IEEE 1101.10/11 compliance standards. All systems are fully assembled, wired, and tested in-house before shipment.

Series 2370 chassis offer the lowest profile per slot. Cards are inserted horizontally from the front, with options for 80mm rear I/O. Chassis are available from 1U, 2-slot up to 7U, 12-slot configurations for VME, CompactPCI, PXI, or OpenVPX systems.

Series 400 enclosures feature side-filtered air intake and rear exhaust, supporting up to 21 vertical cards. Power options include hot-swappable AC or DC supplies with embedded system monitoring for voltage and temperature.

Series 2151 is a rugged 5U chassis supporting up to 8 slots of 3U x 160mm Eurocards with 80mm rear I/O. It accommodates various backplane architectures and can be configured for either forced-air or conduction cooling.

Integration services are available, including system-level assembly, custom wiring, and validation.

We are also the premier supplier of chassis accessories, including custom front panels, card guides, handles, filler panels, air blockers, card extenders, and more – all engineered to meet your integration needs and most items ship from stock.

FEATURES

• Most rack accessories ship from stock

• Modified ‘standards’ and customization are our specialty

• Card sizes from 3U x 160mm to 9U x 400mm

• System monitoring option (CMM)

• AC or DC power input

• Power options up to 1,200 watts

cPCI, PXI, OpenVPX, VME – Custom Chassis

Daedalean AI Accelerator (DAIA)

Flexible, Compact, Efficient Combo OLT

The Kontron Lumia C16 is a super compact 16-port Combo OLT for XGS-PON, GPON, and 10GE P2P fiber access. With record-low energy consumption, it helps operators cut costs, support sustainability goals, and reduce environmental impact.

Designed for smaller fiber-access deployments, the Lumia C16 with ultra-dense user support delivers reliable performance for both residential and business customers. Its dual-nature design enables operators to virtualize central-office infrastructure without any hardware upgrade – securing a smooth, cost-effective path to nextgeneration broadband.

FEATURES

Ą 16 Combo ports serving up to 4,096 users

Ą Record-low power consumption

Ą –40 °C to +65 °C temperature resilience

Ą Compact 1U, 235 mm ETSI-compliant depth

Ą Dual-nature OLT – conventional or virtualized operation

Ą AC or DC power supply options

Kontron www.kontron.com

Info.americas@kontron.com

www.linkedin.com/company/2661537

High-Performance, High-Density Combo OLT

Scalable. Flexible. Future-Proof.


Kontron www.kontron.com

(888) 294-4588

FEATURES

Ą Up to 400 Gbps per subscriber blade

Ą 600 Gbps uplink connectivity with dual-unit stacking

Ą Seamless transition to virtualized fiber access

Ą –40 °C to +65 °C industry-leading temperature range

Ą Flexible 14-slot shelf design for any capacity

Info.americas@kontron.com

www.linkedin.com/company/2661537

Kontron Lumia T14

Industrial DRAM Module Solutions Tailored for Embedded Computing

Apacer offers industrial DRAM module solutions specifically tailored to meet the unique demands of industrial applications. The product lineup spans DDR5, DDR4, DDR3, DDR2, and DDR technologies, and is available in various form factors including UDIMM, SODIMM (with or without ECC), and RDIMM.

Engineered for reliability, endurance, and performance, these purpose-built DRAM modules deliver seamless operation in even the most demanding environments – supporting everything from heavy workloads to mission-critical tasks, and helping drive long-term business success.

FEATURES

Ą Anti-sulfuration: Apacer’s patented anti-sulfuration memory adopts exclusive, improved alloy materials to replace the normal electrode and ensure anti-sulfuration performance.

Ą Fully lead-free: the world’s first memory modules to use 100% lead-free components, including resistors, without relying on the RoHS 7(c)-I exemption.

Ą Wide temperature: designed with wide-temperature support to ensure reliable operation in extreme temperatures ranging from -40°C to 85°C.

Ą Very low profile: the height is only 18.75 mm (0.738")

https://www.apacer.com/en/product/industrial-product/industrialsearch/industrial_dram/embedded_memory

Apacer Memory America Inc. www.apacer.com

ssdsales@apacerus.com

https://www.linkedin.com/company/apacer/

Industrial SSD Solutions Tailored for Embedded Systems

Apacer specializes in manufacturing industrial-grade SSDs with a wide range of interfaces and form factors, specifically designed to meet the demands of various vertical application markets. In the industrial sector, Apacer is committed to understanding customer needs, staying ahead of market trends, and continuously advancing its R&D capabilities for embedded SSD storage and related hardware and software solutions. Customized solutions are also available to meet specific project needs.

https://www.apacer.com/en/product/industrial-product/ industrialsearch/industrial_ssd/industrial_card

Apacer Memory America Inc. www.apacer.com

408-518-8699

Memory and Storage

FEATURES

Ą PCIe/NVMe: M.2 2230/2242/2280, CFexpress, U.2

Ą SATA: 2.5” SSD, M.2 2242/2280, mSATA, MO-297, 7-Pin module

Ą Card Type: microSD, SD, CF, CFast

Ą Embedded solutions: eMMC, uSSD (available with both PCIe/NVMe and SATA interfaces)

Ą Anti-sulfuration: ANSI/ISA 71.04 G3 Compliance

Ą CorePower: a hardware-based technology designed to prevent data loss

Ą CoreSnapshot: a firmware-based instant backup and recovery technology

Ą SLC-liteX: based on 3D NAND architecture, features enhanced reliability through customized firmware

Ą Tailored Temperature Solutions: Standard Temperature, Commercial Extended Temperature, Wide Temperature

 ssdsales@apacerus.com

 https://www.linkedin.com/company/apacer/

408-518-8699

Secure with Kigen eSIM modules

Secure with Kigen: Delivering the promise of Embedded IoT scale with new eSIM standards

Extensively tested for interoperability, Secure with Kigen modules and chipsets enable embedded designers to create globally connected, secure, and interoperable IoT eSIM products at scale. Secure with Kigen is your gateway to modules designed to help OEMs and enterprises expand IoT products that comply with the new IoT eSIM standard, known as GSMA SGP.32.

Discover over 50 leading modules with ready Evaluation Kits available today to transition from concept to comprehensive solution testing across NB-IoT, CAT-M, Cat-1bis, LTE-M, 5G, and satellite network connectivity. Available with development resources and tools tailored to your preferences, this portfolio of compact, IoT-optimized, and scalable options reflects a joint vision of the ecosystem to bring the latest technologies and standards to your fingertips.

Begin your product development with reliable, pre-integrated IoT modules or choose compliant chipsets and platforms for your trial. Connectivity is offered through over 200 global partners, or you can bring your own (BYOC). Kigen’s eIM (eSIM IoT Remote Manager) and eSIMs support both in-eUICC (IPAe) and in-device (IPAd) IoT Profile Assistants.

Kigen’s SGP.32-compliant eSIMs support multiple operator profiles and effortless network switching, enabling IoT device and sensor makers to accelerate deployments, enhance customer experiences, and keep devices securely connected throughout their lifecycle.

Designed for OEMs and Enterprises

Choose trusted, pre-integrated modules with compliant chipsets to streamline product development, selecting the manufacturer, connectivity technology, and evaluation kit that best serve your needs.

Collaborate with us to launch new connected IoT products that are market-ready.

FEATURES

• Seamless integration with leading IoT modules and chipsets

• Out-of-the-box compatibility reduces development timelines

• Streamlined GCF/PTCRB certification with solutions pre-integrated

• Access to Kigen’s partnerships with 200+ connectivity providers

• SAS-SM accreditation with robust security for large-scale deployments

• Kigen eSIMs support both IoT Profile Assistant in eUICC (IPAe) and in-device (IPAd)

• GSMA SGP.32 compliant and compatible with Kigen eIM (eSIM IoT Remote Manager)

https://kigen.com/solutions/secure-with-kigen/

UDE® – Multicore Debugger for MCUs / Embedded MPUs

UDE® Universal Debug Engine for Multicore Debugging is a powerful development platform for debugging, testing and system analysis of microcontroller software applications. UDE® enables efficient and convenient control and monitoring of multiple cores for a wide range of multicore architectures within a single common user interface. This makes it easy to manage even complex heterogeneous multicore systems.

UDE® supports a variety of multicore microcontrollers and embedded multicore processors including Infineon AURIX, TRAVEO, NXP S32 Automotive Platform, STMicroelectronics Stellar, Renesas RH850, R-Car, Synopsys ARC, RISC-V and others.

The UAD2pro, UAD2next and UAD3+ devices of the Universal Access Device family provide the hardware basis for the powerful functions of UDE® and enable efficient and robust communication with the supported architectures and controllers. With its flexible adapter concept, the Universal Access Device family supports all commonly used debug interfaces.

FEATURES

Ą Debugging of homogeneous and heterogeneous 32-bit and 64-bit multicore MCUs and embedded MPUs

Ą Synchronous break, single step and restart for multicore systems

Ą One common debug session for complete system / all cores

Ą Convenient debugger user interfaces supporting multi-screen operation and perspectives

Ą Support for special cores including GTM, HSM, ICU, PPU, SCR

Ą Software API for tool automation and scripting

Ą AUTOSAR support and RTOS awareness

The UDE® Universal Debug Engine is the perfect tool for runtime analysis and testing of embedded multicore applications. With support for on-chip tracing, it offers comprehensive features for non-intrusive debugging, runtime observation, and runtime measurement. This helps developers to investigate, e.g., timing problems or misbehavior caused by parallel execution.

Used in conjunction with the UAD2next and UAD3+ devices from the Universal Access Device family, the UDE® enables trace data to be captured from various trace sources and via external trace interfaces. Trace modules for the UAD2next or trace pods for the UAD3+ are provided for this purpose.

UDE®'s debugging and tracing capabilities, coupled with a flexible and open API for scripting and integrating with third-party tools, make UDE® an ideal choice for automated testing on real target hardware. During the execution of automated tests, UDE® can also determine the Code Coverage to validate the quality of the test cases that are being used.

FEATURES

Ą Multicore trace support for various on-chip trace systems (incl. MCDS/miniMCDS for Infineon AURIX / TriCore, IEEE-ISTO 5001 Nexus for NXP MPC5xxx, STMicroelectronics SPC5, Arm CoreSight for Arm Cortex A/R/M based devices, Renesas RH850 on-chip trace)

Ą Visualization and analysis of recorded trace information (execution sequence chart, call graph visualization, profiling)

Ą Trace-based, non-intrusive statement coverage (C0 coverage) and branch coverage (C1 coverage) for proving test quality

Ą UDE SimplyTrace® function for easy and context-sensitive trace configuration

Ą Open and flexible software API for scripting and test automation

UDE® – Trace and Test for MCUs / Embedded MPUs

Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.

Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.

Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.
