AutomationDirect carries a full line of AC drives, from basic micro drives to full-featured high-performance drives boasting flux vector control and built-in PLCs. So, no matter the application or environment, AutomationDirect has an affordable drive solution for you!
Micro VFDs
Starting at $138.00
With sizes as small as 55mm wide, these drives provide the needed motor speed control without taking up large amounts of panel space.
General Purpose VFDs
Starting at $113.00
General purpose drives offer great value for a wide variety of applications including conveyors, pumps, fans, HVAC systems, and elevators.
High Performance VFDs
Starting at $281.00
High-performance AC drives are top-of-the-line drives that are usually specified when a high degree of precision in speed control is required or when full torque is needed at very low or zero speeds.
Washdown VFDs
Starting at $243.00
These NEMA 4X, washdown-duty drives are built to withstand harsh environments including food and beverage processing and water treatment facilities.
Price and purchase at: www.automationdirect.com/ac-drives
Starting at only $213.00
Cutting EDGE control you can actually afford
The BRX PLC has advanced features that allow it to easily take on the role of an edge computing device—gathering, refining, and delivering control data to upstream IT collection and analysis.
Embedded Web Server
REST API
Must-have IIoT Protocols
Intelligent Code Execution
With BRX’s embedded web server, you can instantly access system status and diagnostic information, and monitor memory usage from any Internet-ready device.
Robust task management and a variety of interrupt styles make task prioritization simple.
Extensive Instruction Set
The integrated REST API and secure HTTPS protocol allow BRX to work with flow control tools like Node-RED® to supply high-level IT systems with the plant-floor data they need.
BRX controllers connect to IIoT platforms and cloud services via a selection of industry-standard protocols, including OPC UA, MQTT(S) (with Sparkplug B for structured data), and FTP for file transfer. These capabilities enable integration with asset management and IIoT platforms, such as Microsoft Azure® and IBM Watson®.
Discrete, process, and multi-axis motion control instructions help support even the most complex applications, executed with familiar ladder logic programming.
Powerful Math Functions
Scripted math and algebra enable rich data pre-processing right at the edge.
48 VDC Expansion I/O
BX-P-OPCUA
BX-P-SPARK
Pluggable Option Modules (POMs)
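The "scripted math" claim above — pre-processing data at the edge before it goes upstream — can be illustrated with a short, generic sketch. This is not BRX code or an AutomationDirect API; all names and the 4-20 mA range are illustrative assumptions. It smooths a raw signal with a moving average, then scales it to 0..1 before an upstream publisher would pick it up.

```python
from collections import deque

class EdgePreprocessor:
    """Illustrative (hypothetical) edge-side pre-processing: smooth a
    raw sensor signal with a moving average, then min/max-scale it
    before it is handed to an upstream IIoT publisher."""

    def __init__(self, window=5, lo=4.0, hi=20.0):
        self.samples = deque(maxlen=window)  # rolling window of raw reads
        self.lo, self.hi = lo, hi            # assumed 4-20 mA input range

    def push(self, raw):
        self.samples.append(raw)
        avg = sum(self.samples) / len(self.samples)
        # min/max scale the smoothed value into 0..1
        return (avg - self.lo) / (self.hi - self.lo)

pre = EdgePreprocessor(window=3)
for ma in (4.0, 12.0, 20.0):
    scaled = pre.push(ma)
print(round(scaled, 3))  # mean of 4, 12, 20 mA is 12 mA -> 0.5
```

The point of doing this at the controller, as the copy suggests, is that only the refined value crosses the network, not every raw sample.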
41 | New sensors, expert services yield 4 benefits
COVER: Industrial automation requires collecting and analyzing information to make better decisions. Automation, controls and instrumentation in this issue are helping with that. Courtesy: Endress+Hauser
INNOVATIONS
44 | New Products for Engineers, www.controleng.com/products
VOTE: 2026 Control Engineering Product of the Year https://www.controleng.com/voting-is-open-for-the-2026-controlengineering-product-of-the-year-program
Sensor platform for fill, motion, material monitoring; Software-defined radio for radar, satellite, 6G research; Modular I/O; Distributed control system; Surge devices; Modern control system with easier integration; Active alignment; Pressure data; I/O setup simplified; Hybrid solution for electrification.
46 | Back to Basics: Automated condition monitoring in the IIoT era
More wireless sensors, cost-effective and rugged, are helping.
SUBSCRIBE
Insights for automation professionals
Control Engineering experts cover automation, control, and instrumentation technologies for automation engineers who design, integrate, implement, maintain and manage control, automation, and instrumentation systems, components and equipment to do their jobs better across process and discrete industries.
Recent newsletters
• March 19, Motors & Drives: Motor controls, VFDs, retrofits, safety, top products
• March 16, Monthly Top Picks: Control Engineering hot topics, automation tips, PID, cybersecurity, top products
• March 5, AI & Machine Learning: Connected OT security, machine vision, lifecycle-ready AI
• March 3, Process Instrumentation & Sensors: PID spotlight 23, APC, metrics, software, more
Choose from among 12 topical newsletters. Trusted newsletter topics you need at: https://www.controleng.com/newsletter-subscribe
Control Engineering eBook series
Get the topical collection you need.
• Motors & Drives, March 16
• Mechatronics & Motion Control, March 13
• Control Systems, Feb. 26
• IIoT Cloud, Feb. 19
• Digital Transformation, Feb. 16
Ready to download now at www.controleng.com/ebooks
Control Engineering digital edition www.controleng.com/magazine
Global System Integrator Report
Views, System Integrator of the Year, System Integrator Giants, seven tutorials
Yokogawa empowers a world where industry and sustainability move forward in sync and in harmony. Through the transformation of data into action, we co-innovate with our customers to connect and orchestrate systems of systems, enabling smarter, safer, and more sustainable operations. Together, we advance autonomy, protect our planet, and realize human potential for generations to come.
Learn how we harmonize people and progress
Latest automation mergers, February 2026: AI, robotics, sensing
Automation mergers, acquisitions and investments include industrial automation, artificial intelligence, robotics, motion control, process sensing and other technologies. Bundy Group, an investment bank and advisory firm that specializes in the automation segment, provides an update on mergers and acquisitions and capital placement activity for this industry, with 19 transactions in the February 2026 report. See more online.
• CapitalG, Valor Equity Partners and Atreides Management invested in Bedrock Robotics, Feb. 26, providing $270 million in Series B funding. The capital accelerates deployment of autonomous robotics in construction environments.
• Baird Capital invested in Rapid Energy, Feb. 25, a provider of industrial temperature controls, supporting growth in critical industrial infrastructure.
• Siemens acquired Canopus AI, Feb. 19, to integrate AI-based metrology into semiconductor manufacturing, strengthening analytics capabilities.
• Engine Ventures, IAG Capital and others invested in Trener Robotics, Feb. 10, providing $32 million in Series A funding. It is expected to accelerate deployment of AI-based skills for industrial robots and software-defined control.
• Tavoron acquired DP Technologies Group & DP Brown of Saginaw, Feb. 9, providers of electrical and mechanical motion control systems such as drives, linear motion, motors, gear reducers, sensors, and human-machine interfaces (HMIs).
• Invio Automation acquired Calvary Robotics, Feb. 2, a custom automation and robotics integrator.
Clint Bundy is managing director, Bundy Group. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
Search on Bundy at www.controleng.com for more merger and acquisition news.
INTERACT ANALYSIS projects continued growth in the high-efficiency motor market from 2026 to 2030. The firm projects 2026 revenue growth of 15%, following a slowdown in 2025 as weaker global macroeconomic conditions reduced demand. Demand in segments such as data centers, along with regulations, particularly in Europe, is contributing to growth. Interact Analysis estimates a 12% CAGR from 2024 to 2030, with the market increasing from $1.6 billion in 2024 to $3.1 billion in 2030. The Americas accounted for 19.4% of total revenue in 2024 ($312.9 million), projected to reach $600.5 million by 2030. Revenue is increasing, but limited regulation has slowed adoption.
Edited by Puja Mitra, WTWH Media, for Control Engineering, from an Interact Analysis news release.
Manufacturing eyes growth in 2026 amid uncertainty
INTERACT ANALYSIS says the manufacturing industry faces an uncertain 2026. The market intelligence firm said the year’s performance will depend on global events and tariffs. Its latest manufacturing output forecast says that while the Americas and Asia grew in 2025, inventories are expected to normalize in 2026 and investment conditions may improve, with modest recoveries expected in regions emerging from downturns. The manufacturing output forecast for 2026 is slightly better than earlier predictions, with trade, technology and investment helping. The sector still faces challenges from geopolitical tensions, conflicts, tariffs and protectionist policies, though industrial demand remains relatively stable. AI sectors expect growth.
Edited by Puja Mitra, for Control Engineering, from an Interact Analysis news release.
The Next Step in Secure Remote Access
Can an incident 'impact score' help cut through the OT cyber hype?
When headlines scream about cyberattacks on water systems or pipelines, how does the general public know whether the incident was catastrophic or relatively minor? That question framed an interesting panel discussion at the S4x26 industrial cybersecurity conference, where industry leaders proposed a new way to measure and communicate the real-world impact of OT cyber incidents.
“A question,” moderator and S4 founder Dale Peterson began. “How does your mom or your congressperson know the impact of an OT cyber incident? Whether it’s trivial, minor or huge?” Peterson argued that the cybersecurity community bears responsibility for amplifying fear, uncertainty and doubt, or FUD, as many call it.
“These stories just don’t pop out of nowhere,” he said. “A lot of times … maybe even some of you in this room contact the press and say, ‘This is a huge incident.’ So since we are largely responsible for the FUD, I think it’s incumbent upon us to also find a way to reduce it.”
Richter scale for OT
The proposed solution is an OT incident impact score, a simple 0–10 rating designed not for cybersecurity experts, but for the general public.
“It has to be easy to understand,” Peterson said, like a zero to 10 score. Unlike existing post-incident analyses that can take weeks or months, the goal is speed. “This has to come out within 12 hours,” he said, suggesting a crowdsourced model where vetted OT professionals quickly score incidents based on severity, reach and duration.
The formula is straightforward: Rate each category from zero to 10, multiply the three numbers together, divide by 100 and produce a single score. Peterson displayed early examples that he had scored. Colonial Pipeline, he said, would score a 3.9. The Clorox cyber incident, which disrupted operations but had limited broader reach, a 2.6. A widely reported Texas water tank overflow in Muleshoe would be a 0.0.
“That’s one person’s view of the score,” Peterson acknowledged. “But now I hope you’re ready to score some incidents.”
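The arithmetic Peterson described can be written out directly. The component ratings below are hypothetical, chosen only to show the mechanics; they are not the panel's actual figures for any incident.

```python
def impact_score(severity, reach, duration):
    """OT incident impact score as described at S4x26:
    rate each category 0-10, multiply the three, divide by 100."""
    for v in (severity, reach, duration):
        if not 0 <= v <= 10:
            raise ValueError("each category must be rated 0-10")
    return severity * reach * duration / 100

# Hypothetical component ratings -- illustrative only.
print(impact_score(10, 10, 10))  # worst case across all three -> 10.0
print(impact_score(5, 2, 0))     # zero duration zeroes the score -> 0.0
```

Because the categories multiply, a zero in any one of them — as in the Muleshoe overflow example — drives the public-facing score to 0.0, which is exactly the "don't worry, Mom" behavior the panel wanted.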
Impact score perspective
Robert Hanson, associate program leader for national security infrastructure at Lawrence Livermore, drew on his experience preparing for national-level incidents. He said an objective score could help cut through noise and give people a clearer picture of what’s actually going on.
With this article online, see a sample scoring of the Colonial Pipeline cybersecurity incident.
“There’s a lot of noise in the space right now,” he said. “[Emergency managers] are not cyber experts, so they can’t wade into a Dark Reading article … and say, ‘OK, this one matters. This one doesn’t.’”
But he cautioned that accuracy and timeliness are critical, as early reports don’t often have the complete picture when it comes to cybersecurity incidents.
“Does the score change once the facts change?” he asked. “Accurate information is going to be important.”
Peterson acknowledged the limitations. “This is not going to be perfect,” he said. “If you want to do it quickly … it’s not going to have the rigor of [a physical measurement].”
The score, he reiterated, is not intended to guide tactical response decisions. “It was geared at my mom,” he said. “’Oh Mom, don’t worry. It’s a 0.0.’”
Learning from disaster science
Munish Walther-Puri, head of critical digital infrastructure for the TPO Group, inspired the concept in an S4 talk from a previous year. He framed the effort as part of a broader attempt to “bring disaster science to cybersecurity.” Looking to models like the Richter scale, he noted that today’s standardized measurements emerged only after years of trial and error.
“The most important thing is that it went through iterations,” he said. “My goal was to try and microwave that. How do we get to something more rigorous, more reliable?”
He also stressed that measurement forces clarity. “How do we get better at measuring it, not should we measure it?”
Impact score independence?
Panelists debated if government agencies should own such a metric, but Hanson suggested independence might be an advantage. A group without federal limitations can make a judgment, he said, without navigating classified information or internal equities. Peterson asked the group to use the new site when the next cyber incident occurs, while acknowledging that the framework will evolve. The scoring platform is live at impact.icssecurityadvisory.com, where vetted OT professionals can log in and begin rating incidents based on severity, reach and duration.
If adopted, the impact score could mark an early step toward bringing greater clarity to OT cyber incidents.
Gary Cohen is senior editor, Control Engineering, gcohen@wtwhmedia.com.
TOTAL CONTROL
THE iCUBE CONTROL™ AUTOMATION PLATFORM
Yaskawa’s iCube Control™ is your open automation machine control solution, designed for precision, flexibility, and certainty in every operation.
iCube Engineer
Program logic, motion, safety and robotics with IEC 61131-3 or other languages.
Sigma-X Servo Family
Designed for ultra-fast response, predictive diagnostics, network safety, and smooth performance, with 1, 2, and 3-axis SERVOPACKS.
iC9200 Machine Controller
The Heart of iCube Control
• EtherCAT machine control and safety master.
• Powered by the Triton multi-core processor.
• Designed by Yaskawa for demanding multi-axis machine control applications.
SLIO
Expand and customize your system with flexible, high-speed local and distributed I/O.
HMI Designer
Design intuitive screens for our Yaskawa HMIs with drag-and-drop functionality and advanced visualization tools.
Yaskawa America, Inc. 1-800-YASKAWA
Email: info@yaskawa.com | yaskawa.com
Explanations:
How to secure OT for cyber-resiliency
Operational technology (OT) cybersecurity issues include the added layers needed for modern production systems and increased complexity of controls, according to Erik Anderson, OT specialist, systems engineering, Fortinet, at the 2026 ARC Leadership Forum by ARC Advisory Group, Feb. 9-12, Orlando, Florida. Anderson said OT needs to reduce risk and increase resiliency. Most controls lack security by design. Air gapping (the physical separation of systems from outside connections) as a means of protection is going away, and the attack surface of increasingly connected automation and control systems is expanding. Remote access requirements need a zero-trust approach. Digital transformation is turning analog into digital. Asset owners increasingly rely on original equipment manufacturers (OEMs) and system integrators to help, Anderson said.
Carlos-Raul Sanchez, senior director, operational technology solution engineer specialists at Fortinet, said it’s helpful to know who owns risk in the organization, because it varies. In some facilities a cybersecurity breach could result in loss of life. Interconnectedness and processing power increase speed, and, often, it’s helpful for things to go more slowly. “Faster” creates more challenges because the problem must be remediated in real time, Sanchez said. When a breach happens, not knowing assets and how they touch operations creates delay, greater risks and other challenges. Benefits of creating a layered OT risk-control cybersecurity architecture include:
1. Attack surface reduction
2. Consequence reduction
3. Recovery time reduction.
Mark T. Hoske is editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com. See more at https://www.controleng.com/explanations-how-to-secure-ot-for-cyber-resiliency
Control Engineering Hot Topics: February 2026
February’s most-read articles on Control Engineering reflect a clear focus on turning digital transformation into measurable results. Top articles underscore how smarter, more agile manufacturing is about building the right architecture, data foundation and strategy to support long-term performance.
1. How to use C++, datasets for motion control programming
2. Ctrl+Alt+Mfg Ep. 9: When cyberattacks go physical, with Ian Bramson of Black & Veatch
3. Use software-defined control to get smarter, faster, more agile manufacturing
4. Why thoughtful integration beats quick AI fixes in MES
5. How to leverage cloud analytics to maximize industrial data
6. Research: Hot topics in Control Engineering for 2025
7. Ctrl+Alt+Mfg Ep. 7: Digital twins explained — how virtual plants are transforming manufacturing, with Matt Wise and Cole Switzer, E Tech Group
8. State of Automation 2026: Which technologies give early adopters the edge?
9. Digital transformation in print manufacturing: from manual processes to smart production
10. Ctrl+Alt+Mfg Ep. 8: Inside the 2026 State of Automation Report, with Mark Hoske, Control Engineering
Gary Cohen is senior editor, Control Engineering, gcohen@wtwhmedia.com.
See more at: https://www.controleng.com/control-engineering-hot-topics-february-2026
AUTOMATION EDUCATION
Webcasts
Professional development hour webcasts are archived for a year. www.controleng.com/webcasts
• Dec. 11 webcast “Motors, drives: How to better manage energy with variable speed drives.”
• Automation Fair (Rockwell Automation), Nov. 16-19, Boston www.automationfair.com
New projects for open process automation, software-defined automation
ARC Leadership Forum, 2026:
More Open Process Automation projects are underway. Compliant software-defined automation, open ecosystems and interoperability tests improve flexibility, reusability, reliability, capabilities and ease of use at a lower cost than traditional distributed control systems.
Standards-based, open, interoperable, secure process automation architecture systems are known to be operating or underway at ExxonMobil, Reliance Industries, Shell and Texas A&M, as explained at the 2026 ARC Leadership Forum by ARC Advisory Group, Feb. 9-12, 2026, Orlando, Florida. (Theme of the 30th Annual ARC Industry Leadership Forum event is “How AI Is Driving the Future of Industrial Operations and Supply Chain.”)
Speakers cautioned that software-defined automation (SDA) isn’t sufficient for openness, interoperability or compliance with the Open Process Automation Standard (O-PAS), a “standard of standards” developed by the Open Process Automation Forum (OPAF). Experts involved in this 10-year standards effort recommend specifying OPA compliance in any request for proposal for a process automation system. Some close to the effort suggest automation vendors support the standard or lose market share to those who do.
[NOTE: The most-read article posted on www.controleng.com during 2025 was “New insights: 100-controller ExxonMobil Open Process Automation – March 3, 2025,” discussed in detail at the February 2025 ARC Forum.]
Interest extends beyond oil and gas; OPAF said interest includes food and beverage, metals and mining, petrochemical, pharmaceutical, pulp and paper, utilities and other companies using distributed control system (DCS) and industrial control system (ICS) architectures.
‘Be aware of the risks of half-measure steps towards the OPA vision.’
OPA projects are real, expanding
Whit McConnell, chief automation and process control engineer, ExxonMobil Technical Engineering Co., said OPA progress results from a decade of work. OPA is real, as described in the ExxonMobil project, explained in detail last year, where the chief engineer in charge of the project came to the ARC Industry Forum just weeks after startup.
“He wouldn’t have left after a startup in the old days. Now we’re expanding open ecosystems for less engineering, reusable intellectual property, systems integration. Now we’re deploying a new OPA-based system to a tank farm at the Baton Rouge Anchorage Chemical Terminal (ACT) with no issues,” McConnell said.
Last summer, a project in India began an OPA-based system with 2027 deployment expected, McConnell said. Standards in use include the IEC 61499 control runtime standard and the IEC 61131 controller standard. With IEC 61499, each function block encapsulates instructions and its own event-based state machine, allowing integration of components from different software and hardware suppliers, with libraries of functions and structured communications with other devices and systems. Using an app store analogy, the standard allows portability and reusability (human-machine interface, alarms, historian) for greater options and flexibility. There’s no need for an over-engineered system, McConnell said.
Open systems add efficiencies with structured encapsulation, allowing creation of composite blocks from reusable elements, rather than the typical blob of logic. State-based control and visual representations of logic and data flow are directly transferable to other systems if conformant to the standard, McConnell said.
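The encapsulation idea described above — a function block bundling its logic with an event-driven interface, composable into larger blocks — can be sketched in a few lines. This is a conceptual illustration only, not the IEC 61499 runtime API or any vendor's implementation; the class, event and port names are invented for the sketch.

```python
class FunctionBlock:
    """Conceptual sketch of an IEC 61499-style block: named data
    inputs and outputs, plus handlers fired by named input events.
    (Hypothetical names; not a standard or vendor API.)"""
    def __init__(self):
        self.inputs, self.outputs, self.handlers = {}, {}, {}

    def on(self, event, handler):
        self.handlers[event] = handler  # register event-driven logic

    def fire(self, event):
        # An input event triggers the block's encapsulated logic,
        # which updates the block's outputs.
        self.handlers[event](self)

# A reusable "scale" block: logic travels with its event interface,
# so it could be wired into a composite block or another controller.
scale = FunctionBlock()
scale.on("REQ", lambda fb: fb.outputs.update(
    OUT=fb.inputs["IN"] * fb.inputs["GAIN"]))

scale.inputs.update(IN=3.0, GAIN=2.0)
scale.fire("REQ")
print(scale.outputs["OUT"])  # 6.0
```

The portability claim in the article follows from this shape: because the logic is reachable only through its event and data ports, the block can be redeployed or recombined without rewriting the "blob of logic" around it.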
Containerized designs allow for quick deployment with hardware independence. Logic can be assigned to other controllers as needed during a failure.
Sequence/condition-based control can coexist in controllers. Distributed logic allows development of failure tolerant control strategies, taking minutes instead of weeks or months in the past. Easier integration of control types is possible with easier upgrades expected in the future.
UniversalAutomation.org (UAO) has more than 100 members (including directors from ASRock, ExxonMobil, Intel, Kongsberg Marine, Kyland Group, Novo Nordisk, Schneider Electric, Wood Group and Yokogawa); the non-profit oversees a shared ecosystem of portable, interoperable, plug and produce software showing the screaming success of IEC 61499, McConnell said.
SDA not a substitute for OPA
Don Bartusiak (R. Donald Bartusiak), Ph.D., president, Collaborative Systems Integration Inc. (CSI), provided Control Engineering with notes informing his comments during the session.
“The phrase ‘software defined automation’ started to appear in the marketing messages of the control system vendors about two years ago, around 2024. (For a list of eight vendors, with photos and tables, see this article online.)
“End Users should ask three questions to a vendor that offers SDA:
1. Will the vendor’s SDA system be interoperable with another vendor’s SDA system?
2. Will I be able to port my applications from the vendor’s SDA system to another vendor’s SDA system without rewriting?
3. Will the SDA vendor’s cost savings from the use of general-purpose hardware be passed on to me to lower initial and total cost of ownership?
“Automation professionals should understand the intent of ‘software defined automation’ marketing,” Bartusiak said, “and be aware of the risks of half-measure steps towards the OPA vision. Let’s learn from history in our own industry and from adjacent industry experience and avoid re-making mistakes of the past.”
Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
KEYWORDS: Open Process Automation, standards-based interoperability, IEC 61499
LEARNING OBJECTIVES
• Process automation systems that are standards-based, open, interoperable and secure are known to be operating or underway at ExxonMobil, Reliance Industries, Shell and Texas A&M and are applicable to other process industries, as explained at the 2026 ARC Leadership Forum by ARC Advisory Group.
• Software-defined automation (SDA) isn’t sufficient for openness, interoperability or compliance with the O-PAS Standard, a “standard of standards” developed by the Open Process Automation Forum (OPAF).
• Experts involved in this 10-year standards effort recommend specifying OPA compliance in any request for proposal for a process automation system.
CONSIDER THIS
In your next process control request for proposal, are you specifying OPA compliance?
ONLINE
From Control Engineering, see more OPA coverage:
https://www.controleng.com/new-projects-for-open-process-automation-software-defined-automation
https://www.controleng.com/new-insights-100-controller-exxonmobil-open-process-automation/
https://www.controleng.com/new-cost-analysis-open-process-automation-saves-52-versus-dcs/
For more on IEC 61499, see: https://universalautomation.org/
For more on OPA, see: https://www.opengroup.org/forum/open-process-automation-forum
More from ARC Advisory Group: https://www.arcweb.com/events/arc-industry-leadership-forum-orlando
Content Specialists/Editorial
Mark T. Hoske, editor-in-chief 847-830-3215, MHoske@WTWHMedia.com
Gary Cohen, senior editor GCohen@WTWHMedia.com
Sheri Kasprzak, executive editor, engineering, automation and controls, SKasprzak@WTWHMedia.com
Stephanie Neil, vice president, editorial director engineering, automation and control, 508-344-0620
SNeil@WTWHMedia.com
Jill Lowe, webinar manager JLowe@WTWHMedia.com
Amanda Pelliccione, marketing research manager 978-302-3463, APelliccione@WTWHMedia.com
Daniel E. Capano, senior project manager, Gannett Fleming Engineers and Architects, www.gannettfleming.com
Frank Lamb, founder and owner Automation Consulting LLC, www.automationllc.com
Joe Martin, president and founder Martin Control Systems, www.martincsi.com
Rick Pierro, president and co-founder Superior Controls, www.superiorcontrols.com
Eric J. Silverman, PE, PMP, CDT, vice president, senior automation engineer, CDM Smith, www.cdmsmith.com
Mark Voigtmann, partner, automation practice lead Faegre Baker Daniels, www.FaegreBD.com
WTWH Media Contributor Guidelines Overview
Content For Engineers. WTWH Media focuses on engineers sharing with their peers. We welcome content submissions from all interested parties in engineering. We will use those materials online, on our website, in print and in newsletters to keep engineers informed about the products, solutions and industry trends.
* Control Engineering Submissions instructions at https://www.controleng.com/connect/how-to-contribute gives an overview of how to submit press releases, products, images and graphics, bylined feature articles, case studies, white papers and other media.
* Content should focus on helping engineers solve problems. Articles that are commercial in nature or that are critical of other products or organizations will be rejected. (Technology discussions and comparative tables may be accepted if nonpromotional and if contributor corroborates information with sources cited.)
* If the content meets criteria noted in guidelines, expect to see it first on the website. Content for enewsletters comes from content already available on the website. All content for print also will be online. All content that appears in the print magazine will appear as space permits, and we will indicate in print if more content from that article is available online.
* Deadlines for feature articles vary based on where they appear. Print-related content is due at least three months in advance of the publication date. Again, it is best to discuss all feature articles with the content manager prior to submission. Learn more at: https://www.controleng.com/connect/how-to-contribute
Brian LaMothe, Emerson
Today’s industrial operations need software that spans the enterprise
Next-generation control systems give manufacturers the tools to operate more efficiently, standardize more easily and stay ahead in a competitive global marketplace.
Control technology is moving from plant-centric systems into enterprise operations platforms with standards-based automation, simulation pipelines, secure incremental updates and open interface adoption to integrate computational intelligence and deterministic logic. New platforms will address complexities of modern process manufacturing that have increased dramatically in the expanding global marketplace. Success is no longer defined by the performance of a flagship plant. It requires sustained efficiency, stable production and consistent quality across facilities.
The industry has crossed a threshold. Addressing obvious inefficiencies inside individual plants no longer yields transformative gains. Organizations are now realizing that achieving the next level of operational excellence means expanding focus to drive improved enterprise-wide optimization.
The organizations accelerating the fastest toward enterprise-wide optimization are those that treat the next-generation distributed control system (DCS) as an enterprise operations platform (EOP): a software-defined control and intelligence ecosystem that spans sites, exposes trusted real-time data, and coordinates deterministic control with predictive and prescriptive analytics.
The EOP strategy for modernization leverages next-generation, software-defined control to deliver seamless integration, enterprise visibility, near limitless scalability, and improved operational excellence. Because the strategy relies on a comprehensive data fabric, it can be implemented incrementally, helping teams preserve existing automation investments.
Data, intelligence, metrics
This evolution is not just architectural—it delivers actionable decisions in minutes or seconds instead of days. As an example, consider a coastal chemical plant facing a forecasted 48-hour tropical storm disruption. As the EOP ingests weather and logistics data, it simulates expected throughput loss (such as 12% of monthly ethylene output). Before the storm hits, a sister inland facility automatically receives a recommended ramp profile: increase feed rates by 8% within validated safety and energy envelopes, pre-position catalyst, and adjust steam demand. The platform verifies constraints (flare limits, compressor loading, utility contracts, etc.) via digital twin models, then publishes the approved changes as a versioned control update. The result for the enterprise is that the shortfall is cut from a projected 12% to 3%, without manual spreadsheet coordination.

It is no longer enough to ensure that each plant operates at its best in isolation. Competitive advantage now depends on optimizing and standardizing every plant in the enterprise to work efficiently and effectively as a seamless, interconnected and holistic operation.
Chasing enterprise visibility
Organizations have chased enterprise-wide operational improvement for years, but workforce retirements and lean staffing have intensified the urgency. Fewer expert engineers mean companies must capture institutional knowledge, lock in best practices, and manage more assets with less hands-on oversight.
KEYWORDS: Distributed control systems, DCS, enterprise operations platforms, EOP
CONSIDER THIS
Is your DCS preparing your facility and enterprise for the future?
ONLINE
With this article online, see:
• Easier evergreen operations
• Building a foundation for automation of the future
• Tomorrow’s foundations; today’s control technologies
FIGURE 1: A move from manual enterprise standards to automated, data-driven standards governance can lock in controlled, auditable changes. Images courtesy: Emerson
To meet these goals, organizations across every industry have been developing corporate and enterprise standards libraries to provide optimization, reduction of complexity, and consistency of operation. Standards libraries can be an important tool in streamlining operation across the enterprise and more rapidly rolling out best practices as they are discovered. Yet, without strong governance, standards libraries often fail to maintain required consistency across disparate sites.
Manual paper- or spreadsheet-based enforcement is a useful starting point: write the standards, distribute them, and then ask sites to conform. But lean teams quickly hit limitations. Comparing a live configuration between two plants becomes a time-consuming detective exercise, and subtle configuration drift goes unnoticed.
Even when sites adopt the same baseline standards, real-world variability erodes alignment due to local edits to control logic, personnel changes, environmental influences (seasonal temperature or humidity changes), shifts in raw material quality and other factors. These factors aren’t captured well in static documents, leading to uneven performance across sites separated by hundreds or thousands of miles.
What organizations need next is a move from static documentation to automated, data-driven standards governance. This approach will provide continuous visibility into deviations, contextual comparison, and a living repository that reflects the approved baseline and controlled, auditable changes over time.
Enterprise engineering tools
Enterprise engineering software is a foundational element of the emerging EOP. It replaces labor-intensive, manual standards management with automated, data-driven governance of engineering standards across the enterprise. It visually surfaces configuration differences in the context of the approved control strategy, instead of forcing staff to hunt through spreadsheets or potentially outdated documents (Figure 1). These tools transmit live and historical system configurations to a secure cloud environment, giving authorized users remote access from any location. Role-based access control presents each user with only the systems and configuration elements in their span of responsibility, reducing noise and tightening security posture.
Staff can log into the cloud application and view the most up-to-date DCS configuration from any system and any site they have permission to access.
Exception-based views highlight where a configuration deviates from corporate standards—turning “find the drift” into a targeted review instead of a manual audit. Versioned historical data and version management allow engineers to correlate specific adjustments with shifts in process performance, quality and reliability indicators (Figure 2).
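The exception-based comparison described here can be sketched in a few lines of code. This is a minimal illustration of drift detection against an approved baseline, not any vendor’s implementation; the tag names and parameter values below are hypothetical.

```python
# Minimal sketch of exception-based drift detection: compare a site's
# live DCS configuration against the approved corporate baseline and
# report only the deviations. All names and values are illustrative.

def find_drift(baseline: dict, site_config: dict) -> dict:
    """Return {parameter: (approved, actual)} for every deviation."""
    drift = {}
    for param, approved in baseline.items():
        actual = site_config.get(param)
        if actual != approved:
            drift[param] = (approved, actual)
    # Parameters added locally that have no approved baseline entry
    for param in site_config:
        if param not in baseline:
            drift[param] = (None, site_config[param])
    return drift

baseline = {"TIC101.GAIN": 1.2, "TIC101.TI": 150, "TIC101.ALARM_HI": 95}
site_a   = {"TIC101.GAIN": 1.2, "TIC101.TI": 180, "TIC101.ALARM_HI": 95}

print(find_drift(baseline, site_a))   # only TIC101.TI deviates
```

The same diff output, keyed by parameter, is what an exception-based view would render: the approved value next to the actual one, with matching parameters suppressed.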
By capturing lessons learned, approved templates, and annotated changes in a common system, enterprise engineering software provides an evergreen, single source of truth for control and engineering standards across the enterprise. This shrinks the time required to propagate a validated improvement from a pilot plant to the rest of the fleet and reduces the risk of undetected divergence. As part of an EOP, the software provides data consistency, a common data model and definitions, and data availability to personnel, artificial intelligence (AI) and machine learning (ML) applications.
As organizations establish corporate standards across the enterprise, they will need easier ways to identify, test, and implement new control strategies to further increase operational excellence. Digital twin simulation, fed by a unified data fabric spanning intelligent field devices, the edge, and cloud analytics, enables a “prove-before-you-update” workflow. Teams use the current control configuration as the base model in the simulation, inject candidate logic or ML-derived setpoint strategies and compare predicted key performance indicator (KPI) shifts (energy intensity, yield variance, constraint margin) to determine promotable changes.
FIGURE 2: Enterprise engineering software empowers users to log into a cloud application and view the most up-to-date DCS configuration from any system and any site they have permission to access.
With enterprise engineering software, successful scenarios will advance to a gated deployment pipeline: first shadow mode, then canary release, and then fleet rollout. The outcome is an enterprise operations environment where teams can run operations while testing and watching simulated scenarios in parallel, and then seamlessly transition new control logic to real-time control, driving the innovation and flexibility necessary to capture competitive advantage.
Brian LaMothe is vice president of applied research and emerging technologies for Emerson’s Process Systems and Solutions business. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
Insights
Next-generation DCS insights
• Today’s distributed control systems are providing more data, wider intelligence and better metrics for greater enterprise visibility.
• Enterprise engineering tools and simulation increase success.
• Today’s control technologies can build a foundation for automation of the future.
Ed Bullerdiek, process control engineer, retired
PID spotlight, part 27: Calculating PID tuning constants
Here are tuning calculations for fast, moderate and slow tuning of PID controllers for all lag/deadtime ratios.
Properly blending proportional-integral-derivative (PID) tuning constants to get the best possible control has been a theme of this series since PID spotlight part 3. The succeeding articles covered the mechanics of PID controller tuning, but we didn’t get back to how to find the best mix of controller gain, integral and derivative until PID spotlight part 26. Figure 1 recaps this for a 5:1 lag/deadtime ratio self-limiting process.
In Figure 1 the double blue lines set a boundary on controller gain and integral tuning constants that will likely work well for this process. The windows tell us where specific types of controller performance will be found, although the best blend is the one that gives you the best performance for your specific situation, whether that is inside one of the windows or not. The red labels tell you where you are if you are using heuristic methods to tune this controller. PID spotlight parts 9 and 10 cover pattern recognition and what to do when you identify that the controller’s (for example) integral is set too slow.
FIGURE 1: Self-limiting process PI controller tuning map. Lag/deadtime ratio = 5:1. Kp = 1.0, T1 = 150 seconds, Dt = 30 seconds. Figures, tables courtesy: Ed Bullerdiek, retired control engineer
How can I find the PID controller performance windows?
In PID spotlight part 26 we learned that:
• The locations of the performance windows could change relative to each other depending on the lag/deadtime ratio of the process.
• The disturbance rejection and critically damped windows are not available for deadtime dominant processes.
• Some common published loop tuning methods do not work well at all lag/deadtime ratios.
This points out the need for estimation methods for each of the performance windows, along with their ranges of applicability. The following calculations were developed using heuristic methods to find optimal tuning for each of the performance windows across lag/deadtime ratios from 100:1 down to 1:100, followed by curve fitting the results. The caveat is that these calculations reflect my personal preferences, which is true for all the authors of the 400+ tuning methods (and yes, of course I think mine are better, but I encourage you to find what works best for you).
Finding the disturbance rejection tuning window
The calculations in Table 1 can be used to calculate the approximate midpoint of the PI and PID disturbance rejection windows from open loop step test data. The PI disturbance rejection tuning calculations will work for lag/deadtime (L/D) ratios from 100:1 down to 1:4. The PID disturbance rejection calculation is good for L/D ratios from 100:1 down to 1:1.
Derivative cannot be used on a true first order plus deadtime (FO+Dt) process because derivative behaves badly on sudden changes in direction. Very high lag/deadtime ratio processes may be effectively FO+Dt, so you may not be able to use PID disturbance rejection tuning. Fortunately, at very high L/D ratios disturbance rejection is very good with PI-only tuning. If you are really trying to squeeze out that last bit of performance, derivative can be set as high as one-tenth of the integral constant (Td = Ti/10) at L/D ratios from 100:1 down to 30:1. For L/D ratios between 4:1 and 1:1, setting derivative to one-fourth of the integral constant (Td = Ti/4) usually works well, and between 30:1 and 4:1 a linear interpolation, Td = Ti/(3.08 + 0.23*T1/Dt), provides a good starting point. Regardless of L/D ratio, if the process is close to a true FO+Dt process, you may have to use heuristics to reduce derivative and possibly controller gain and integral.
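These derivative-selection rules can be collected into a small helper. This is only a sketch of the starting-point guidance above, not a substitute for the heuristic trimming the article recommends:

```python
# Sketch of the derivative-selection rules for PID disturbance
# rejection tuning of a self-limiting process.
# ratio = T1/Dt (lag/deadtime); Ti is the integral time already chosen.

def derivative_time(Ti: float, ratio: float) -> float:
    """Starting-point derivative time Td by lag/deadtime ratio."""
    if ratio >= 30.0:
        return Ti / 10.0                    # Td = Ti/10 for 100:1 down to 30:1
    if ratio >= 4.0:
        return Ti / (3.08 + 0.23 * ratio)   # interpolation between 30:1 and 4:1
    if ratio >= 1.0:
        return Ti / 4.0                     # Td = Ti/4 for 4:1 down to 1:1
    raise ValueError("PID disturbance rejection tuning is not advised below 1:1")
```

Note that the interpolation meets both endpoints: at a 30:1 ratio it gives Ti/(3.08 + 6.9) ≈ Ti/10, and at 4:1 it gives Ti/(3.08 + 0.92) = Ti/4.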
The general rule for tuning applies to these and the tuning calculations that follow: The process you encounter will likely not be first order plus deadtime. It may actually have complex dynamics that look kind of like first order. You may be trying to fit a sort of round peg into a sort of square hole and find that you have to use heuristics to get the performance you are looking for.
Finding the critically damped tuning window
Table 2 calculates controller gain and integral constants that will provide slightly aggressive critically damped tuning. In the interest of getting consistent time-to-setpoint measurements across all lag/deadtime ratios, I aimed for 5% overshoot at every ratio tuned (a 10% setpoint change results in the process variable overshooting the new setpoint by 0.5%). This will also result in the controller output overshooting its final value by 5% when a disturbance occurs.
Should you desire less overshoot, increase (slow down) the integral time. For high L/D ratios (when deadtime doesn’t affect integral), setting integral equal to the lag time (Ti = T1) will result in no overshoot. Setting Ti = 0.71*T1 results in 1% overshoot, Ti = 0.32*T1 results in 5% overshoot, and Ti = 0.186*T1 results in 10% overshoot. Once deadtime becomes prominent these relationships break down, but they work directionally if you need to make adjustments. These calculations are good for L/D ratios from 100:1 down to 1:5; however, at an L/D ratio below 1:1 you would be better served using the minimum controller output movement tuning calculations below.
TABLE 1: Disturbance rejection PID tuning constant calculations for proportional-integral (PI) and proportional-integral-derivative (PID) controllers.
TABLE 2: Critically damped PID tuning constant calculations for proportional-integral (PI) and proportional-integral-derivative (PID) controllers.
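The high-L/D overshoot relationships above amount to a small lookup table: pick the integral time from the overshoot you can tolerate. A sketch, valid only while deadtime does not dominate:

```python
# Critically damped integral time from tolerable overshoot, for high
# lag/deadtime ratios. The multipliers are the article's: Ti = T1 for
# no overshoot, 0.71*T1 for 1%, 0.32*T1 for 5%, 0.186*T1 for 10%.

TI_MULTIPLIER = {0: 1.0, 1: 0.71, 5: 0.32, 10: 0.186}  # overshoot % -> Ti/T1

def critically_damped_ti(T1: float, overshoot_pct: int = 5) -> float:
    """Integral time for a target setpoint overshoot (high L/D only)."""
    return TI_MULTIPLIER[overshoot_pct] * T1

# Example with the article's lag of T1 = 150 seconds:
print(critically_damped_ti(150, 0))   # 150.0 seconds -> no overshoot
print(critically_damped_ti(150, 5))   # 48.0 seconds  -> ~5% overshoot
```

Once deadtime becomes prominent these numbers are only directional, as noted above.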
Finding the minimum controller output movement tuning window
Table 3 calculates the controller gain and integral that will result in minimum controller output movement when the controller setpoint is changed. These calculations are valid for the full lag/deadtime ratio from 100:1 down to 1:100. Below an L/D ratio of 1:4 these are the only calculations that will provide good control (with the standard disclaimer that you may still need to do some trimming).
In reality, you will never use this calculation for any L/D ratio greater than 2:1. You can set the controller gain equal to the baseline controller gain (K = Kbase) and integral equal to the lag time (Ti = T1) with very little impact on performance. (Why overthink it?)
Estimating ultimate gain (Ku) and natural period (Pn)
In the event you want to use a closed loop tuning method but only have open loop step test data, you can use the calculations in Table 4 to estimate controller ultimate gain (Ku) and natural period (Pn). One of the interesting things that emerged from this exercise was finding that the natural period is not always four times the deadtime. This is only true at high lag/deadtime ratios. Natural period is two times deadtime for deadtime-dominant processes and about three times deadtime at an L/D ratio of 1:1.
TABLE 3: Minimum controller output movement PID tuning constant calculations for proportional-integral (PI) and proportional-integral-derivative (PID) controllers.
TABLE 4: Estimating ultimate controller gain (Ku) and natural period (Pn) from open loop step test data.
ANSWERS
• Know how to calculate tuning constants for PID and PI disturbance rejection, critically damped and minimum output movement tuning from open loop step test data.
• Know how to estimate controller ultimate gain (Ku) and natural period (Pn) from open loop step test data for use in closed loop tuning methods.
• Understand the ranges of applicability for these tuning calculations.
• Understand how lower lag/deadtime ratios compress the controller gain and integral tuning windows.
CONSIDER THIS
After you determine the performance requirement for your PID controller, the appropriate tuning constant calculations can get you started in the right direction. However, these are just guidelines, not final answers.
ONLINE
Link to PID spotlights, parts 1-26, with this article online, starting with “Three reasons to tune control loops: Safety, profit, energy efficiency.”
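The natural-period observations above can be turned into a rough piecewise estimate. The breakpoints (about 4*Dt when lag-dominant, about 3*Dt at 1:1, and 2*Dt when deadtime-dominant) come from the text; the log-linear interpolation between them is this sketch’s own assumption, not a published correlation.

```python
# Rough natural-period estimate from deadtime Dt and lag/deadtime ratio.
# Breakpoints follow the article; interpolation between them is assumed.
import math

def natural_period(Dt: float, ratio: float) -> float:
    """Estimate Pn: ~4*Dt lag-dominant, ~3*Dt at 1:1, ~2*Dt deadtime-dominant."""
    if ratio >= 10.0:
        return 4.0 * Dt
    if ratio <= 1.0 / 30.0:
        return 2.0 * Dt
    if ratio >= 1.0:
        # assumed log-linear blend: 0 at a 1:1 ratio, 1 at 10:1
        frac = math.log10(ratio)
        return (3.0 + frac) * Dt
    # assumed log-linear blend: 0 at a 1:1 ratio, 1 at 1:30
    frac = math.log10(1.0 / ratio) / math.log10(30.0)
    return (3.0 - frac) * Dt

print(natural_period(30.0, 10.0))   # 120.0 -> 4 * Dt at high L/D ratios
```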
Charting the data
Insights
Insights about calculating tuning constants
• There are multiple tuning calculations depending on the performance you need for your PID controller. Each has its own range of applicability.
• These calculations give you a starting point for finding the best tuning for your PID controller. You may set the tuning anywhere inside the general window of reasonable tuning depending on process needs.
• These calculations apply to true first-order plus deadtime (FO+Dt) self-limiting processes. Your process will likely not be a true FO+Dt process, thus requiring you to customize the tuning for best performance.
Following are charts showing how controller gain and integral change for the controller performance windows across the full range of expected lag/deadtime ratios.
Controller gain multiplier:
In figures 2 and 3 the vertical scale is the controller gain multiplier (Kx), the value you must multiply the baseline controller gain (Kbase) by to get the final controller gain (K).
K = Kx * Kbase
Figures 2 and 3 include plots of the ultimate controller gain (Ku), PID and PI disturbance rejection gain (0.60 * Ku and 0.45 * Ku, respectively), critically damped controller gain (0.25 * Ku) and minimum control output movement gain (always less than 1.0 * Kbase).
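Because the window gains are fixed fractions of Ku, computing them is a one-line lookup once Ku is known. A minimal sketch:

```python
# The controller gains plotted in Figures 2 and 3, as fractions of the
# ultimate controller gain Ku. Minimum OP movement gain is NOT a Ku
# fraction (it is always below 1.0 * Kbase), so it is excluded here.

KU_FRACTION = {
    "pid_disturbance_rejection": 0.60,
    "pi_disturbance_rejection": 0.45,
    "critically_damped": 0.25,
}

def controller_gain(Ku: float, window: str) -> float:
    """Controller gain for a performance window, as a fraction of Ku."""
    return KU_FRACTION[window] * Ku

print(controller_gain(2.0, "critically_damped"))  # 0.5
```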
Figure 2 is the fully expanded version of Figure 1 from PID spotlight part 6, which is where we first looked at the relationship among lag/deadtime ratio, controller tuning and controller performance. The quick summary of that article is when the lag/deadtime ratio is high, tuning is easy and disturbance rejection very effective. As the ratio drops, controller tuning becomes more difficult, and performance degrades to the point where disturbances are fully expressed and persistent.
We can see from Figure 2 why tuning in the lag dominant area is easy. At a 100:1 lag/deadtime ratio you can set the controller gain up to 150 times the baseline controller gain, and while going above 67 is probably not a great idea, that still leaves you a lot of room to work. Even at 10:1 there is ample room to work with controller gain.
However, as we get to lower lag/deadtime ratios ultimate gain drops, which constrains the controller gain to the point where we need to take a closer look at what’s going on (Figure 3).
Figure 3 is a close-up of the region where the lag/deadtime ratio transitions from lag dominant to deadtime dominant. The important transition points are:
• Lag/deadtime ratio < 1:1:
  • Do not use PID disturbance rejection tuning.
  • Substitute minimum OP movement tuning for critically damped tuning.
• Lag/deadtime ratio < 1:4:
  • Do not use PI disturbance rejection tuning.
Below a 1:4 lag/deadtime ratio only minimum OP movement tuning will work effectively. The other tuning methods become oscillatory and ineffective.
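These transition rules reduce to a simple selection helper: given the lag/deadtime ratio, list which tuning windows remain usable. A sketch of the rules as stated above:

```python
# Which tuning windows still work at a given lag/deadtime ratio,
# per the transition points above: PID disturbance rejection and
# critically damped down to 1:1, PI disturbance rejection down to 1:4,
# minimum OP movement over the full range.

def usable_windows(ratio: float) -> list:
    """Tuning windows usable at lag/deadtime ratio = T1/Dt."""
    windows = ["minimum_op_movement"]          # valid from 100:1 down to 1:100
    if ratio >= 0.25:                          # PI disturbance rejection to 1:4
        windows.append("pi_disturbance_rejection")
    if ratio >= 1.0:                           # below 1:1, substitute minimum
        windows.append("pid_disturbance_rejection")
        windows.append("critically_damped")    # OP movement for these two
    return windows

print(usable_windows(5.0))   # all four windows
print(usable_windows(0.1))   # only minimum OP movement
```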
Let’s take a moment to appreciate the work of Ziegler and Nichols on controller gain. I started heuristic tuning for PID and PI disturbance rejection tuning from Z-N’s correlation to ultimate controller gain (Ku) and never drifted far from it, regardless of lag/deadtime ratio. Critically damped tuning controller gain settled in at 0.25 * Ku for all lag/deadtime ratios as well. Ultimate controller gain is very easy to estimate for all lag/deadtime ratios, so the individual gain calculations were no more difficult than multiplying 0.6, 0.45 and 0.25 through the ultimate controller gain calculation.
Obviously, the minimum controller output movement correlation is very different, as the performance goal for this tuning is very different. And the integral correlations weren’t as easy.
Controller integral multiplier:
The vertical scale in figures 4 and 5 is the controller integral multiplier (TiX), the value you must multiply the process deadtime (Dt) by to get the final integral (Ti).
Ti = TiX * Dt
In contrast to the controller gain multiplier plots, on these charts bigger is slower. There is no equivalent to ultimate controller gain (Ku): no matter how fast you set integral, the controller can always be made stable if you turn the controller gain down far enough. The process lag is added as a reference, as integral is rarely much bigger than the process lag.
Figures 4 and 5 include plots of:
• Lag: This is the process lag.
• PID dist rej: The PID disturbance rejection integral constant multiplier.
• PI dist rej: The PI disturbance rejection integral constant multiplier.
• Crit damp: The critically damped (setpoint following) integral constant multiplier.
• Min OP move: The minimum controller output movement integral constant multiplier.
• Nat per (Pn): The natural period (Pn) multiplier.
The PID and PI disturbance rejection, critically damped and natural period curves are all hyperbolic curves requiring complex curve fits. The logarithmic plot obscures this by bending them into what appears to be an S-shape. The minimum OP move curve is simple, as it lies on top of the process lag until deadtime becomes significant.
In Figure 4 we can see how disturbance rejection, critically damped and minimum controller output movement tuning diverge at higher lag/deadtime ratios. The integral constant for disturbance rejection tuning remains close to the process deadtime regardless of the lag/deadtime ratio. This tells us that any tuning method that sets integral equal to the process lag will perform poorly at disturbance rejection at high L/D ratios.
FIGURE 2: Map of baseline controller gain (Kbase) multipliers for disturbance rejection, critically damped and minimum OP movement tuning based on lag/deadtime ratio for self-limiting processes.
FIGURE 3: Map of baseline controller gain (Kbase) multipliers for PID and PI disturbance rejection, critically damped and minimum OP movement tuning based on lag/deadtime ratio for self-limiting processes.
Critically damped tuning naturally falls between the deadtime and process lag at high L/D ratios. As stated above, the choice of integral constant can be moved up (slower) or down (faster) depending on how aggressive the tuning needs to be.
FIGURE 4: Map of integral by deadtime (Dt) multipliers for disturbance rejection, critically damped and minimum OP movement tuning based on lag/deadtime ratio for self-limiting processes.
FIGURE 5: Map of integral by deadtime (Dt) multipliers for disturbance rejection, critically damped and minimum OP movement tuning based on lag/deadtime ratio for self-limiting processes (close-up of the lower integral multiplier range).
Integral for minimum OP movement tuning matches the process lag at high L/D ratios, as this is the setting that results in no overshoot or undershoot on controller output movement.
Figure 5 is a close-up of the lower integral multiplier range, so we can get a closer look at the impact of increasing deadtime on controller tuning. Several things stand out:
• The natural period (Pn) drops from slightly over 4.0 times deadtime at a 10:1 lag/deadtime ratio to 2.0 times deadtime at a 1:30 L/D ratio. Various publications have stated that the natural period can be estimated at either three or four times deadtime, which raises the question: Which is true? The true answer is, “It depends.”
• The PID and PI disturbance rejection integral lines cross over the critically damped and minimum controller output integral lines between L/D ratios of 1:1 and 2:1. Presumably the higher controller gain constrains how much integral can be applied.
• The critically damped integral line crosses the process lag at about a 1.05 L/D ratio. At this point the integral tuning is now slower than the process lag.
• The minimum controller output movement integral line diverges from the process lag as the deadtime increases.
All of these expose the increasing impact deadtime is having on controller tuning and performance. Once controller gain has dropped below the baseline controller gain, and integral is slower than the process lag, we have transitioned to a point where the PID controller can no longer respond to disturbances well or execute setpoint changes crisply. All of this occurs around a lag/deadtime ratio of 1:1.
Ed Bullerdiek is a retired control engineer with 37 years of process control experience in petroleum refining and oil production. Send comments and questions to freerangecontrol@ameritech.net. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
APC 2.0 part 4: APC optimization
APC 2.0 optimization fills many gaps when considering the process optimization conundrum.
In conventional model-based multivariable control (MPC), optimization is a complicated endeavor. A steady-state optimizer is deployed in the control layer, which is problematic in itself, because control networks are not really meant for applications like steady-state optimizers (control networks, in principle, are for concise deterministic automation programs). Moreover, it then becomes necessary to reconcile this unit-level optimization solution with the site-wide optimization solution that takes place on the business side.
This can lead to frequent ad hoc meetings at the control console with operators, process engineers, and control engineers to figure out how to adjust the MPC unit optimizer to get the best result: the one that best aligns with the site-wide solution. Thus, the operating team is adjusting targets, limits, gains, etc., to coerce the desired unit optimizer solution. This stands the whole idea on its head. Site-wide optimization solutions typically change daily and weekly, so these ad hoc meetings can become a regular nuisance. Industry could have seen this coming because, at the unit level, there is vastly insufficient data to derive a unit optimization solution that will match the site-wide solution. All this begs the question: why are there two optimizers in the first place?
In APC 2.0, the site-wide optimization solution is promoted and the control layer optimization is sidelined or eliminated. In this paradigm, the APC multivariable controller implements the optimization solution, even if it doesn’t solve it in the first place. Updated targets and limits from the business side may be communicated to the control side via the network, but common industry practice is to pass most updates via the chain of command (think morning meeting) so that everyone is informed and prepared for changes that can then be implemented together in a timely, coordinated manner. Stealth changes, that is, optimization results that get implemented automatically and asynchronously, have come to be frowned upon. That’s basically another reason to skip the control layer optimizer and follow the business layer optimizer.
Implementing versus solving optimization
The APC 2.0 multivariable controller implements, but does not itself solve, the optimization solution. What does this mean? Obviously, based on the discussion so far, optimization is solved in the business layer, usually in a production planning and optimization unit (PP&O) of some sort. A new updated optimization solution means certain manipulated variable (MV) and controlled variable (CV) limits and targets may change, and certain MVs or CVs may be switched in or out of service. That’s the essence of the handshaking between business side optimization and control side multivariable control – some targets, limits, and maybe some gains.
Next, implementing the optimization solution means the APC multivariable controller takes appropriate action based on all limits and targets, including when they change. Moreover, it must be able to utilize all remaining manipulated variable (MV) availability to pursue the optimization targets (one per MV, experienced users will recall) after first satisfying all constraint limits. (In multivariable control, keeping the process within the constraint limit window is the first priority; optimization is the second priority.) That’s APC multivariable control in a nutshell.
FIGURE: Site-wide optimization takes place in the business layer, usually in some sort of production planning & optimization unit (PP&O). PP&O outputs fan out across the site asynchronously, and it is up to each unit to implement their part of the overall plan. Courtesy: Lin & Associates Inc.
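The constraint-first priority can be made concrete with a toy single-MV update. This is an illustration of the priority ordering only, not any vendor’s control algorithm; the step sizes and variable values are invented for the example.

```python
# Toy illustration of constraint-first multivariable control priority:
# relieve a violated CV limit before spending MV availability on the
# optimization target, and never leave the MV limit window.

def mv_move(mv, target, lo, hi, gain, cv, cv_hi):
    """One MV update: honor the CV high limit first, then pursue the target."""
    if cv > cv_hi:
        # Priority 1: the CV is beyond its limit; move to relieve it.
        step = -gain * (cv - cv_hi)
    else:
        # Priority 2: use remaining availability to approach the target.
        step = 0.25 * (target - mv)          # illustrative fractional approach
    return min(max(mv + step, lo), hi)       # clamp to the MV limit window

# Constraint active: CV above its limit forces the MV down.
print(mv_move(mv=50, target=80, lo=0, hi=100, gain=2.0, cv=105, cv_hi=100))  # 40.0
# Constraint clear: the MV steps a quarter of the way toward its target.
print(mv_move(mv=50, target=80, lo=0, hi=100, gain=2.0, cv=90, cv_hi=100))   # 57.5
```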
APC 2.0 optimization paradigm
As shown in the Figure, optimization can be drawn as two layers or circles (not really a pyramid): PP&O in the business layer, where optimization takes place (or is “solved”), and the operations layer, where each unit implements its part.
In the business layer, site-wide optimization is a complex task involving a variety of inputs, tools, timeframes, and communications. Inputs include market opportunities, schedule constraints, blending capabilities, tankage inventories, pricing, planned and current outages, etc. By and large, each input has unique needs, customized tools, and its own appropriate time frame, so experienced personnel piece together the final overall solution semi-manually. This heterogeneous optimization process produces an asynchronous stream of outputs, such as production schedules, buy and sell orders, blending plans, and updated APC limits and targets, among other things. A single unified integrated optimization tool remains a stretch goal for industry; maybe artificial intelligence will solve that.
‘The multivariable control problem includes managing all constraint limits and using all MV availability to maximize optimization targets.’
In the operations layer, each operating unit is responsible – counted upon – to implement their part of the PP&O master plan. In the case of the process control layer, APC controllers need to effectively manage the multivariable nature (multiple-inputs-multiple-outputs) of the control problem, in the face of limits and targets that may change anytime, whether due to a new optimization solution from PP&O, or due to an operator input for a variety of reasons, such as to address an ordinary alarm.
Only one optimizer is needed
In APC 2.0, there is only one optimizer: the site-wide optimization that takes place in the business layer. There is no optimizer in the control layer, although the APC multivariable controller remains responsible for implementing appropriate parts of the business-side solution. It still needs to manage the multivariable control problem, which includes managing all constraint limits and utilizing all MV availability to maximize optimization targets.
Allan Kern, P.E., is principal APC Consultant with Lin & Associates Inc., Phoenix, Arizona, USA. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
VFD component and technology advances
Variable frequency drive (VFD) technologies have progressed to advanced insulated gate bipolar transistor (IGBT) topologies that improve efficiency, control and harmonic mitigation to meet modern IEEE 519 standards.
VFDs have undergone significant advancements since their introduction in the late 1950s. Early VFDs relied on silicon-controlled rectifiers (SCRs), which were first introduced in 1957. These devices allowed for basic control of alternating current (AC) motor speed by rectifying incoming AC voltage and switching it through a gate signal. Early SCR-based drives were bulky, expensive, and limited in performance, making them unsuitable for many industrial applications.
Joe Doughney, PE; Levi Ambrose, EIT, CDM Smith
To improve controllability, gate turn-off thyristors (GTOs) were developed in the 1960s. Unlike SCRs, GTOs could be turned off via a gate signal, allowing for more precise control of motor operation. Despite this improvement, GTOs required high gate currents and had slower switching speeds, which limited use in high-performance applications. Early drives often used six-step inverter technology and were classified as either variable-voltage inverters or current-source inverters. They provided basic speed control but introduced significant waveform distortion and required large enclosures due to cooling equipment and component size.
In the mid-1960s, pulse-width modulation (PWM) marked a turning point in VFD design. PWM allowed for smoother motor voltage output by modulating the width of voltage pulses to simulate a sinusoidal waveform. This improved motor performance and reduced torque ripple. Early PWM drives used bipolar transistors and were limited by switching speed and efficiency.
The development of insulated gate bipolar transistors (IGBTs) in the early 1980s revolutionized VFDs. IGBTs combined the high-current capability of bipolar transistors with the easy gate control of metal-oxide-semiconductor field-effect transistors (MOSFETs), enabling high-speed switching and compact drive designs. By the late 1980s, IGBT-based PWM drives became the industry standard, offering improved efficiency, reduced size and lower cost. These drives could operate at ultrasonic switching frequencies (10 to 15 kilohertz) for smoother motor operation and quieter performance. Widespread use of IGBT technology enabled the integration of advanced control algorithms, such as vector control and direct torque control, enhancing capabilities.
In the early 2000s, matrix VFDs emerged as a new topology that eliminated the traditional direct current (DC) bus. Instead of converting AC to DC and back to AC, matrix drives perform direct AC-to-AC conversion using nine bidirectional IGBTs (18 IGBTs total) arranged in a matrix configuration. This design allows any input phase to connect to any output phase at any time, resulting in a naturally sinusoidal input waveform. Matrix VFDs also support regenerative braking and offer high efficiency, as only six IGBTs are actively switching at a time and no additional components, like braking resistors or inductor-capacitor-inductor (LCL) filters, are required. While a limited number of manufacturers currently produce matrix drives, they represent a promising advancement in VFD technology, particularly for applications requiring compact design and low harmonic impact.
FIGURE 1: A 6-pulse VFD with an internal line reactor. Line reactors are better for lower-horsepower (hp) motors (below 30 to 50 hp) or need to be paired with other harmonic mitigation techniques to further reduce distortion levels. Images courtesy: CDM Smith
FIGURE 2: A phase-shifting transformer is shown for an 18-pulse VFD. The 18-pulse VFD offered superior harmonic mitigation, with marginally higher costs, over the 12-pulse VFD, often meeting IEEE 519 limits by getting the THD down to 5% or lower.
KEYWORDS: Variable frequency drives, VFD designs, VFD features, modern VFDs
CONSIDER THIS
Are your VFDs using up-to-date technologies to deliver maximum benefits?
ONLINE
Control Engineering has more industrial motors and drives knowledge: https://www.controleng.com/motors-drives/
Control Engineering has more mechatronics and motion control coverage: https://www.controleng.com/mechatronics/
CDM Smith also wrote: Fit motor enclosures, protective sensors to the application
Harmonic mitigation advances
VFDs are nonlinear loads and, therefore, create harmonic waveforms and introduce them into the electrical system. Harmonics are unwanted waveforms with a frequency that is a multiple of the fundamental frequency. They cause current and voltage distortion within the electrical system, leading to a non-sinusoidal waveform. Harmonics can lead to breaker nuisance tripping, equipment overheating, premature motor and cable failures, and malfunction of sensitive electronics, among other things. Because of these issues, Institute of Electrical and Electronics Engineers (IEEE) 519, Recommended Practices and Requirements for Harmonic Control in Electric Power Systems, was introduced in 1981 to set limits on harmonic distortion in electrical systems. While this document is a guideline, it has been enforced by electric utility companies and energy codes, essentially forcing manufacturers to implement several techniques to mitigate the harmonics over the years.
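The total harmonic distortion (THD) figures quoted throughout this section follow the standard definition: the RMS sum of the harmonic components divided by the fundamental. A back-of-the-envelope sketch, with purely illustrative amplitudes:

```python
# Total harmonic distortion from the fundamental amplitude and a list
# of harmonic amplitudes: THD = sqrt(sum of squares) / fundamental.
# The amplitudes below are illustrative, not measured data.
import math

def thd_percent(fundamental: float, harmonics: list) -> float:
    """THD (%) of a waveform given per-harmonic amplitudes."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# A 6-pulse rectifier draws mainly the 5th, 7th, 11th and 13th harmonics:
print(round(thd_percent(100.0, [20.0, 14.0, 9.0, 7.0]), 1))  # 26.9
```

The same formula explains why canceling the lower-order harmonics (the largest terms in the sum) cuts THD so effectively in the multi-pulse designs described below.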
Line-side harmonic mitigation techniques
The simplest and most cost-effective form of harmonic mitigation is the line reactor. A line reactor is an inductor placed on the line side of a VFD. They usually come in two forms, 3% and 5% impedance, with higher impedance providing better harmonic mitigation. However, line reactors typically can only reduce high levels of harmonic distortion created by the VFD (roughly 80%) down to 30% to 40%. Therefore, line reactors are better for lower-horsepower (hp) motors (below 30 to 50 hp) or need to be paired with other harmonic mitigation techniques to further reduce distortion (Figure 1).
In the 1980s and 1990s, multi-pulse VFDs gained popularity as the need to reduce harmonic distortion caused by 6-pulse VFDs increased. Multi-pulse VFDs add a phase-shifting transformer to the input of the VFD. Phase-shifting transformers have multiple secondary windings, allowing for connection of more diodes (six per secondary winding) in the rectifier section of the VFD.
Phase-shifting VFDs, two varieties
Two main varieties of phase-shifting VFDs were developed for low-voltage VFDs: 12-pulse and 18-pulse designs. For a 12-pulse VFD, the phase-shifting transformer has two three-phase secondary windings that are phase-shifted 30 degrees from each other, allowing two sets of six-diode bridges to operate in parallel. This setup doubles the number of voltage pulses on the DC bus, providing a smoother DC waveform, which leads to overall better VFD performance and cleaner output for the motor. The phase shifting also helps cancel out lower-order harmonics, namely the fifth and seventh, reducing total harmonic distortion (THD) to 12% to 20%. However, while the 12-pulse VFD improved upon the 6-pulse VFD, it was large and expensive, and it still fell short of the harmonic distortion limits set forth in IEEE 519 (1992 version), leading to its obsolescence in the early 2000s.
The need to further improve upon the harmonic distortion led to the introduction of the 18-pulse VFD. This design expands on the 12-pulse VFD by adding another three-phase secondary winding to the phase-shifting transformer (with a 20-degree phase shift), allowing for three sets of six diodes in the rectifier section. This created an even smoother DC
bus and canceled out more harmonics, namely the 5th, 7th, 11th, and 13th. The 18-pulse VFD offered superior harmonic mitigation, with marginally higher costs, over the 12-pulse VFD, often meeting IEEE 519 limits by getting the THD to 5% or less. These VFDs dominated for larger motors (50 to 100 hp and up) for many years. While the 18-pulse VFD is still used, the advancement of technology has given rise to other low harmonic solutions, such as passive harmonic filters (PHF), active front-end (AFE) VFDs and matrix VFDs (Figure 2).

The PHF is not a VFD but a harmonic filter added to the line side of a 6-pulse VFD to create a complete package. A PHF consists of inductors and capacitors configured in series or parallel. The inductors work by blocking the high-frequency harmonics while the capacitors provide a low-impedance path for the harmonic currents and remove them from the system. PHFs can be tuned for certain harmonic orders, such as the fifth or seventh, or provide broad protection for several harmonic orders. PHFs can typically reduce THD below 5%, usually meeting IEEE 519 guidelines. These filters have created a cost-effective alternative to 18-pulse VFDs or AFE VFDs and can be used for low hp and very large hp (500+) motors (Figure 3).
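The harmonic orders each rectifier topology leaves behind follow the well-known characteristic-harmonic rule h = k x p ± 1, where p is the pulse number. A short sketch (the printed orders follow directly from the rule, and match the 5th/7th and 5th-through-13th cancellations described above):

```python
def characteristic_harmonics(pulse_number: int, count: int = 4) -> list:
    """Characteristic harmonic orders of a p-pulse rectifier: h = k*p +/- 1."""
    orders = []
    k = 1
    while len(orders) < count:
        orders += [k * pulse_number - 1, k * pulse_number + 1]
        k += 1
    return orders[:count]

for pulses in (6, 12, 18):
    print(pulses, characteristic_harmonics(pulses))
# 6-pulse:  orders 5, 7, 11, 13
# 12-pulse: orders 11, 13, 23, 25 (5th and 7th cancelled)
# 18-pulse: orders 17, 19, 35, 37 (5th, 7th, 11th, 13th cancelled)
```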
AFE VFDs take the concept of a 6-pulse VFD but swap out the uncontrolled diode rectifier with six IGBTs, or other controllable transistors. This allows the rectifier section to have PWM switching, allowing the VFD to draw nearly sinusoidal current from the power source, minimizing harmonic distortion.
An AFE VFD integrates an inductor-capacitor-inductor (LCL) filter on the line side of the VFD to further smooth out the waveform. While these VFDs have been around since the late 1990s and early 2000s, recently many manufacturers discontinued 18-pulse VFD lines in favor of AFEs. These VFDs meet IEEE 519 by reducing THD below 5%, often as low as 2% to 3%. Cost often limits use to above 100 hp.
Matrix VFDs use a different design topology compared to the other VFDs mentioned. The use of nine bidirectional IGBTs and elimination of the DC bus allows the input waveform to be naturally sinusoidal, leading to low levels of harmonics (typically 3% to 5% THD). While matrix VFDs are available for motors as low as 5 hp, the harmonic mitigation benefits do not outweigh the cost below 100 hp, as options like a 6-pulse VFD paired with a line reactor or PHF are more cost-effective at lower hps.
Load-side harmonic mitigation
VFDs create voltage issues at the VFD output because of the high switching frequency of the IGBTs. The rapid switching causes a high rate of voltage change over time (dv/dt) and a high rate of current change over time (di/dt). With inductance (L) in cables and motor windings, the voltage delivered to the motor increases as V = L × di/dt.
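A quick numeric sketch of V = L × di/dt, using hypothetical cable values chosen only for illustration:

```python
def inductive_voltage(inductance_h: float, di_amps: float, dt_seconds: float) -> float:
    """V = L * di/dt: voltage developed across an inductance by a current change."""
    return inductance_h * di_amps / dt_seconds

# Hypothetical numbers: 10 A of current change in 0.1 microsecond
# across 1 microhenry of cable inductance.
v = inductive_voltage(1e-6, 10.0, 0.1e-6)
print(f"{v:.0f} V")  # 100 V from a single fast IGBT switching edge
```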
A phenomenon called reflected wave, because of impedance differences in the motor and cables, causes the voltage to reflect back to the VFD, resulting in doubling the output voltage. These high voltage levels can cause arcing across motor windings, leading to insulation damage and shortening the motor lifespan. High voltage levels can do the same thing to cables. Keeping cable lengths 50 to 100 feet or less helps mitigate these issues; however, this is not always possible.
VFD devices to mitigate voltage spikes
Devices can mitigate voltage spikes:
• Load reactors are a simple, cost-effective device (similar to line reactors) that can be placed on the output of a VFD. They slow down the rapid current changes and reduce the voltage spikes. Load reactors are suited for cable lengths less than 300 feet.
• A dv/dt filter combines inductors, capacitors, and resistors to decrease voltage spikes and filter out fast transients from the circuit. They are more expensive than load reactors, but they provide much greater protection, generally allowing motors to be upwards of 1,000 feet away from the VFD.
• Sinewave filters are the most expensive and complex output filter available for VFDs, but they provide the best protection. They transform the output voltage into a clean sinusoidal waveform and provide protection up to 15,000 feet. Because of cost, they should be considered for distances greater than 1,000 feet.
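The cable-length guidance in the bullets above can be condensed into a simple rule-of-thumb selector. This is only a sketch of the thresholds stated in the text; an actual filter selection requires the full application details:

```python
def suggest_output_filter(cable_length_ft: float) -> str:
    """Rule-of-thumb output-filter choice based on the cable-length
    guidance above. Illustrative only; not a substitute for a full
    application review."""
    if cable_length_ft <= 100:
        return "none required (short cable run)"
    if cable_length_ft < 300:
        return "load reactor"
    if cable_length_ft <= 1000:
        return "dv/dt filter"
    return "sinewave filter"

print(suggest_output_filter(250))   # load reactor
print(suggest_output_filter(5000))  # sinewave filter
```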
Joe Doughney, PE, is a CDM Smith electrical engineer, Raleigh, North Carolina; Levi Ambrose, EIT, is a CDM Smith electrical engineer, Fairfax, Virginia. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
FIGURE 3: Passive harmonic filter is shown inside 6-pulse VFD enclosure. These filters have created a cost-effective alternative to 18-pulse VFDs or AFE VFDs and can be used for both low hp and very large hp (500+) motors.
• Understand the evolution of VFD technology
• Identify harmonic distortion issues and mitigation techniques
• Learn different VFD topologies and their applications
ANSWERS
Roman Skorupa, Yaskawa America
Servo drive safety integrated functions: How to maximize fail-safe behavior
Maximize fail-safe behavior so that for any single fault, the system is transitioned to a safe state. When creating the system’s communication and structure, reaction time in safe events can be the difference between a safe stop and machine failure.
FIGURE 1: Safe torque off (STO) function safely stops motion, as this velocity and time plot shows. A good example of an application where STO would be used would be a door or guard with a sensor that shuts off the active drive when it is opened. Products like Yaskawa's ASM-X contain multiple slots, so an application like this can be stacked. All graphics courtesy: Yaskawa America Inc.
Safety integrated functions in servo drives and controls have basic and advanced options. Understanding these technologies and related advancements can improve motion control implementations and reduce risk.
Motion control safety concepts, history
Before the 1990s, most safety precautions were handled by external hardware such as contactors, relays and switches. Emergency stops were used to physically remove power from drives and controllers.
In 1998, the IEC 61508 standard for Functional Safety was released for electrical systems. This provided standardization for safety concepts and laid out the groundwork for more technological advancements in safety.
After some time, servo drive manufacturers began implementing these changes, which led to the development of additional functions, including core functions (STO) in the 2000s and safe stop and speed functions (SS1, SS2, SOS, SLS, SLP, SDI) in the 2010s.
Also at this time, networked safety solutions and programming were being developed and implemented. Fieldbus solutions like Fail Safe over EtherCAT (FSoE), PROFIsafe and CIP Safety were created to allow these safety functions to be easily programmed.
The progress of this technology, and the standards that drive it, has made machines safer to operate, easier to maintain and better at preventing accidents.
Core functions of safe torque off (STO)
Before talking about safety functions, there are a few key acronyms to cover.
HWBB: Hard-wired base block (HWBB) is a state of the servo drive designed to shut off the servo drive current and servomotor movement. Different safety functions use this state in different ways, depending on how the safety parameters are configured.
SRIS: The safety request input signal (SRIS) is the signal that determines when a safety function becomes active or inactive. This signal can be represented in multiple ways, such as a digital signal from a sensor or a signal over a fieldbus such as FSoE, where it can be programmed to activate for the relevant application. When and how it activates depends on how the hardware is set up and on the servo drive parameters.
STO: The most basic of safety functions is safe torque off (STO). This is the simplest function because, in essence, it is two states: On or Off. When STO is configured, when the SRIS signal is active, the servo drive goes into an HWBB state, thus not allowing any movement to the servomotor. STO is also the common error state when configuring safety functions. Figure 1 shows a general velocity and time plot of how this function safely stops motion.
A good example of an application where STO would be used would be a door or guard with a sensor that shuts off the active drive when it is opened. The sensor on the door would set the SRIS signal to the drive, putting the drive into the HWBB state and disallowing motion. Although simple, STO can be used in multiple ways like this and is easy to set up, making it useful.
FIGURE 2: Safe stop functions for Yaskawa's ASM-X include safe stop 1 ramp monitored, safe stop 1 time controlled, safe stop 2 ramp monitored, safe stop 2 time controlled and safe operating stop. The safe operating stop (SOS) function position monitoring can be shown as a plot.
Safe stop functions
FIGURE 3: Monitoring functions of Yaskawa’s ASM-X include safely-limited speed (SLS), safely-limited acceleration (SLA), safe speed range (SSR) and safe speed monitor (SSM). Time-controlled and ramp-monitored options are shown for the safe torque off (STO) function.
Safe stop functions are another type of safety function whose purpose is to bring unsafe motion to a safe condition in a safe way. The main difference between STO and safe stop functions is that STO will immediately disable torque-producing power, while a safe stop function will have some sort of active controlled stop. All safe stop functions are executed when the SRIS signal becomes active.
The safe stop functions include safe stop 1 ramp monitored, safe stop 1 time controlled, safe stop 2 ramp monitored, safe stop 2 time controlled and safe operating stop.
When the safe operating stop (SOS) function is activated, the drive is put into position monitoring mode. This is often done after the servomotor's torque is safely stopped, but the drive still has power available. In SOS, the position is monitored and will shut down if the configured position limits are violated. That is the main difference between safe stop 1 (SS1) and safe stop 2 (SS2). After SS1 stops the motor, the drive goes into STO state, and after SS2 stops the motor, the drive goes into SOS state. Figure 2 shows what the position monitoring looks like as a plot.
The other variants are time-controlled and ramp-monitored. Time-controlled means the servomotor has a configured amount of time to stop and reach either STO or SOS, depending on whether it is in SS1 or SS2. Ramp-monitored means the servomotor's deceleration is monitored against a configured ramp over that stopping time. If the servomotor doesn't decelerate as expected, a violation occurs, the drive is set to HWBB and the power is cut, whether it is in SS1 or SS2. This is shown in Figures 3 and 4.
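The ramp-monitored behavior described above can be sketched as a check of sampled speed against a linear deceleration envelope. This is an illustrative model, not drive firmware; the function name and sample values are hypothetical:

```python
def ramp_violation(samples, v0, decel, dt):
    """Check sampled speeds (rpm) against a linear deceleration envelope
    starting at v0 (rpm) and falling at decel (rpm/s), sampled every dt
    seconds. Returns the index of the first sample above the envelope
    (a violation that would trigger HWBB), or None if the ramp was
    respected. The small tolerance absorbs floating-point rounding."""
    for i, v in enumerate(samples):
        allowed = max(v0 - decel * dt * i, 0.0)
        if v > allowed + 1e-9:
            return i
    return None

# Motor commanded from 1000 rpm at 500 rpm/s, sampled every 0.1 s.
ok = [1000, 950, 900, 850]    # follows the ramp
bad = [1000, 990, 985, 980]   # barely slowing: violation, drive goes to HWBB
print(ramp_violation(ok, 1000, 500, 0.1))   # None
print(ramp_violation(bad, 1000, 500, 0.1))  # 1
```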
There are many ways to configure and implement safe stop functions, and what should be used depends heavily on the application. SS1 works better for applications where a safety event occurs and the servomotor comes to a stop but doesn't need to hold torque. SS2 works better when a stopped servomotor does need to hold torque. For example, SS1 would work better for a conveyor or rolling application where, if something goes wrong, the belts don't get stuck in a locked position, potentially causing more damage or harm. SS2 would work better for a vertical axis or lift application where the stopped holding torque will keep a load from falling due to gravity.
FIGURE 4: Time-controlled and ramp-monitored options are shown for the safe operating stop (SOS) function. Yaskawa product support can include help with motion control safety.
FIGURE 5: Safely-limited speed (SLS) function monitors the servo motor speed and will bring the servo motor to a safe stop if the speed limit is exceeded. When the safety request input signal (SRIS) is active, speed monitoring starts. Speed and timing limits are user selected, where s1 is the speed limit when SRIS is active, until t1 elapses. Then the speed limit is changed to s2, with the limit decreasing linearly until t2 elapses.
FIGURE 6: Safely-limited acceleration (SLA) function monitors the servo motor's acceleration and will bring the motor to a safe stop if that acceleration limit is exceeded. A good application for this function could be a tension-controlled system, where any unwanted acceleration in one motor can be amplified into the other system, creating unwanted motion.
FIGURE 7: Safe speed range (SSR) function monitors the servo motor's speed and will bring the motor to a safe stop if the speed falls out of a user-selected range. The main difference between SSR and SLS is the timing for when speed monitoring starts, and that SSR has both a minimum and a maximum speed at which a limit violation can occur, whereas SLS only flags a violation for a maximum speed. A good application for this function could be a high-speed spindle motor, where spinning the motor too fast or too slow could cause problems with the tooling.
FIGURE 8: Safe speed monitor (SSM) function monitors the servo motor's speed and will control an output signal (SSMOS) based on whether or not a limit violation has occurred. Function applications could include controlling machine guards by using SSMOS as a digital signal based on motor speed and/or using SSMOS as a software signal to trigger a routine to bring the motor to a desired speed.
Monitoring functions check motion metric safety
Monitoring functions are used to continually check that motion metrics are within safe limits. They differ from the safe stop functions in that, when the SRIS signal becomes active, monitoring functions allow the servo motor to operate normally within safe limits. When the limits are exceeded, the servo drive goes into either STO or a safe stop function.
Monitoring functions can include safely-limited speed (SLS), safely-limited acceleration (SLA), safe speed range (SSR) and safe speed monitor (SSM).
SLS monitors the servo motor's actual speed and will bring the servo motor to a safe stop if the speed limit is exceeded. When SRIS is active, speed monitoring starts. The speed and timing limits are user selected, where s1 is the speed limit once SRIS becomes active, until t1 elapses. Then the speed limit is changed to s2, with the limit decreasing linearly until t2 elapses. The plot of this is shown in Figure 5. A good application for this function would be a packaging machine, where if one motor starts going too fast, the entire system can shut down quickly, reducing the risk of the machine tearing itself apart.
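The s1/t1/s2/t2 limit profile described above can be sketched as a function of time since SRIS activation. This interprets the text as holding s1 until t1, then ramping linearly to s2 by t2; the exact profile in a given drive is set by its safety parameters:

```python
def sls_speed_limit(t, s1, s2, t1, t2):
    """SLS speed-limit envelope at time t after SRIS activates:
    s1 until t1, a linear ramp from s1 down to s2 between t1 and t2,
    and s2 thereafter. Interpretation sketch, not vendor firmware."""
    if t < t1:
        return s1
    if t < t2:
        return s1 + (s2 - s1) * (t - t1) / (t2 - t1)
    return s2

# Hypothetical limits: s1 = 2000 rpm until 0.5 s, ramping to s2 = 500 rpm by 2.0 s
print(sls_speed_limit(0.2, 2000, 500, 0.5, 2.0))  # 2000
print(sls_speed_limit(2.5, 2000, 500, 0.5, 2.0))  # 500
```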
The safely-limited acceleration (SLA) function monitors the servo motor's acceleration and will bring the motor to a safe stop if that acceleration limit is exceeded. As with SLS, the acceleration and timing limits are user-selected. Once the SRIS signal becomes active, t1 is the time until acceleration monitoring starts. Once t1 elapses, the acceleration is monitored, and the motor is brought to a safe stop when the servo motor reaches the acceleration limit a1 in any direction. The plot of this is shown in Figure 6. A good application for this function could be a tension-controlled system, where any unwanted acceleration in one motor can be amplified into the other system, creating unwanted motion.
The SSR function monitors the servo motor's speed and will bring the motor to a safe stop if the speed falls out of a user-selected range. The main difference between SSR and SLS is the timing for when speed monitoring starts, and that SSR has both a minimum and a maximum speed at which a limit violation can occur, whereas SLS only flags a violation for a maximum speed. The plot of this is shown in Figure 7. A good application for this function could be a high-speed spindle motor, where spinning the motor too fast or too slow could cause problems with the tooling.
The safe speed monitor (SSM) function monitors the servo motor's speed and will control an output signal (SSMOS) based on whether or not a limit violation has occurred. The speed and timing limits are again user-selected, where speed monitoring starts once t1 elapses. Once the speed limit s1 is reached, the SSMOS signal turns on and only turns off once the motor reaches a lower speed limit s2. The plot of this is shown in Figure 8.
Good applications for this function could include controlling machine guards by using SSMOS as a digital signal based on motor speed and/or using SSMOS as a software signal to trigger a routine to bring the motor to a desired speed.
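The SSMOS on-at-s1, off-at-s2 behavior is classic hysteresis, which can be sketched as follows (illustrative class and limit values, not vendor code):

```python
class SafeSpeedMonitor:
    """Hysteresis sketch of SSM: the SSMOS output turns on when speed
    reaches the upper limit s1 and only turns off again once speed
    falls to the lower limit s2."""
    def __init__(self, s1: float, s2: float):
        assert s2 < s1, "lower limit must be below upper limit"
        self.s1, self.s2 = s1, s2
        self.ssmos = False

    def update(self, speed: float) -> bool:
        if speed >= self.s1:
            self.ssmos = True
        elif speed <= self.s2:
            self.ssmos = False
        return self.ssmos   # between s2 and s1, state is held (hysteresis)

ssm = SafeSpeedMonitor(s1=1500, s2=1200)
print([ssm.update(v) for v in (1000, 1600, 1400, 1100)])
# [False, True, True, False]: stays on between s2 and s1
```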
Networking access to motion safety functions
With these functions, safety applications already seem endless. There is more to correctly setting up a system that includes these safety mechanisms. Many of the signals that were referenced with these safety functions can be directly accessed either over a network in software or digitally to control sensors, actuators, machine guards, etc. Figure 9 shows a simple way of setting this up.
When setting up these signals digitally, there are a few key safety features to consider. The ASM-X and other devices can use redundant signals, meaning there are two digital inputs/outputs for one signal. This lets the safety device detect if the signal is unreliable: if the two inputs/outputs don't agree, the machine can go into a safe state. The signals also carry test pulses to verify that they are not stuck.
These digital signals are often connected to other safety I/O modules, safety relays or safety programmable logic controllers (PLCs). This ensures these important signals are correctly sent from device to device, with every device in the system able to detect when something is wrong and put the machine in a safe state if that occurs.
When setting up these signals over a fieldbus, there are many things to be aware of. The data from devices is sent over a safety version of whichever communication protocol the system uses (FSoE, PROFIsafe, CIP Safety, etc.) and relies on a safety PLC to reliably communicate the information. The safety PLC sends information with cyclic redundancy checks that include checksums to verify the data received matches what was sent. When programming over a safety fieldbus, there are more checks and balances to ensure that what is programmed is valid and safe. Thus, programming safety devices often takes more time, effort and data checking. This is supplemented with each safety device having its own
individual safety ID to confirm that data is going to and from the correct devices. These devices have watchdog time monitoring, where if a device in the system fails, it can put the machine in a safe position.
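The cyclic-redundancy-check idea can be illustrated with an ordinary CRC-32. This is only a sketch of the mechanism: actual safety fieldbuses layer safety-rated CRC polynomials, sequence numbers, safety IDs and watchdog timers on top of this basic integrity check:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption in transit."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = frame_with_crc(b"\x01STO_REQUEST")   # hypothetical safety payload
print(check_frame(frame))                    # True

corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]  # bits flipped in transit
print(check_frame(corrupted))                # False: receiver rejects frame
```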
Summary advice on motion control safety
For whatever application is being used, however it is implemented, these are the important goals to keep in mind. Maximize fail-safe behavior so that for any single fault, the system is transitioned to a safe state. When creating the system's communication and structure, reaction time in safe events can be the difference between a safe stop and machine failure. The time between the detection of a hazardous condition and a safety response must be known and as small as possible. Selecting and configuring the correct safety devices helps with this process. Most importantly, all hardware, firmware and software must be verified, validated and thoroughly documented to ensure all elements of the system are set up correctly. When setting up systems like this, it's important to have contact with the right experts when support is needed.
Roman Skorupa is product support engineer, Yaskawa America Inc. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
FIGURE 9: Many signals referenced with these safety functions can be directly accessed over a network in software or digitally to control sensors, actuators, machine guards, etc. The Yaskawa iC9226-FSoE Motion Controller achieves safety programming over FSoE, with the ASM-X offering 10 digital input points, 10 digital output points, 15 virtual input points and 15 virtual output points for communicating over the FSoE fieldbus.
Insights
Safety integrated functions
• This article covers basic and advanced options for safety integrated functions used in servo drives and controls.
• Safe torque off (STO), safe stop functions and monitoring functions can help improve motion control safety.
Tom Bishop, PE, EASA
How to compare NEMA, IEC motors for replacement
Replacing a motor? Know key differences among IEC and NEMA motor standards to minimize risk of mismatching NEMA or IEC motor replacements in motor and motion control applications.
Online controleng.com
KEYWORDS: IEC and NEMA motors, motor efficiency, motor electrical characteristics, motor sizes
CONSIDER THIS
Are you looking at key metrics when interchanging NEMA and IEC rated motors?
ONLINE
https://www.easa.com
With this article online, Table 2 compares key NEMA and IEC frame dimensions.
Also from Tom Bishop and Control Engineering, see Recognize service conditions for motors and generators
As the North American population of IEC motors continues to grow, two replacement scenarios have emerged for situations where repair/rewind is not viable. One is to replace the IEC motor with an available IEC equivalent (straightforward). The other is to replace the IEC design with a NEMA motor (or vice versa). The latter is a potentially more complicated conversion that should only be undertaken after careful comparison of the electrical, mechanical and physical characteristics of NEMA (National Electrical Manufacturers Association) and IEC (International Electrotechnical Commission) motors covered in this article.
For this discussion, we'll first compare how IEC and NEMA handle applicable electrical characteristics and ratings before considering the respective mechanical and physical characteristics. Table 1 highlights key differences between IEC and NEMA ratings and tolerances for three-phase squirrel cage induction motors (SCIMs). Except for NEMA's use of service factors (SFs) versus IEC duty types (S1-S10), though, there are more similarities than differences between the two standards.
Electrical characteristics of IEC and NEMA motors
This section examines the electrical characteristics of voltage and frequency, current, efficiency, and speed, highlighting the similarities and differences between IEC and NEMA standards as they relate to motor design and performance. Although technically a mechanical property, speed is closely tied to a motor's electrical design and is therefore included here.
Voltage and frequency: NEMA MG 00001-12.44
Note: NEMA changed its Motors and Generators standard designation from MG 1 to MG 00001 in 2024, to follow IEC’s 5-digit numbering system. The NEMA MG 00001-12.44 tolerance for voltage variation under running conditions is ±10% of rated voltage at rated frequency. At other than rated frequency, the absolute value of the percent of voltage variation, plus the absolute value of the percent of frequency variation, should not exceed 10%, provided the frequency variation is within ±5% of rated. NEMA MG 00001 also cautions that under those conditions, performance may not be in accordance with the “standards established for operation at rated voltage and frequency.”
Two examples of NEMA motor nameplate voltage
The following examples explain how to interpret and apply the NEMA MG 00001 motor nameplate voltage.
Example 1 looks at voltage variation at rated frequency for a motor rated 230V:
• Operation at +10% tolerance (230 + 23 = 253V) will increase stator core heating due to increased magnetic flux densities (magnetizing strength) and may increase full load current. For winding designs with relatively low magnetic flux densities (as in many premium efficiency motors), operating at +10% voltage may reduce the current, operating temperature and losses.
• Operation at -10% tolerance (230 - 23 = 207V) will almost certainly increase stator current and heating as the motor attempts to deliver the torque required by the load. For example, since output torque is proportional to the square of the voltage (V²), a motor operating at 10% below rated voltage (that is, 90%, or 0.9) would produce only about 0.9 x 0.9 = 81% of rated torque.
Note that the 230V rating in the above example matches that of some motors used on 208V systems. In that case, the system voltage is only 1V above the 230V motor’s rated minimum (230 x 0.9 = 207V), yet the system voltage could be as low as 190V–well below the tolerance for a 230V motor. This illustrates the importance of checking the system voltage against the motor’s rating to ensure proper application within its voltage tolerance.
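The two calculations in Example 1 (the ±10% voltage window and the square-law torque reduction) can be checked in a few lines:

```python
def torque_fraction(voltage_fraction: float) -> float:
    """Induction-motor output torque scales with the square of applied voltage."""
    return voltage_fraction ** 2

rated = 230.0
low, high = rated * 0.9, rated * 1.1   # NEMA +/-10% voltage window
print(f"allowed window: {low:.0f} V to {high:.0f} V")   # 207 V to 253 V
print(f"torque at 90% voltage: {torque_fraction(0.9):.0%}")  # 81%
```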
Example 2 looks at combined voltage and frequency variation for a 460V motor.
The second case that NEMA MG 00001 describes is a variation of frequency and voltage. Although this scenario is almost nonexistent with utility-supplied power, it could occur with generated power.
For example, consider a 460V motor supplied by a generator operating below rated speed at 57Hz and 442V. The voltage is 4% below rated [1 - (442/460) = 1 - 0.96], and the frequency is 5% below rated [1 - (57/60) = 1 - 0.95], resulting in 9% (4% + 5%) total variation. While this is within the 10% tolerance, MG 00001 cautions that these variations may still affect motor performance.
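The combined-variation rule can be expressed as a simple check, using the generator example from the text (function name is illustrative):

```python
def within_nema_vf_tolerance(v_actual, v_rated, f_actual, f_rated):
    """NEMA MG 00001-12.44 combined check: the absolute percent voltage
    variation plus the absolute percent frequency variation must not
    exceed 10%, with frequency itself within +/-5% of rated."""
    dv = abs(v_actual - v_rated) / v_rated * 100.0
    df = abs(f_actual - f_rated) / f_rated * 100.0
    return df <= 5.0 and (dv + df) <= 10.0

# Generator example from the text: 442 V / 57 Hz on a 460 V / 60 Hz motor
print(within_nema_vf_tolerance(442, 460, 57, 60))  # True (about 4% + 5% = 9%)
print(within_nema_vf_tolerance(430, 460, 56, 60))  # False (frequency beyond 5%)
```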
Voltage and frequency: IEC 60034-1-7.3
IEC 60034-1 also addresses variations from rated voltage and frequency but differs from NEMA MG 00001 by considering their combined, not individual, effects. It also uses a zone system, with zone A being more restrictive than zone B (see Figure 1). Within zone A, the machine must be capable of producing rated power; however, as with NEMA MG 00001, it “need not comply with its performance at rated voltage and frequency.” The IEC standard cautions, though, that the temperature rise may exceed that at rated voltage and frequency. Regarding zone B, the standard implies it is not intended for continuous operation, noting that operation outside zone A “should be limited
in value, duration and frequency of occurrence.” It also recommends derating motors for zone B operation.
IEC 60034-1 allows variation from rated voltage of ±5% (zone A) and ±10% (zone B), and variation from rated frequency of ±2% (zone A) and +3%/-5% (zone B).
Current: NEMA MG 00001-12.47
The NEMA MG 00001 clause regarding the motor current (ampere) rating is short and to the point: When operated at rated voltage, rated frequency, and rated horsepower output, the input in amperes shall not vary from the nameplate value by more than 10%. That means the actual full-load current can be within ±10% of the nameplate rating. For example, if the motor nameplate rating is 100 amps, any value between 90 amps (10% less than 100) and 110 amps (10% greater than 100) could indicate a full-load condition.
Because the actual full-load current can vary from the nameplate value, the actual value may not accurately indicate the load on the motor. However, for most other purposes such as selecting overload protection, the nameplate current should be used.
Current: IEC 60034-1
Since IEC 60034-1 does not address current variation or specify a tolerance, it implies that the nameplate rated current is an exact value.
FIGURE 1: Voltage and frequency limits for IEC motors are shown, according to IEC 60034-1, 7.3, Figure 12. Images and table courtesy: EASA
FIGURE 2: Diagram shows key NEMA and IEC frame dimension letter designations; values are shown in Table 2 with this article online.
Speed (rpm): NEMA MG 00001-12.46
NEMA MG 00001 allows a seemingly liberal full load speed tolerance to account for material and manufacturing differences among identically rated motors. Specifically, it states that the variation “shall not exceed 20 percent of the difference between synchronous speed and rated speed when measured at rated voltage, frequency, and load and with an ambient temperature of 25°C.” However, the 20% variation is less significant than it appears because it applies to slip speed. Slip speed is the difference between the synchronous speed of the magnetic field produced by the stator windings and the physical speed of the rotor.
Insights
NEMA-IEC motor comparison insights
• When motor repair or rewind is not viable, a NEMA or IEC motor may be considered for replacing the other motor.
• Care must be taken when considering replacing an IEC motor with a NEMA motor, or a NEMA motor with an IEC motor, because IEC and NEMA motor standards have key differences.
• Differences in IEC and NEMA motors discussed here include maximum supply voltage variation, current, maximum frequency variation, speed, design code, duty types, service factor and efficiency.
An example will illustrate the impact of the speed tolerance. A 4-pole motor has a synchronous speed of 1800 rpm at 60 Hz. If it is rated at 1750 rpm at full load, its slip speed is 50 rpm (1800 - 1750). Applying the 20% tolerance, the allowable variation is 10 rpm (20% of 50). Because this is a plus-or-minus tolerance, the actual full-load speed may range from 1740 rpm (1750 minus 10) to 1760 rpm (1750 plus 10). As long as the full-load speed falls within this range, the motor is operating in accordance with its nameplate rating. The variation between actual speed and nameplate speed suggests another caution: do not use nameplate speed to estimate motor load. Like nameplate current, it is an inaccurate indicator that can lead to erroneous conclusions.
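The slip-speed arithmetic from the example can be captured in a small helper:

```python
def full_load_speed_range(sync_rpm: float, rated_rpm: float, tolerance: float = 0.20):
    """NEMA MG 00001-12.46: full-load speed may vary by +/-20% of the slip
    speed (synchronous speed minus rated speed). Returns (low, high) rpm."""
    slip = sync_rpm - rated_rpm
    variation = tolerance * slip
    return rated_rpm - variation, rated_rpm + variation

# 4-pole, 60 Hz motor: 1800 rpm synchronous, 1750 rpm nameplate
low, high = full_load_speed_range(1800, 1750)
print(low, high)   # 1740.0 1760.0
```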
Speed (rpm): IEC 60034-1-12.1
For small motors rated <1 kW, the IEC tolerance for speed is ±30% of the slip; and for motors rated ≥1 kW, the tolerance is ±20%. Thus, the NEMA and IEC standards are in agreement for the vast majority of induction motors.
Efficiency: NEMA MG 1-12.58
Per NEMA MG 1, motors operating at rated voltage and frequency must meet or exceed the minimum efficiency associated with the “NEMA Nominal Efficiency” (or “NEMA Nom. Eff”) listed on the nameplate. The minimum efficiency values represent 20% higher losses than the associated nominal values; e.g., for 94.5% nominal efficiency, minimum efficiency = 93.6%. NEMA tables list the nominal efficiencies and associated minimums by horsepower and speed (poles).
Efficiency: IEC 60034-1
Table 20 of IEC 60034-1 provides tolerances for efficiency variations based on the motor’s power rating. For motors rated ≤150 kW, the tolerance is:
-15% x (1 - decimal value of efficiency).
For example, if the motor efficiency was 93%, the tolerance is:
-15% x (1 - 0.93) or -0.15 x 0.07 = -0.0105.
Similarly, for motors rated >150 kW, the tolerance is: -10% x (1 - decimal value of efficiency).
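The two IEC tolerance cases can be combined into one small helper. An illustrative sketch only; the function name is ours, not from the standard:

```python
def iec_efficiency_tolerance(efficiency, rated_kw):
    """Negative tolerance on efficiency per IEC 60034-1 Table 20:
    -15% of (1 - efficiency) for motors rated <= 150 kW,
    -10% of (1 - efficiency) for motors rated above 150 kW."""
    factor = 0.15 if rated_kw <= 150 else 0.10
    return -factor * (1.0 - efficiency)

# The worked example: 93% efficiency, motor <= 150 kW
t = iec_efficiency_tolerance(0.93, 75)   # about -0.0105
```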
Mechanical and physical characteristics of NEMA, IEC motors
This section focuses on mechanical and physical characteristics of frames and shafts, terminal boxes (enclosures), and NEMA and IEC frame and power ratings.
Frames and shafts: NEMA, IEC motors
Both NEMA and IEC assign specific power ratings to certain frame sizes according to speed:
• In general, output power ratings and frame sizes are comparable.
• Shaft heights, foot spacings and shaft diameters are equal within 3 or 4 mm.
• NEMA output shaft lengths tend to be longer.
• NEMA frame sizes designate shaft dimensions.
• IEC frame sizes do not designate shaft dimensions. (A separate standard IEC 60072-1 provides shaft dimensions).
The first two digits of a NEMA frame size designation represent 4 times the distance from the shaft centerline to the bottom of the feet (hereafter “shaft height”) in inches (Example: 11-inch shaft height x 4 = 440 frame series)
IEC motors use actual shaft height in millimeters (Example: 280 mm shaft height = 280 frame)
Most frame sizes in either NEMA MG 1 or IEC 60072-1 have a comparable equivalent in terms of shaft height:
• Example: 280 mm / (25.4 mm/in) = 11.02 inches
• Example: 11 in x 25.4 mm/in = 279.4 mm
• One exception: the 100-frame IEC motor has no comparable NEMA counterpart: 100 mm / (25.4 mm/in) = 3.94 in, and 3.94 x 4 = 15.7, which would correspond to a nonexistent NEMA 160-series frame.
IEC defines a wider range of shaft heights:
• 56 mm through 900 mm shaft heights
NEMA does not have equivalents for all of them:
• NEMA stops at the 680-frame series
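The frame-designation conventions above reduce to simple conversions. A minimal illustration (the function names are ours), assuming three-digit NEMA frames; the rule differs for two-digit frames such as 48 and 56:

```python
MM_PER_IN = 25.4

def nema_frame_to_shaft_height_in(frame):
    """First two digits of a three-digit NEMA frame equal 4x the
    shaft height in inches, e.g. the 440 series -> 11 in."""
    return int(str(frame)[:2]) / 4.0

def iec_frame_to_shaft_height_mm(frame):
    """An IEC frame number is simply the shaft height in mm."""
    return float(frame)

# Cross-check the examples in the text:
h_nema = nema_frame_to_shaft_height_in(440)            # 11.0 in
h_iec = iec_frame_to_shaft_height_mm(280) / MM_PER_IN  # about 11.02 in
```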
Terminal boxes (enclosures) for NEMA and IEC motors
NEMA standard terminal box location is on the left-hand side facing the output shaft (F1):
• Optional positions on the right-hand side (F2) and on top (F0)
• Flying (unsecured) leads require extra space to connect and contain inside enclosure.
IEC standard is terminal box on top B3T (NEMA F3):
• Optional locations on either side – left B3L (F1) or right B3R (F2)
• Terminal box generally can be rotated 4 x 90 degrees
• Terminal posts make for easy connection of leads.
NEMA and IEC frame and power rating comparisons are shown in Figure 2 and Table 2.
Annex of referenced standards
Standards referenced in this article are named below.
IEC 60034-1 Rotating electrical machines - Part 1: Rating and performance
IEC 60034-2-1 Rotating electrical machines - Part 2-1: Standard methods for determining losses and efficiency from tests
IEC 60072-1 Rotating electrical machines - Dimensions and output series - Part 1: Frame numbers 56 to 400 and flange numbers 55 to 1080
NEMA MG 1 Motors and Generators. ce
Thomas H. Bishop, P.E. is a senior technical support specialist at EASA Inc. www.easa.com Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
TABLE: Key differences between IEC and NEMA motor standards include maximum supply voltage variation, current, maximum frequency variation, speed, design code, duty types, service factor and efficiency, as shown.
ANSWERS
Chet Barton, process safety director, and Heath Stephens, automation solutions director, Hargrove Controls & Automation
Securing the last line of defense: Cybersecurity in safety instrumented systems
Cybersecurity best practices for SIS include knowing key risks, assessment requirements under IEC 61511 and practical steps to improve industrial system resilience.
Safety instrumented systems (SIS) are engineered to step in when other layers of protection fail. They monitor the process for hazardous conditions and automatically bring equipment to a safe state when the basic process control system (BPCS) can no longer maintain stable operation. In high-consequence environments, this function is often the last barrier between a process upset and a catastrophic event.
• Legacy systems: a persistent risk
• Embedding cyber resilience into the process safety lifecycle
CONSIDER THIS How can cybersecurity be fully integrated into the safety lifecycle through risk assessments, network segmentation and strict access controls to ensure safety instrumented systems remain reliable and protected from cyber threats?
To support this role, SIS are designed to be logically and physically separate from other systems. But in practice, that separation is not always perfect. Many SIS share routers, firewalls or maintenance devices with the BPCS. Even systems that are air gapped still require periodic connections for programming, proof testing or troubleshooting. Past incidents like the Stuxnet malware showed how removable media can bridge those gaps. Every one of these touchpoints introduces potential pathways for threats to enter otherwise isolated environments. This design intent (independence, reliability and infallibility) is exactly what makes SIS cybersecurity so critical. A compromise of the safety layer has the potential to reach beyond just the tech realm and into the physical world of equipment damage, environmental release or loss of life.
The growing risk to safety layers
In the past, safety instrumented systems (SIS) were seen as disconnected fortresses, sealed off from business networks and immune to the growing tide of cyber threats, but some recent incidents have proven otherwise. One such example was the 2017 attack on a Triconex safety system at a petrochemical facility in Saudi Arabia. Though details were not widely publicized, we know the attack targeted the safety layer.
While the incident did not result in a major accident, investigators found that the attackers gained control over the safety controller logic and attempted to modify it. If the altered logic had gone undetected, the system may have failed to bring the plant to a safe state during an upset, potentially leading to a catastrophic event. The attackers tried to disable protections that prevent loss of life and large-scale damage.
This represents a fundamental change in intent. Traditional cyber incidents often focus on data theft or operational downtime, but targeting an SIS shows a willingness to cause physical harm, not just digital disruption. That’s why the Triconex attack is still seen as a watershed moment. It illustrated that threat actors understand industrial safety functions and are actively exploring ways to compromise them.
This shift in threat posture reveals a hard truth: Cybersecurity for SIS can no longer be treated as optional or out of scope. These systems are often the last line of defense against physical harm in industrial environments. If compromised, they may fail to perform their safety functions, or worse, become a vehicle for harm.
Cybersecurity in IEC 61511
Under IEC 61511, cyber assessments are now a required part of the hazard and risk analysis lifecycle. These assessments feed directly into the safety case. The presence of cyber vulnerabilities can alter risk assessments and mitigation strategies, making cybersecurity a core element of process safety.
In a recent case, a facility with outdated documentation requested Hargrove’s assistance to evaluate the current state of their systems, conduct a cyber assessment alongside traditional hazard analysis and make the recommended fixes. This kind of dual focus reflects a growing awareness that process safety must now include cyber resilience. Cyber assessments should be conducted by personnel trained and certified in industrial cybersecurity, such as those certified through TÜV or similar programs. This qualification helps ensure that assessments meet technical standards and that findings can be translated into actionable, risk-informed mitigation steps. Find guidance related to SIS security in ISA TR84.00.09, ISO/IEC 27001:2013 and IEC 62443-2-1:2010.
Build SIS resilience: cyber hygiene
Protecting SIS begins with disciplined engineering and operational control. At one facility, for example, cyber risk is mitigated by issuing dedicated, internet-free laptops for SIS programming. These machines are not used for email or internet browsing and must be returned after use. Programs are backed up to physical media, adding another layer of access control. While this level of discipline can feel cumbersome, it is highly effective and increasingly necessary. The following core practices help strengthen SIS cybersecurity and maintain the integrity of the safety layer.
Physical and network isolation - Keep SIS off the internet, and, where feasible, use dedicated hardware and network paths that are separate from the BPCS. Even air-gapped systems can be at risk. Removable media or maintenance laptops often become unintended bridges between isolated systems and the broader network. Strong hygiene procedures must apply even to standalone or “offline” systems.
Controlled access devices - Use clean, dedicated laptops or terminals for programming and diagnostics. Avoid devices that also connect to email, the internet or other corporate systems. The approach used at the facility mentioned above (assigning internet-free programming laptops that are checked in and out and backed up to physical media) illustrates how targeted controls can significantly reduce exposure.
Patch and vulnerability management - Keep SIS and related components up to date with vendor-provided patches. Monitor technical advisories. New vulnerabilities are often weaponized quickly. Outdated software or unsupported devices increase risk.
Elements of cybersecurity for safety instrumented systems
Antivirus and endpoint protection - Many SIS are deployed without basic protections such as antivirus software. Ensure these tools are installed, regularly updated and actively monitored. One unprotected endpoint can offer an entry point for malware.
Segmentation and DMZs - Use network segmentation to isolate SIS from plant-wide and business systems. A DMZ can help manage limited, necessary communications while reducing overall exposure. Segmentation also limits the spread of cyber incidents. In another example, an incursion affected parts of a server infrastructure. Because segmentation had been properly implemented, the threat was contained.
Monitoring and anomaly detection - Monitoring plays a crucial role in maintaining cyber hygiene. Some tools analyze network traffic down to the packet level, flagging anomalies such as unfamiliar ports or unexpected communication paths. ce
Chet Barton, process safety director, and Heath Stephens, automation solutions director, Hargrove Controls & Automation; edited by Gary Cohen, Control Engineering senior editor, gcohen@wtwhmedia.com.
Insights
Safety instrumented systems insights
• Safety instrumented systems (SIS), once assumed to be isolated from cyber threats, are increasingly targeted by attackers seeking to compromise safety layers and potentially cause physical harm.
• Standards such as IEC 61511 now require cybersecurity assessments as part of the process safety lifecycle, making cyber risk management a core component of hazard and risk analysis.
• Strengthening SIS resilience requires disciplined cybersecurity practices, including isolation, controlled access devices, patch management, segmentation and monitoring.
FIGURE: Safety instrumented systems' cybersecurity hygiene can improve with attention to six elements. Courtesy: Control Engineering with information from Hargrove Controls & Automation
Courtesy: Adobe Stock
Janelle Armstead-English, Seeq
How AI-powered advanced analytics platforms drive operational lifecycle innovation
Through persistent but calculated adoption of AI analytical tools, manufacturers are optimizing production efficiency, bolstering predictive maintenance and accelerating time from data to insight.
As competition in global markets continues to intensify, digital transformation is increasingly differentiating successful innovation from stagnant production. A recent Digitopia study reports a widening divide between digital pioneers and those hesitant to modernize, as many companies prioritize endurance and reliability over transformation. This is especially true in risk-averse environments, such as the process industries.
However, as sensors, control systems and automation technologies grow in quantity and generate increasingly vast amounts of data, attaining meaningful outcomes by storing, accessing and processing this information effectively becomes ever more challenging. Left to traditional automation toolsets, companies are struggling to keep pace with escalating customer demands.
Artificial intelligence (AI) software tools, including leading advanced analytics platforms, are helping bridge the gap from data generation and management to actionable insights. Data analytics plays a key role in the shift, transforming raw data into valuable knowledge. Some of the most groundbreaking innovations are occurring in this area.
AI in the process industries
AI is transforming nearly every business environment, but its adoption differs markedly across sectors. Consumer-focused fields like retail, logistics and health care are latching onto AI tools quickly, using them to innovate and achieve immediate value. The process manufacturing industries, by contrast, are more deliberate. With regulatory compliance, consumer and employee well-being and operational reliability at stake, industrial organizations must proceed methodically.
Safety, security and quality remain top manufacturing priorities when evaluating AI tools. Applications range from narrow use cases, such as self-driving vehicles adapting to traffic, to more sophisticated systems with human-like adaptability. Within the industrial context, today’s emphasis is on machine learning (ML) and generative AI (GenAI). These technologies analyze patterns, predict outcomes and improve with iterative experience.
While adoption across the sector may not be explosive, the steady growth is still highly consequential. For example, McKinsey estimates that petrochemical companies can capture 3 to 5 percent additional margin from existing assets by investing in digital and AI capabilities, redefining how organizations plan and optimize their capacity.
Like earlier waves of cloud computing and big data, AI is surrounded by buzz. Yet unlike some past trends, AI’s tangible impact is already evident, and it is here to stay. However, high-quality and accessible data is the critical enabler for success, and many process manufacturers either lack sufficient usable data or are burdened with silos that prevent effective analysis. Overcoming these barriers is essential for AI models to deliver insights that directly improve performance and KPIs.
AI in the automation lifecycle
AI and machine learning are transforming the automation lifecycle by enhancing key stages such as optimization, decision support and maintenance. AI is pivotal in predictive maintenance, where it helps organizations shift from reactive to proactive equipment health assessments and data-driven upkeep strategies. Traditional maintenance relies on historical patterns or fixed schedules, which often result in unnecessary downtime or unexpected failures, but AI changes this dynamic. By applying machine learning algorithms to sensor and performance data, manufacturers can detect subtle anomalies that signal emerging issues, often well before they deteriorate into costly breakdowns (Figure 1).
FIGURE 2: A specialty chemical manufacturer leveraged Seeq to establish an operational baseline model for its distillation columns. It then implemented predictive maintenance by comparing live operational data to the standard to reduce fouling.
Additionally, continuous monitoring through AI-driven models improves anomaly detection and enables more accurate failure predictions. Leveraging these insights, organizations can optimize maintenance schedules that are tailored to actual asset conditions, which reduces unplanned downtime, extends equipment lifespans and empowers subject matter experts (SMEs) to focus on high-value tasks rather than routine monitoring. AI can also automate the data interpretation process, reducing the manual effort required for diagnostics.
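As a rough illustration of the anomaly-detection idea described above (not any vendor's actual model), a trailing-window z-score check can flag readings that deviate sharply from recent sensor behavior. The function name, window size and threshold are assumptions for the sketch:

```python
import statistics

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the trailing `window` of values.
    Minimal illustration, not a production monitoring model."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        std = statistics.stdev(recent)
        if std > 0 and abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# A steady sensor signal with one injected spike at index 30:
signal = [50.0 + 0.1 * (i % 5) for i in range(40)]
signal[30] = 58.0
print(flag_anomalies(signal))  # [30]
```

In practice the "baseline" would come from a multivariate model over many signals, but the principle is the same: compare live data to an expected operating envelope and alert on sustained deviation.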
Integrating AI into advanced analytics platforms further expands its impact. Beyond identifying potential failures, AI enables prescriptive analytics to provide recommendations for the most effective corrective actions. This helps organizations move beyond simply predicting problems to actively optimizing their responses.
LEARNING OBJECTIVES:
• Understand how AI-powered advanced analytics platforms convert large volumes of industrial data into actionable insights that improve operational performance and decision-making.
• Learn how AI and machine learning enable predictive maintenance, anomaly detection and prescriptive recommendations that optimize equipment reliability and reduce unplanned downtime.
• Recognize how advanced analytics platforms help institutionalize operational knowledge and support knowledge transfer by capturing expertise and integrating it into datadriven decision support systems.
‘Analysis revealed condenser temperature variances as the primary contributors to fouling; adjustments increased time between cleanings.’
CASE STUDY: reducing fouling with AI-powered predictive maintenance
A global specialty chemicals manufacturer used an advanced analytics and AI platform to predict fouling in its distillation towers and significantly improve operational efficiency. Before implementing the software, unexpected fouling frequently caught the facilities team off-guard, prompting unplanned shutdowns and operational interruptions.
Insights
AI-powered analytics insights
• AI-powered advanced analytics platforms help manufacturers convert growing volumes of industrial data into actionable insights that improve operational efficiency, maintenance planning and decision-making.
• Machine learning enables predictive maintenance by identifying anomalies and early warning signs of equipment issues, allowing organizations to shift from reactive or schedule-based maintenance to condition-based strategies.
• Advanced analytics platforms also help capture and scale operational expertise by turning historical data and subject matter expert knowledge into reusable insights that support knowledge transfer and faster problem solving.
A mixed team of engineering and operational SMEs collaboratively specified a wide range of operating conditions — including composition, temperature, pressure and flow rates — for analysis using the platform’s multivariate modeling capabilities, with the objectives of identifying and eventually predicting fouling events. The machine learning model generated rapid insights into the contributing factors and their complex interactions, providing high-level summary metrics, trend views and recommended maintenance actions to plant personnel for clarity and actionable guidance.
The first step was establishing baseline operating conditions and selecting target analysis periods. Using the platform’s signal selection tools, SMEs identified and removed low-variance signals and filtered out highly correlated variables. They then ranked the importance of remaining signals, revealing which factors most strongly influenced fouling. This approach not only identified key contributors but also highlighted causal relationships across the stages before, during and after fouling, enabling a clearer view of root causes (Figure 2).
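The signal-selection steps described above (dropping near-constant signals, then filtering out one of each highly correlated pair) can be sketched in plain Python. This is an illustrative sketch, not Seeq's implementation; the function names and thresholds are assumptions:

```python
import statistics

def _pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def select_signals(signals, min_variance=1e-6, max_corr=0.95):
    """`signals` maps signal name -> list of samples.
    1. Remove low-variance (near-constant) signals.
    2. For each highly correlated pair, keep only one signal."""
    kept = {n: v for n, v in signals.items()
            if statistics.pvariance(v) > min_variance}
    names = sorted(kept)
    dropped = set()
    for i, a in enumerate(names):
        if a in dropped:
            continue
        for b in names[i + 1:]:
            if b not in dropped and abs(_pearson(kept[a], kept[b])) > max_corr:
                dropped.add(b)
    return [n for n in names if n not in dropped]
```

For example, a constant signal is removed by the variance filter, and a signal that is a linear rescaling of another (correlation 1.0) is removed by the correlation filter, leaving only independent candidates for the importance ranking described above.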
Over time, the model self-optimized to more reliably detect precursory conditions to fouling, providing the team with up to two months of warning before serious anomalies appeared. These predictive capabilities now inform proactive planning so staff can conduct maintenance activities during scheduled downtime to significantly reduce unanticipated interruptions.
The analysis revealed condenser temperature variances as the primary contributors to fouling. Armed with this knowledge, the company adjusted operations to extend runtime between cleanings, reducing unnecessary upkeep. These alterations increased efficiency, reduced downtime and delivered measurable profitability gains.
AI and knowledge transfer
AI helps close the loop of the automation lifecycle by institutionalizing knowledge, which is often overlooked but increasingly vital. With nearly a quarter of today’s engineering workforce expected to retire within the next decade, establishing a strong knowledge transfer process is critical to prepare the next generation of engineers for success. This effort extends beyond conventional documentation, to building a living knowledge base where every insight, success and failure becomes part of an organization’s collective memory.
These experiences also feed AI tools, which continuously draw from historical and real-time data to uncover long-term trends, root causes and best practices. By codifying operator expertise into decision-support systems, AI makes human knowledge scalable across plants, shifts and geographies, ensuring critical insights are always within reach.
Historically, SMEs lacked the time to sift through extensive reports to extract actionable insights, but AI models excel at mining these resources. With a structured knowledge base that includes process documentation, troubleshooting records and maintenance logs, AI can rapidly surface relevant information, empowering SMEs to make faster and better-informed decisions that ensure past lessons continually improve present and future performance.
As AI proliferates, manufacturers cannot afford to stand idly by. Fused with methodical and calculated approaches to safety, quality and operational reliability, advanced analytics and AI platforms are propelling continuous improvement and predictive maintenance throughout the process industries. ce
Janelle Armstead-English is the industry principal for chemicals with Seeq Corp. Edited by Gary Cohen, senior editor, Control Engineering , gcohen@wtwhmedia.com.
New sensors, expert services yield 4 benefits
Process professionals can fill critical gaps, providing real-time diagnostic sensing insights, centralized documentation, and on-site measurement expertise.
As part of today’s competitive industrial landscape, manufacturers must balance relentless productivity demands with stringent quality and safety requirements. This extends to process instrumentation hardware, software, workflows, calibration and services, all of which must ensure sensors provide the needed information. The task is more complicated when plants have fragmented instrumentation data, reactive maintenance habits and staffing shortages, factors that collectively erode uptime and inflate costs.
Instrument suppliers with consultative and technology-driven approaches can help companies address these and other pain points, restoring reliable and cost-effective operation. Real-world case studies from a bottling operation and a metal smelting plant illustrate how scalable digital tools, hands-on support and holistic engineering services can dramatically improve plant reliability, reduce unplanned downtime, and unlock the full value of existing instrumentation.
Barriers to process efficiency
Processors across a broad spectrum of industries frequently encounter a recurring set of challenges to operational efficiency. Digital records often end up siloed or incomplete, making it difficult to trace equipment calibration and maintenance history, verify compliance, and extract actionable insights. Manual and reactive asset management activities frequently result in missed calibration windows, inconsistent spare parts inventories, and suboptimal use of available equipment.
In many cases, these challenges are exacerbated by overextended technical staff. A limited pool of plant engineers and technicians juggles troubleshooting and preventive upkeep tasks, leaving little capacity for strategic improvement.
Unplanned downtime (equipment failure, calibration errors or inadequate monitoring) can cost millions annually, disrupt supply chains and jeopardize regulatory compliance. These factors create a cycle where a lack of accessible data fuels inefficient resource allocation, which can cause downtime and financial loss.
Condition monitoring, process support
Top instrumentation suppliers partner with processors across multiple industry segments and maintain a deep understanding of issues affecting plant personnel. Suppliers can use a consultative approach that blends real-time condition monitoring, centralized documentation and expert on-demand support (Figure 1).
Consultative engagement begins with a discovery phase in which the supplier’s engineers work with plant leadership to map current workflows, data gaps and pain points. By asking targeted questions and reviewing existing instrumentation, they identify each facility’s unique needs, such as tighter calibration control, faster spare parts replacement or deeper insight into process variability. The findings are applied toward solutions aligned with the processor’s specific objectives and budget.
Central analytics and insight platforms are frequently employed, which provide plants with continuous health diagnostics for every instrument so that subject matter experts (SMEs) can adopt predictive, condition-based maintenance practices, rather than costly time-based service schedules (Figure 2). All calibration results, device logs and performance trends can be stored in these unified libraries, providing instant, audit-ready documentation and a single source of truth for asset managers, along with historical performance analytics that empower teams to spot degradation patterns early.
KEYWORDS: Instrumentation, calibration reliability, process measurement
CONSIDER THIS
Is your process instrumentation enabling process optimization or leading to inefficiencies and downtime?
ONLINE
From Control Engineering, also see process instrumentation and sensors coverage at https://www.controleng.com/process-instrumentation-sensors/ and “New smart instruments, wireless for process industry applications” at https://www.controleng.com/new-smart-instruments-wireless-for-process-industry-applications/
FIGURE 1: Endress+Hauser recommends a consultative approach that centers on collaboration with plant personnel to identify and mitigate operational challenges, including sources of downtime, manual and reactive workflows, disorganized records and process inefficiencies. Images courtesy: Endress+Hauser
For facilities experiencing a shortage of technical staff, leading suppliers can supplement in-house plant personnel with hands-on support and coaching, amplifying immediate and long-term workforce effectiveness. In addition, 24/7 remote support via digital tools empowers companies to diagnose and resolve urgent issues.
Such capabilities reduce unplanned downtime, streamline asset management and boost operational and maintenance efficiency. Modular and scalable solutions can be rolled out incrementally as needs require, while maintaining confidence through close collaboration with a trusted partner.
Scalable digital transformation
A leading U.S. bottler was facing several interrelated challenges with its operational reliability.
Fragmented digital records and reactive maintenance practices inhibited the company from responding quickly to compliance audits and equipment failures, resulting in frequent unplanned downtime.
The company struggled to uphold consistent process traceability across multiple production lines, risking regulatory penalties and lost business. Lack of real-time visibility into the condition of critical devices forced the maintenance team into routine, time-based services instead of predictive and condition-based interventions.
Improve instrumentation applications
An instrumentation supplier conducted workshops and interviews with senior leadership and plant managers to extract operational priorities (Figure 3). This effort produced three core solutions:
• Unified calibration strategy: A cross-site calibration framework introduced standardized key performance indicators (KPIs) and digital reporting. Calibration downtime was contractually aligned with preventive maintenance windows, ensuring minimal disruption while preserving data integrity.
• Condition-based maintenance via Netilion: Leveraging smart instrumentation, the Netilion platform delivers continuous device health monitoring, centralized documentation, and historical performance analytics.
• Dynamic spare parts management: The instrumentation provider piloted a flexible spare parts program at five sites, validating and refining inventory levels and logistics before scaling the approach enterprise-wide.
Benefits included:
1. Turning fragmented and reactive processes into a cohesive and data-driven maintenance ecosystem that delivered measurable gains immediately.
2. The real-time diagnostics in the bottler’s Netilion dashboards now flag instrument anomalies long before they impact the production line, which reduced unexpected shutdowns.
3. Automated calibration records are seamlessly linked to business systems, delivering auditready documentation that satisfies Food and Drug Administration (FDA) and other regulatory requirements.
4. Standardized calibration data stored in Netilion improves records visibility, empowering maintenance crews to act swiftly and confidently when issues are detected.
Scalable digital tools, continuous diagnostics and round-the-clock technical information helped the bottling company enhance operational efficiency and regulatory preparedness, positioning the company to acquire and retain strategic agreements.
Smelting plant: Instrumentation
A smelting company was experiencing downtime, delayed projects and uneven maintenance, all stemming from a thin technical workforce and aging infrastructure. With only 20% of the full-time staff needed to run the plant effectively, the company faced a large backlog of commissioning, troubleshooting and preventive maintenance tasks.
The processor commissioned an instrument supplier to develop objectives focused on realistic outcomes, rather than sweeping theoretical digital transformation. The resulting joint team developed a phased roadmap that tackled operational gaps while laying out a clear trajectory for long-term reliability and efficiency. The plan emphasized on-site support, instrumentation that closed critical gaps and continuous knowledge transfer. Instrumentation installations included:
• Flowmeters in key process streams to deliver accurate material balance data and reduce waste.
• Level sensors in tanks and vessels to ensure feedstock availability and prevent overfills.
• Pressure transmitters and gauges in high-pressure systems to enhance safety and control.
• Liquid and optical analysis instruments to provide real-time chemical and gas data, supporting compliance and safety.
All devices were fully commissioned and integrated into the plant control logic, and subject matter experts worked with the smelter staff to ensure they understood each tool. The supplier stationed technicians with the full-time operations crew. Specialists assisted with calibration, valve adjustments, troubleshooting and other details. ce
Kara Witsman is the service portfolio manager at Endress+Hauser. Edited by Mark T. Hoske, editor-in-chief, Control Engineering, WTWH Media, mhoske@wtwhmedia.com.
FIGURE 2: Endress+Hauser’s Netilion cloud suite includes accessible tools for device health monitoring, centralized documentation, predictive maintenance insights and performance analytics.
COVER, FIGURE 3: Endress+Hauser can conduct a series of workshops with plant personnel to improve traceability and identify optimization opportunities.
Insights
New process sensors, expert services, benefits insights
• Hardware, software and skills gap issues create barriers to consistent production and process efficiency.
• Instrumentation and expert services help condition monitoring, process documentation and support.
• Examples include scalable digital transformation for a bottling company and instrumentation consultation and support at a smelting plant.
Innovations
Vote: 2026 Control Engineering Product of the Year®
Control Engineering subscribers: See the Product of the Year finalists and select the best products based on technological advancement, service to the industry and market impact. https://www.controleng.com/voting-is-open-for-the-2026-control-engineering-product-of-the-year-program/
Software-defined radio for radar, satellite, 6G research
The Emerson NI USRP X420 is a software-defined radio (SDR) for radar systems, satellite communications and 6G research. The X420 extends the NI USRP product line with frequency coverage up to 20 GHz, including FR3, Ku and X bands, for applications such as multi-channel radar, non-terrestrial networks (NTN) and integrated sensing and communications (ISAC). The X420 provides high RF performance in a software-defined platform designed for research, prototyping and deployment in aerospace, defense, wireless communications and academia. Emerson, www.emerson.com
One sensor platform for fill, motion and material monitoring
The Pepperl+Fuchs UB*-F42 ultrasonic sensor can detect objects, track material consumption, measure fill levels and monitor travel. The IO-Link interface supports integration with IIoT systems. In addition to measured values, it transfers process and status data for control and maintenance applications. The new generation builds on a device concept used in earlier products, adding device versions and expanding functions and adjustment options, including an adjustable sound beam diameter, a small dead band, IP67 protection and automatic sensor synchronization. Pepperl+Fuchs, www.pepperl-fuchs.com
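IO-Link sensors like this one deliver measured values and status flags packed into a short process-data word. As a minimal sketch of what a controller-side decode can look like, the snippet below unpacks a hypothetical 16-bit word into a distance value and two switching-output states. The bit layout (14-bit value plus two status bits) is an illustrative assumption, not the UB*-F42's actual format; a real integration would follow the layout defined in the sensor's IODD file.

```python
# Hypothetical sketch: decoding a 2-byte IO-Link process-data word from an
# ultrasonic distance sensor. The bit layout below (14-bit distance plus two
# switching-status bits) is illustrative only -- consult the device's IODD
# file for the real layout.

def decode_process_data(raw: bytes) -> dict:
    """Unpack a 16-bit process-data word into distance and status flags."""
    word = int.from_bytes(raw, byteorder="big")
    return {
        "distance_mm": word >> 2,       # upper 14 bits: measured distance
        "switch_1": bool(word & 0b10),  # bit 1: switching output 1 state
        "switch_2": bool(word & 0b01),  # bit 0: switching output 2 state
    }

# Example: raw word 0x0C8A -> distance 802 mm, switch 1 on, switch 2 off
print(decode_process_data(bytes([0x0C, 0x8A])))
```

The same pattern extends to longer process-data layouts; the point is that IO-Link exposes diagnostics alongside the measurement in one cyclic transfer.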
Latest modular I/O approach supports distributed machine control
Trio Motion Technology launched the Trio MS I/O system for its Motion-PLC controller range. The system integrates high-speed I/O with Trio Motion-PLC controllers using modular I/O slices. The system connects I/O slices to a controller using the MS-Bus interface, a local communication protocol for MS I/O slice connection. It supports combinations of digital inputs and outputs and analog inputs and outputs. The I/O slices use forward insertion and mount on a DIN rail. Trio Motion Technology, www.triomotion.com
Unlocking a new edge in distributed control
Schneider Electric’s EcoStruxure Foxboro Software Defined Automation (SDA) is an open, software-defined distributed control system (DCS). The offering combines Foxboro technology with software-based automation to support modernization for hybrid and process-industry operations. Foxboro DCS is used for real-time process control in industrial operations. EcoStruxure Foxboro SDA is designed to provide a scalable, software-defined approach while maintaining reliability. The software-defined distributed control system is built on EcoStruxure Automation Expert (EAE). Schneider Electric, www.se.com
New hybrid solution drives electrification
Honeywell introduced a hybrid industrial process-heating system that uses natural gas and electricity, aiming to reduce operating costs and emissions. By automating heating controls, manufacturers can switch between natural gas and electricity for industrial heating to manage energy costs and avoid relying on one fuel. Honeywell said its hybrid heating system is designed to integrate with existing equipment and can supply up to 30% of heating capacity from electricity. Alerts and remote monitoring provide more visibility into performance. Honeywell, www.honeywell.com
Surge devices target safety circuits and PV systems
Phoenix Contact introduced Phoenix Valvetrab Safe Protection Plus (VAL-SPP) surge protective devices with miswiring and touch protection. The SPDs have a specified tightening torque of 3 Newton-meters. The family includes the VAL-US-SPP for North American power supply systems, aligned with applicable NEMA requirements. The North American version is UL Listed and complies with NFPA 79, which includes requirements for surge protection on safety circuits. The devices have modular, plug-in remote signaling. Phoenix Contact, www.phoenixcontact.com
Modern control system, modular, easier integration
The ABB Ability System 800xA 7.0 is the latest release of the ABB distributed control system, aimed at supporting phased modernization of industrial automation environments. As a long-term support (LTS) release, System 800xA 7.0 is intended for existing installations and new projects and adds broad Microsoft Windows OS compatibility, expanded virtualization support and an extended support lifecycle with fewer major upgrade events. It builds on ABB's process automation portfolio and is intended to give users more options to keep control systems current and supported. A modular architecture is intended to support integration with other industrial systems and to accommodate system expansion. Users can run performance monitoring, analytics and AI-based decision-support tools while isolating core control functions. ABB, www.abb.com
Active alignment, nanometer-level precision
Aerotech Inc. collaborated with Santec and Senko Advanced Components to develop the PICAlign architecture, an active alignment approach intended to address manufacturing challenges in high-volume co-packaged optics (CPO) production. Scaling CPO introduces significant manufacturing challenges, including aligning large numbers of optical fibers to photonic integrated circuits (PICs) with nanometer-level precision across six degrees of freedom. Aerotech, www.aerotech.com
Pressure data for automation systems
The Noshok PTI10 pressure transmitter and Noshok PTI15 transmitter/switch add IO-Link digital communication to the PTI Series platform. The PTI15 Series transmitter/switch replaces the 800 Series Electronic Indicating Pressure Transmitter/Switch and is available at a lower price. IO-Link technology enables integration with automation and process networks. The devices support streamlined installation, remote configuration and ongoing diagnostics, providing access to device data for maintenance and system integration. Noshok, www.noshok.com
Snap in, scan out: I/O setup simplified
Beckhoff ED series EtherCAT Terminals add a new housing format and other functions. The terminals remain compatible with existing Beckhoff hardware and maintain the same core functions and built-in diagnostics as other EtherCAT terminals. The IP20-rated ED series is designed for modular, scalable I/O applications in industrial automation systems. The DIN rail-mounted terminals have an updated housing design, push-in wiring for tool-less installation and diagnostics accessible via an app using a scannable data matrix code on the product. The updates streamline wiring and diagnostic access for distributed I/O systems and support future distributed I/O requirements. Beckhoff, www.beckhoff.com
Back to Basics
AUTOMATION, CONTROLS
Automated condition monitoring in the IIoT era
Industrial internet of things (IIoT) is helping machine condition monitoring by using wireless sensors, autonomous monitoring and analytics. Control Engineering Europe asked Ryan Roaldson, product line director at Baker Hughes, about condition monitoring advances in the IIoT era. More wireless sensors, cost-effective and rugged, are helping.
Control Engineering Europe (CEE): How has the IIoT altered condition monitoring in industrial settings, and how can operational efficiency and machinery health benefit?
Roaldson: Advances in IIoT technology have transformed condition monitoring in industrial environments. Traditionally, only the most critical assets were monitored due to high costs and deployment challenges, leaving less critical assets to time-based or run-to-fail maintenance strategies. This often led to unnecessary maintenance on healthy equipment, or to unplanned downtime and costly repairs from unexpected failures. With IIoT, the landscape has shifted in several ways. For example, sensor deployment is now widespread: sensors are wireless, more intelligent, cost-effective and rugged, making it feasible to monitor a broader range of assets, including those in hazardous or hard-to-reach areas. Secure, wireless transmission of sensor data to the cloud then enables scalable, enterprise-wide asset health management, reducing reliance on manual data collection. The IIoT also enables continuous, autonomous monitoring of assets, supporting early detection of degradation and potential failures.
FIGURE: Automation technology helps the industrial internet of things (IIoT) in three ways: wireless sensors, cloud analytics and edge computing. Courtesy: Control Engineering, with information from Control Engineering Europe and Baker Hughes
Condition monitoring data can be analyzed closer to its source using edge computing, reducing latency and bandwidth issues and speeding up decision-making while the integration of artificial intelligence (AI) and machine learning with physics-based models allows for more accurate diagnostics, reducing false positives and missed events. These models leverage decades of high-quality industrial data to predict failures, diagnose root causes, and prescribe corrective actions. Advanced analytics enhances workforce efficiency, democratizes complex domain knowledge and makes it easier to scale expertise.
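The edge-analytics idea described above can be sketched in a few lines: compare each new reading against a rolling baseline on the device itself, and only escalate when the deviation is statistically large. The window size and z-score threshold below are illustrative assumptions, not values from any vendor's analytics; real systems combine such statistical checks with the physics-based models Roaldson mentions.

```python
# Minimal sketch of edge-side anomaly detection for condition monitoring:
# flag readings that deviate sharply from a rolling baseline before the data
# ever leaves the device. Window size and threshold are illustrative.
from collections import deque
from statistics import mean, stdev

class EdgeMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of readings
        self.z_threshold = z_threshold

    def check(self, reading: float) -> bool:
        """Return True if the reading is anomalous vs. the rolling baseline."""
        if len(self.history) >= 10:  # require a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                return True  # anomalous: do not fold into the baseline
        self.history.append(reading)
        return False

monitor = EdgeMonitor()
for sample in [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05]:
    monitor.check(sample)     # healthy vibration samples build the baseline
print(monitor.check(5.0))     # spike far outside the baseline -> True
```

Running a check like this at the edge means only exceptions and summaries need cloud bandwidth, which is exactly the latency and bandwidth saving described above.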
CEE: What challenges need to be overcome to benefit from today’s condition monitoring technology?
Roaldson: The biggest hurdle is overcoming cultural resistance to change. Many teams are used to siloed practices and struggle to adopt new ways of working. Successful implementation of condition monitoring technologies requires a deliberate focus on organizational change: people, process and technology. Only after this foundation is established can organizations effectively address issues like data fragmentation, cybersecurity and operational inefficiencies.
Industrial environments generate vast amounts of data from hundreds or thousands of assets, often stored in disparate systems and organizational silos. Integrating and managing this data for actionable insights is complex. The volume of alarms can overwhelm operators, making it difficult to prioritize and respond effectively, especially with limited resources, while the loss of experienced workers and resource limitations can hinder effective asset health management and knowledge transfer.
Legacy asset health management often requires significant capital investment in infrastructure, software licenses and in-house expertise, which can pose problems. Scaling across the enterprise with cybersecurity is another challenge. ce
Suzanne Gill is editor, Control Engineering Europe, www.controlengeurope.com. Published by Control Engineering Europe, Nov. 9, 2025: https://www.controlengeurope.com/article/218732/Condition-monitoring-in-the-IIoT-era.aspx. Edited by Mark T. Hoske, editor-in-chief, Control Engineering.
Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.
Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.
Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.
Rick Ellis, Director, Audience Growth 303-246-1250, REllis@WTWHMedia.com
Custom reprints, print/electronic: Matt Claney, 216-860-5253, MClaney@WTWHMedia.com
Information: For a Media Kit or Editorial Calendar, go to https://www.controleng.com/advertise-with-us.
Letters to the editor: Please e-mail us your opinions to MHoske@WTWHMedia.com. Letters should include name, company, and address, and may be edited.
Lower energy costs. More performance.
Driving Efficiency: DR2C Permanent Magnet Motor
Engineered for ultra-premium efficiency, the DR2C Permanent Magnet Motor from SEW-EURODRIVE delivers the performance today’s operations require while reducing long-term energy costs. With up to 50% lower energy losses than standard IE3 motors, the DR2C reduces total cost of ownership (TCO) and enhances reliability. Built with Interior Permanent Magnet (IPM) technology, it offers high torque density in a compact, space-saving design, enabling smaller motor sizes without sacrificing power. Optimized for continuous duty and high-cycle start/stop operation, the DR2C performs efficiently across a wide speed range, making it ideal for conveyors, automated logistics, packaging lines and manufacturing systems.