
AWARE AND READY-TO-WEAR WIRELESS
ECONOMIC VALUE OF BETTER CONTROL
CONTROLLER OUTPUT EFFECTS ON DATA INTEGRITY

System virtualization lets users throw off rigid hardware and software, and gain the flexibility to automate, optimize, customize, control and scale up faster





























Part two of this three-part series reveals control technology concepts to reduce process variability and increase profitability, by R. Russell Rhinehart
9 EDITOR’S PAGE
The control loop in my neighborhood
How a snowstorm exposes that everything must run on an efficient control system
10 OTHER VOICES
Rewriting the rules of motion: 50 years of innovation in VSDs
How the low-voltage, variable speed drive (VSD) changed the way electric motors operate
12 ON THE BUS
Just doing what we always did
Field-device diagnostics can improve instrument reliability, but diagnostics alone won’t deliver reliability gains
14 WITHOUT WIRES
Wireless gauging redefines tank level measurement
Sensor networks and automated tank gauging deliver precise level, interface and inventory data
16 FLOW POINT
Tight coupling keys flowmeter performance
Accuracy is fundamentally driven by how directly and precisely the measurement principle is connected to flow
18 EXCLUSIVE
Gas analyzer combines laser, paramagnetic detection
Rosemount QX1000’s selectivity and accuracy ideal for cold-dry CEMS, CAMS, DeNOx/SCR and CCUS
38 LEADERS IN AUTOMATION
Recognizing innovation, leadership and excellence of automation technology suppliers
51 IN PROCESS
ISA names eight members as fellows
Valmet to automate two Vietnamese hydropower plants; Phoenix Contact launches Technology Alliance Program (TAP)
53 ROUNDUP
Ultrasonics clamp onto flowmeter market
Enable portable, non-invasive, hybrid capabilities everywhere
55 RESOURCES
Digital twins double up and down
Control’s monthly resources guide
56 CONTROL TALK
Data integrity series: controller output issues
Final control element problems create process variability that corrupts data
58 CONTROL REPORT
Lots of features
Self-examination sets the baloney filter just right



































































Endeavor Business Media, LLC
30 Burton Hills Blvd, Ste. 185 Nashville, TN 37215
800-547-7377
EXECUTIVE TEAM
CEO Chris Ferrell
COO Patrick Rains
CDO Jacquie Niemiec
CALO Tracy Kane
CMO Amanda Landsaw
EVP Manufacturing & Engineering Group Lisa Paonessa
VP of Content Strategy, Manufacturing & Engineering Group Robert Schoenberger
EDITORIAL TEAM
Editor in Chief
Len Vermillion
lvermillion@endeavorb2b.com
Executive Editor
Jim Montague
jmontague@endeavorb2b.com
Digital Editor Madison Ratcliff
mratcliff@endeavorb2b.com
Contributing Editor
John Rezabek
Columnists
Béla Lipták
Greg McMillan
Ian Verhappen
DESIGN & PRODUCTION TEAM
Art Director
Meg Fuschetti
Production Manager
Rita Fitzgerald
Ad Services Manager
Jennifer George
PUBLISHING TEAM
Group Editorial Director
Keith Larson
630-625-1129
klarson@endeavorb2b.com
Group Sales Director
Mitch Brian
208-521-5050
mbrian@endeavorb2b.com
Account Manager
Greg Zamin
704-256-5433
gzamin@endeavorb2b.com
Account Manager
Jeff Mylin
847-533-9789
jmylin@endeavorb2b.com
Subscriptions
Local: 847-559-7598
Toll free: 877-382-9187
Control@omeda.com
Jesse H. Neal Award Winner & Three Time Finalist
Two Time ASBPE Magazine of the Year Finalist
Dozens of ASBPE Excellence in Graphics and Editorial Excellence Awards
Five Time Winner Ozzie Awards for Graphics Excellence
How a snowstorm exposes that everything must run on an efficient control system
JUST LIKE some 200 million people across the U.S., I spent the last weekend of January dealing with winter storm Fern. I’m lucky because, unlike more serious victims of the elements, my biggest concern was how my neighbors and I were going to shovel so much snow from our little enclave in the Pittsburgh area. Incidentally, this was the event once every couple of years that makes us wonder why we cancelled our snow removal contractor in favor of putting on our boots and scarves and just doing it ourselves, but I digress.
Because it was a Sunday, with little else to do but sit around and watch the white stuff pile up, it occurred to me that we needed a good plan and some good data analytics. Following basic principles of process control, I knew we needed accurate input data (how many inches, rate of snowfall) and an optimized operation mixed with a bunch of safety procedures. Boy, could we have used a proper level sensor, but we made do with an old-fashioned wooden yardstick from circa the 1970s, I presume.
Then there were our available optimization options. Should we go out now, while the heavy stuff still fell for a bit of preventive maintenance? Should we shovel the snow while it was still fresh, soft and easier to lift, but the wind was brisk and visibility was limited? A proper data analysis probably would have given us the information to make the most efficient decision for each case.
Of course, we had to mind the temperature. Was it futile or even too dangerously cold to shovel at the most optimal time of the snowfall? Perhaps, we should simply wait and react to the “damage,” i.e., buildup, but stay safe from frostbite?
Eventually, there would be several people working together—in quite a haphazard way. Where was our DCS? With so much snow and nowhere to put it all, we needed an efficiently designed flow control process to get the snow from the sidewalks and parking areas to a depository, without making things worse by clearing one car only to block another.
And don’t forget the feedback loops; everyone was reminded not to overexert themselves. After all, we’re a collection of legacy parts (people) developed in different eras. While some of the newer components moved more quickly, some of us needed to work at a slower, steadier pace, adding disproportionate stress to the newer, moving parts.
After the storm, I sat back, opened a winter ale, and thought to myself, everything really is a control system at work, isn’t it?


LEN VERMILLION Editor-in-Chief lvermillion@endeavorb2b.com

Business Line Manager, Low Voltage Drives, ABB Motion High Power
How the low-voltage, variable speed drive (VSD) changed the way electric motors operate
HALF A CENTURY ago, a defining innovation changed the course of modern industry. It didn’t make headlines at the time, yet its impact was felt across every sector that depends on motion: manufacturing, infrastructure, food and beverage and transportation, to name a few. The low-voltage, variable speed drive (VSD), first commercialized in the 1970s by ABB’s (abb.com) predecessor Strömberg, heralded a new era in industrial efficiency.
Before the VSD, electric motors operated at a single, fixed speed. Controlling flow or output was only possible with mechanical throttles, such as dampers and valves, resulting in large amounts of wasted energy. Then came the realization that if you could adjust the motor’s speed electronically, you could align its power precisely with demand. That idea, which seems so obvious in hindsight, became the foundation of the VSD.
The first installation of a low-voltage VSD was at the Karihaara sawmill in northern Finland in 1975. It was there that engineers realized electronic motor control could achieve the same mechanical output while using much less energy. Soon after, the technology found its way into Helsinki’s metro system, where passengers enjoyed smoother stops and starts and quieter journeys, all thanks to the same principle of variable speed control.
Designed under the leadership of engineer Martti Harmoinen, those early systems transformed industrial operations by uniting electrical precision and mechanical performance. The innovation was recognized at home and abroad, quickly taking hold across industries all over the world. VSDs pump and purify water, move materials on factory floors, control airflows in HVAC systems, and power compressors in production lines. What began as a specialist innovation became a core part of everyday motion and energy efficiency.

The low-voltage variable speed drive (VSD) was first commercialized in the 1970s by ABB’s predecessor Strömberg.
The invisible enabler
Today, roughly 45% of the world’s electricity (bit.ly/4b4E5Da) powers electric motors. Yet, fewer than one in four (bit.ly/4pLV5BR) of those motors are currently equipped with variable speed control. This means that an enormous share of global energy use could still be optimized by using technology that already exists. VSDs allow motors to draw only the power they need, reducing consumption by up to 30% on average—or up to 80% in some applications. They also curb mechanical wear by easing motors gently into operation and controlling load precisely, extending equipment life in the process.
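Those savings figures follow from basic pump and fan physics: for centrifugal loads, shaft power scales roughly with the cube of speed (the affinity laws), so a modest speed reduction yields an outsized energy saving compared with throttling at full speed. A minimal sketch; the cube-law relation is standard, and the 80% speed figure is only an illustration:

```python
def pump_power_fraction(speed_fraction):
    """Affinity laws for centrifugal pumps and fans: flow ~ speed,
    pressure ~ speed^2, power ~ speed^3. A drive running a pump at
    80% speed therefore draws only about 51% of full-speed power."""
    return speed_fraction ** 3

saving = 1.0 - pump_power_fraction(0.8)
print(f"{saving:.0%} energy saving at 80% speed")  # prints "49% energy saving at 80% speed"
```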
Decarbonization, one kilowatt at a time
Energy efficiency may not always steal headlines in conversations around climate or environmental goals, but it’s one of the smartest routes to cutting emissions. Integrating drives is something that can happen immediately, with the support of existing infrastructure and proven ROI.
If Europe alone applied drives more broadly across pumps, fans and compressors, electricity use could fall by 140 terawatt-hours annually. For scale, that’s the equivalent of powering 5 million homes and avoiding 38 million tonnes of CO2 emissions each year. These gains are within reach, yet they’re being left on the table.
Sustainability and productivity aren’t at odds. OEMs and users who adopt drives often find that what’s good for the planet is equally good for business. Lower electricity bills and equipment that stands the test of time define operational efficiency as much as positive environmental impact.
But the story of the VSD shouldn’t be stuck in the past. It’s a powerful example of how innovation continues to evolve and adapt. Modern drives are intelligent devices that can integrate seamlessly with digital systems, fully equipped to meet the demands of Industry 4.0.
In fact, digitalization has become a key component of VSDs. ABB’s advances in this area include innovative tools that allow ease of selection and ordering, efficient engineering and customization, straightforward commissioning, optimized performance including ABB Crealizer open software for motors and drives, and fast recovery and prevention of unplanned shutdowns. These digital tools can improve energy efficiency and productivity to minimize total cost of ownership.
As we commemorate this milestone, we also look to a future where every piece of the motion puzzle is improving and evolving. The world’s electrification journey is booming, from renewable energy generation and storage to the electrification of transportation and heavy industries. In each area, drives do the work that enables progress.
In future decades, we expect drives to push boundaries even further. They might become more compact, intelligent and connected within complete motor systems. The next generation of innovators will build on this foundation, just as their predecessors did at the Karihaara sawmill. The principle remains the same: to turn every movement into progress. VSDs have become essential for efficient motion control and process optimization.

JOHN REZABEK
Contributing Editor
JRezabek@ashland.com
Field-device diagnostics can improve instrument reliability, but diagnostics alone won’t deliver reliability gains
JAKE SURVEYED the menagerie of dissimilar devices sharing his home network: two doorbells, new microwave, stove and dishwasher, Costco weather station, cable boxes, thermostat, his family’s cell phones, and a few laptops and PCs. Once he input the SSID (network name) and password, these devices connected, and appeared to function happily despite different protocols and purposes.
Perhaps you remember Jake from last year, when he had a compelling vision of digitally integrated field devices turbocharging his instrument reliability goals (controlglobal.com/toolatesmart). Instrumentation was always “guilty until proven innocent” for any number of surprise upsets, going off spec or even causing a spurious trip. In his vision, his team would see the diagnostics of every device in one place, and hopefully foresee impending issues—and fix them—before they impacted their production processes.
Some devices had few diagnostics, and most were meaningless until the device was obviously broken. Fieldbus devices often lacked capabilities beyond the HART version, which wasn’t all that useful for preemptive fault detection.
Naturally, the bandwidth and speed offered by Ethernet-based solutions had great promise to lift Jake from his trough of despair. With the new Ethernet Advanced Physical Layer (APL), Jake could potentially communicate with field devices at a couple megabaud or more. If an intelligent device management appliance could sample a pressure signal at even 40 Hz, the system could assess for process noise attenuation and deviations from normal, alerting him to possible process connection issues like freeze-ups or plugging in its impulse lines. Ethernet supports parallel communication, so like Jake’s microwave, stove and dishwasher at home, the signals can coexist with other transactions on the same media, like his phones and computers.
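The noise-attenuation check described above can be sketched in a few lines: compare the standard deviation of a fast-sampled pressure signal against a healthy baseline, since a plugging or freezing impulse line damps process noise long before the reading itself drifts. The function names, the 0.3 threshold and the synthetic data are illustrative assumptions, not any vendor's diagnostic:

```python
import statistics

def noise_ratio(samples, baseline_std):
    """Ratio of current signal noise to the healthy-transmitter baseline.
    A plugged or frozen impulse line attenuates process noise, so the
    ratio drops well below 1.0."""
    return statistics.pstdev(samples) / baseline_std

def check_impulse_line(samples, baseline_std, threshold=0.3):
    """Flag a possible plugged/frozen impulse line (threshold is illustrative)."""
    if noise_ratio(samples, baseline_std) < threshold:
        return "suspect plugged line"
    return "ok"

# One second of 40 Hz samples: a healthy (noisy) signal vs. a damped one
healthy = [100.0 + 0.5 * ((i * 7) % 5 - 2) for i in range(40)]
damped = [100.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(40)]
baseline = statistics.pstdev(healthy)

print(check_impulse_line(healthy, baseline))  # prints "ok"
print(check_impulse_line(damped, baseline))   # prints "suspect plugged line"
```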
It’s not hard to imagine network architectures where asset management tools are peers with controllers and operator interface workstations. But it’s not likely to be an easy transition from the installed control system—unless you employ Profibus/Profinet extensively, which lends itself to APL quite nicely. However, Jake had a diverse installed base with relatively few Profibus devices. Most process distributed control systems (DCSs) use a customized, proprietary version of Ethernet, and make special accommodations to ensure priority messaging for alarming, near-determinism for communications with PID controller instances, and near-real-time updating of process graphics.
I think layering parallel, intervening device communications on such proprietary networks is unlikely to be embraced. Instead, field Ethernet will be hosted on dedicated interface cards, as protocols like Modbus TCP, EtherNet/IP and Profinet are today. The DCS and its I/O subsystem can remain the conduit through which diagnostics must pass.
Jake will also face a difficult brownfield problem. The 1990s-era fieldbus pioneers and dreamers envisioned a gradual path to adopting fieldbus, which is why ordinary twisted-pair cable was an approved physical layer. Jake has a higher hurdle to invest in APL infrastructure, and after doing so, he will still have primarily conventional, HART or legacy fieldbus devices.
The challenge of carving out funding to invest in an instrument reliability project may be the highest hurdle, even if there’s a visionary with six-figure approval authority, who’s jazzed about Industrie 4.0 or the Industrial Internet of Things (IIoT). The inertia of “just doing what we’ve always done” is powerful. Not to mention, there are competing paths to providing preemptive fault detection, spanning more than just the measurement and control system.





IAN VERHAPPEN
Solutions Architect Willowglen Systems Ian.Verhappen@ willowglensystems.com
Sensor networks and automated tank gauging deliver precise level, interface and inventory data
LARGE TANKS and reservoirs typically have large cross-sectional areas compared to the rate at which fluids arrive or are withdrawn. Their levels tend to change slowly, and because of their size, they normally cover a large area, such as a tank farm. This combination makes them well-suited for wireless sensor networks.
Because of their large cross-sectional areas, accuracy is critical. Digital communications via buses and wireless sensor networks support the need for accuracy by removing the analog-to-digital conversions associated with using analog (4-20 mA) measurements. Though a basic process control system (BPCS) can control tank levels and inventory, most tank farms, especially for custody transfer, rely on automated tank gauging (ATG) systems to not only control levels, but also aid tasks such as batch control and storage planning. These tasks must follow standards and regulations for bulk liquid storage operations.
Key components and technologies of a fluid-storage management system include:
• Sensors that measure level, temperature and pressure in each vessel. They use radar, ultrasonic, capacitance, differential pressure, nuclear, electromechanical, load cell and magnetostrictive technologies.
• Connectivity for data transmission from sensors to a data-gathering system and control platform.
• Control platform that not only uses a BPCS for routine level control, but also specialized ATG systems, which have the added advantage of tracking oil movement and operations, inventory control, custody transfer and volume reconciliation.
• Integration that’s necessary for gathering liquid product data from lab samples, such as density and water content.
• Safety features such as leak detection (interstitial, oil/water), overfill alarms and intrinsically safe (C1D1) sensors, which are also necessary for hazardous areas.
Similar to a BPCS, an ATG system is the primary, independent protection layer that continuously prevents tank overfills. When an ATG system functions correctly, it reduces the likelihood of an overfill, so other protection layers won’t need to be activated.
Tank overfills are one of the leading causes of serious safety incidents at bulk liquid storage facilities, but they don’t occur randomly. They’re predictable and preventable. ATG systems help by providing accurate, compensated measurements for the entire facility. Despite this, depending on the process, SIL-rated overfill protection may still be required. Accurately determining the usable volume in the storage vessel requires knowing the interface location.
Interface level measurement is more challenging, particularly between two liquids, with the oil-water interface being the most common. Fortunately, in most cases, this interface is clean, meaning there’s a distinct, sharp boundary between the oil and water layers. In some cases, fine oil-wet solids (clays, asphaltenes) and indigenous stabilizing components in the oil (resins, waxes) accumulate at the interface. This inhibits water droplets from coalescing, resulting in an emulsion or “rag layer” that’s exacerbated by upstream turbulence, which encourages these challenging components to mingle. I’ve also experienced rag layers because of incorrect (too high) dosing of chemicals meant to clean up the problem. More isn’t always better.
Measuring the interface between liquids with a rag layer is challenging because this mixed layer varies in density. This confuses sensors looking for a clean difference in the property being measured (dielectric, density, viscosity) between the layers. Fortunately, this problem can be overcome with an accurate interface determination, which is possible by judiciously selecting the measurement technology or a combination.




JESSE YODER Founder & President Flow Research Inc.
jesse@flowresearch.com
Accuracy is fundamentally driven by how directly and precisely the measurement principle is connected to flow
FLOWMETERS COME IN many types: Coriolis, magnetic, ultrasonic, vortex, differential pressure, turbine and variable area. Each measures flow in a different way, and comes with its own accuracy specification. However, the question that underlies all this is: why are some flowmeters more accurate than others?
I believe the most important factor is that the most accurate flowmeters have a close connection between the operating principle of the flowmeter and the variables it depends on to generate its output. When those variables are few and can be precisely determined, the meter is tightly coupled. A flowmeter is loosely coupled when its output is influenced by variables whose values aren’t precisely determined by the operating principle.
Some flowmeters are more accurate than others because their operating principle is tightly coupled to mass or volumetric flow. Others require inference, modeling or secondary variables that introduce uncertainty because they can’t be measured with precision. So, what is tight coupling, and why is it a key to understanding flowmeter accuracy?

Many flowmeter discussions focus on electrode materials, bluff body geometry, signal processing, Reynolds numbers, transducer signals or installation effects. Beneath all of that lies a simpler, more fundamental truth: flowmeters differ in accuracy in part because they differ in how much the output value depends on precisely measured values.
The relationship between coupling and accuracy can be viewed as a continuum. In general, tightly coupled flowmeters tend to support higher accuracy, moderately coupled meters tend to support moderate accuracy, and loosely coupled meters tend to support lower accuracy. The concept of tight vs. loose coupling can be most easily seen by looking at examples.
Coriolis flowmeters have tight coupling. They measure mass via the deflection of a vibrating tube caused by inertial mass. Fluid particles experience inertial forces due to the combination of their linear flow through the tubes and the oscillatory motion of those tubes. These inertial forces cause a secondary twisting motion in the vibrating tubes. Inlet and outlet sensors detect the phase shift caused by the induced twisting motion. The phase difference (ΔT) detected by the inlet and outlet sensors is directly proportional to mass flowrate. There are few intervening variables.
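That direct proportionality is essentially the whole calibration model, which is what tight coupling means in practice. A one-line sketch; the calibration factor K here is hypothetical, since a real meter's factor comes from factory calibration:

```python
# Hypothetical calibration factor: kg/s of mass flow per microsecond of
# phase shift. Real values come from the meter's factory calibration.
K = 2.5

def mass_flow_kg_s(delta_t_us):
    """Coriolis principle: mass flowrate is directly proportional to the
    phase difference (delta-T) between the inlet and outlet sensors."""
    return K * delta_t_us

print(mass_flow_kg_s(0.8))  # prints 2.0 (kg/s for a 0.8 microsecond shift)
```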
Positive displacement meters have tight coupling. Each fill-and-sweep cycle displaces a known volume. Almost no assumptions are involved. Positive displacement meters measure actual volume, although as mechanical meters they’re subject to wear. Their accuracy can also be affected by variations in temperature and pressure, and by entrained air or gas in the fluid. These meters can cause pressure drops, especially at high flow rates. Despite these known issues with their operation, their output depends on few if any imprecisely defined variables when they’re working properly.
Magnetic flowmeters measure velocity via Faraday’s Law, which states that when a conductive liquid flows through a magnetic field generated by the meter, it induces a voltage signal proportional to the fluid’s velocity. Using velocity, we can calculate that volumetric flow = velocity × pipe area.
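The chain from induced voltage to volumetric flow can be written out directly, which shows how few steps separate the measured quantity from the output. The numbers below are illustrative, not from any particular meter:

```python
import math

def velocity_from_faraday(voltage_v, b_field_t, electrode_spacing_m):
    """Faraday's law for a magmeter: induced voltage U = B * d * v,
    so fluid velocity v = U / (B * d). Field strength and electrode
    spacing are fixed by the meter's construction."""
    return voltage_v / (b_field_t * electrode_spacing_m)

def volumetric_flow_m3_s(velocity_m_s, pipe_diameter_m):
    """Volumetric flow = velocity x pipe cross-sectional area."""
    area = math.pi * (pipe_diameter_m / 2) ** 2
    return velocity_m_s * area

# Illustrative values: 2 mV induced across 0.1 m electrodes in a 0.01 T field
v = velocity_from_faraday(voltage_v=0.002, b_field_t=0.01, electrode_spacing_m=0.1)
q = volumetric_flow_m3_s(v, pipe_diameter_m=0.1)
print(round(v, 3))  # prints 2.0 (m/s)
```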
Magnetic flowmeters require conductivity but few secondary factors. They’re very stable if the pipe is full and the diameter is known. Magnetic flowmeters rely on magnetic field strength, electrode spacing and induced voltage to determine flow velocity. These are precisely determined variables.
Magnetic flowmeters precisely determine electromagnetic interactions between fluids and their measuring field, but their output remains influenced by velocity profile and conductivity distributions that aren’t precisely determined by the operating principle. Factors that affect velocity profile include upstream piping, elbows, valves and reducers. Swirl and asymmetry shift the effective averaging. Partially conductive fluids or coatings change current paths. Electrode fouling alters the effective measurement volume. Particulate matter such as sand can damage or erode the electrodes, and can cause uneven flow. Air bubbles can disrupt the meter’s conductivity. Because these variables aren’t precisely determined by the operating principle, magnetic flowmeters are moderately coupled.
Vortex flowmeters are also moderately coupled. They operate by measuring the frequency of vortices generated downstream of a bluff body—a phenomenon that depends on velocity, but is influenced by flow profile, pipe geometry, Reynolds number, bluff body shape, vibration and installation conditions. Vortex meters just count vortices without regard to their size, strength and coherence. They have looser coupling than Coriolis meters because the accuracy of vortex meters depends on a variety of imprecisely determinable conditions. This explains their lower accuracy under real-world conditions. Temperature and pressure readings are required for mass flow measurement, introducing two more variables.
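For contrast with the Coriolis case, the vortex relationship runs through the Strouhal number St (shedding frequency f = St * v / d), and how constant St really stays over the operating range is exactly the kind of imprecisely determined variable described above. A sketch with illustrative values:

```python
import math

ST = 0.2  # Strouhal number; assumed constant here, but in reality it
          # drifts with Reynolds number, flow profile and vibration

def vortex_velocity_m_s(shedding_freq_hz, bluff_body_width_m):
    """Vortex shedding: f = St * v / d, so v = f * d / St."""
    return shedding_freq_hz * bluff_body_width_m / ST

# 50 Hz shedding off a 0.02 m bluff body in a 0.1 m pipe (illustrative)
v = vortex_velocity_m_s(shedding_freq_hz=50.0, bluff_body_width_m=0.02)
q = v * math.pi * (0.1 / 2) ** 2  # volumetric flow, m^3/s
print(round(v, 3))  # prints 5.0 (m/s)
```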
The operating principle for thermal flowmeters requires heat transfer from the sensor to the flowing fluid. Heat transfer is proportional to mass flow, but the reading depends on fluid properties whose value isn’t determined by the principle itself. For example, heat capacity depends on gas composition and varies with temperature and pressure. Thermal conductivity also varies with gas composition. Heat transfer varies significantly with laminar vs. turbulent flow. While thermal flowmeters are good for clean uniform gases, they don’t perform as well for varying gas mixtures. Most thermal meters don’t perform well on liquids. Flowrate is inferred from heat-transfer behavior, which itself depends on multiple fluid properties that aren’t precisely determined from the thermal flow principle. As a result, thermal flowmeters are loosely coupled.
Variable area flowmeters have loose coupling. The float position is affected by viscosity, density, friction and user interpretation. Manual reading introduces additional looseness. Even though some suppliers have introduced transmitters to read the height of the fluid, the connection between fluid height and flowrate remains loose.
This analysis can be performed for any flowmeter. The ones described here are a representative sample. In general, a flowmeter is tightly coupled when its reading depends on few variables, and these variables can be determined with a high degree of certainty. The coupling becomes looser as the flow reading depends on more variables, and these variables can’t be measured precisely. Values such as temperature and pressure that are read live, and so reflect present conditions, are preferable to values read from a table.
Proper calibration, favorable flow profile, removing impurities from fluid, and proper installation can each improve the performance of any meter. However, the principle of operation of certain meters, such as vortex and thermal, makes it unlikely that they’ll achieve the accuracy of Coriolis and positive displacement meters.
In addition to the type of meter, fluid type plays a major role in flowmeter performance and accuracy. Even Coriolis meters can’t achieve the same high accuracy with gas as they do with liquids, while vortex meters perform well on steam. Both vortex and differential pressure (DP) flowmeters use temperature and pressure values, along with volumetric flow, to determine mass flow. Multivariable vortex and DP flowmeters incorporate temperature and pressure sensors to provide an onboard way to compute mass flow.
One approach that suppliers can take to improve performance is to identify any imprecisely determined variables that affect a flowmeter’s performance, and try to make them more precise. This could involve replacing a thermistor with an RTD, improving the pressure reading, or removing impurities from the flowstream. It could also involve adding diagnostics to the flowmeter. It’s important to keep in mind that not all applications require custody-transfer accuracy, and identifying the variables that help determine flow output can often lead to better performance.
Rosemount QX1000’s selectivity and accuracy ideal for cold-dry CEMS, CAMS, DeNOx/SCR and CCUS
WHAT’S BETTER than getting several devices to merely cooperate? Totally integrating them into one solution that can perform all their former jobs faster and with unprecedented efficiency.
For example, Emerson’s new Rosemount QX1000 continuous gas analyzer merges paramagnetic detection for oxygen (O2) with quantum-cascade laser (QCL), direct IR-absorption spectroscopy for other gases. This hybrid, modular analyzer is ideal for cold-dry continuous emissions monitoring systems (CEMS) required for facilities with gas-to-atmosphere-emitting processes. But its flexible, single-system solution is also suitable for continuous ambient monitoring systems (CAMS), DeNOx/selective catalytic reduction (SCR), and carbon-capture utilization and storage (CCUS).
“The big difference is QX1000 is the first analyzer that combines laser spectroscopy for gases like NO2 and SO2 with paramagnetic O2 measurement,” says Dr. Beth Livingstone, global product manager for process gas analyzers at Emerson. “It replaces two or more devices previously required to make these measurements with a single, streamlined, harmonized, hybrid platform that’s a best fit for many applications.”
QX1000’s paramagnetic O2 module measures oxygen level percentages in sample gases, while up to three QCL detectors monitor concentrations of other gases in the stream, including CO, CO2, NO, NO2 and SO2. Different configurations can detect up to four or five of these regulatory gases, and it delivers continuous, real-time results as parts per million (ppm) or percent-range measurements. It can also measure other gases, such as CH4 and N2O. Thanks to its high selectivity and accuracy, QX1000’s measurements are ideal for CEMS in multiple process industries. In addition, QX1000 provides:
J Repeatability of ±1 % of measurement or limit of detection (LOD), whichever is greater;
J Inherent linear response from direct absorption spectroscopy;
J LOD that’s typically less than or equal to 1% of full range, depending on application;
J Greater than 0.1 Hz measurement rate;
J 5-50 °C (41-122 °F) ambient operating temperature and sample gas temperature ranges;
J Maximum 80% relative humidity for temperatures up to 88 °F (31 °C), and decreasing linearly to 20% relative humidity at 122 °F (50 °C); and
J 100-240 VAC (50 to 60 Hz) power supply.

Rosemount QX1000 continuous gas analyzer
Source: Emerson
“QX1000’s laser technologies have also been redesigned to be more cost effective,” explains Livingstone. “This will enable it to better serve in essential applications and compete in global markets, which are being tasked with new regulations and added sustainability challenges.”
Livingstone reports QX1000’s robust design and low-maintenance requirements reduce its lifecycle costs and potential downtime. Its reliability is likewise enhanced by eliminating moving parts typically prone to failure and frequent replacement. Its reduction of consumable technologies is especially useful in CEMS, where ongoing costs can be a barrier to implementation. By reducing system downtime and maintenance needs, QX1000 also helps users maintain continuous compliance with regulations.
“QX1000 combines proven and reliable technologies with low failure rates, as well as low maintenance and service requirements, and accomplishes all these goals at less cost,” adds Livingstone. “Being able to install one hybrid gas analyzer instead of several traditional ones means more flexibility for brownfield and greenfield applications alike. It’s also easier to coordinate measurements in one device, and it’s simpler to calibrate and validate. It’s also launching with a web-based, cyber-secure HMI and Ethernet, so users can set up, configure, monitor and work with it using their PCs and laptops.”
In addition, QX1000’s cold-dry capability simplifies measurements and reduces maintenance. Its sample-conditioning system transports gas extracted from a user’s process to the analyzer via a thermoelectric chiller. This reduces its temperature to about 4 °C (39 °F), so most moisture condenses and drops out. This lets QX1000 easily integrate into existing plant infrastructures. It can also deploy as part of an overall, Emerson-based solution, such as sample-conditioning systems.
QX1000 is the first member of Emerson’s new QX portfolio. It’s expected to be joined soon by an explosion-proof version for hazardous applications such as natural gas facilities.
For more information, visit www.emerson.com/rosemountQX1000




by Jim Montague


System virtualization lets users throw off rigid hardware and software, and gain the flexibility to automate, optimize, customize, control and scale up faster


IT’S NOT ABOUT pretty pictures. It’s about action, so don’t get distracted. After leaping off clipboards, paper chart recorders and other manual sources a few decades ago, most process automation and control data has come to users through their human-machine interface/supervisory control and data acquisition (HMI/SCADA) screens and software for at least 25 years. It’s just part of the scenery on the long, winding, hardware-turning-into-software highway.
More recently, software’s protean and portable nature on servers, in the cloud or elsewhere is allowing developers to build containerized, low-code/no-code and artificial intelligence versions, which promise to let process-industry users move beyond monitoring, analysis and prediction, and finally turn the corner on gaining genuine, software-based control and actuation. So, if users are willing to accept some short-term unfamiliarity, discomfort and readjustment, it’s likely they can achieve some huge, long-term improvements in speed, efficiency, adjustability, customization, optimization, productivity, scale-up and profitability.
“Change comes slowly to the process control world. We aren’t yet seeing much of a shift in control technologies to the cloud, but we are seeing more process monitoring, historization and analysis slowly moving to the cloud,” says Heath Stephens, PE, automation solutions director at Hargrove Controls & Automation (www.hargrove-ca.com) in Mobile, Ala., a division of Hargrove Engineers & Constructors, and a certified member of the Control System Integrators Association (CSIA, www.controlsys.org). “We’re also seeing more software-based control technologies. In fact, we see much less resistance, or even questions, about whether a technology is software- or hardware-based these days. This is partly due to the improved robustness of Windows and Linux platforms and embedded PC hardware.”
Stephens reports Hargrove has been virtualizing control system servers and operator stations for years, condensing server hardware, and using thin client workstations. “Now virtualization is coming to system controllers. Virtual controllers that used to be used for testing and development only are now rolling out to the production floor,” adds Stephens. “Emerson recently unveiled its new virtual controller for its DeltaV DCS, and I think other vendors will be offering this type of product in the future.”
Dylan Lane, digital manufacturing systems manager at George T. Hall (GTH, www.georgethall.com), adds, “We’ve been dealing with system virtualization during the past decade. We’ve moved from the platform side of PLC hardware with SCADA software, such as Wonderware, Pro-face and Ignition, and shifted to support and encourage virtualization of SCADA functions in software-based Docker containers. Process control and automation lives in a niche, but we’re steadily moving toward virtualization and supporting low-code/no-code applications, which let non-controls people shift PLC functions and other control tasks to mainstream, IT-based, object-oriented programs.” Located in Anaheim, Calif., GTH is also a CSIA-certified system integrator.

Lane reports system virtualization can be understood as an attempt to modernize industrial automation, which often lags behind other technical disciplines because its practitioners don’t want to be on the bleeding edge of change, due mainly to their core mission of maintaining safety. However, even though it’s taking a while, Lane observes that process-industry organizations are catching up, and adding more applications to cloud-computing services enhanced by artificial intelligence (AI) and its promise of epic efficiencies and savings.
“Usually, upgrades to hardware in traditional software are costly and time-consuming, and process automation had to deal with these and other limits,” explains Lane. “So why would you bring a 30-year-old machine up to integrate with the cloud? Well, easier upgrades are another potential advantage of system virtualization. Plus, users can do on-premises virtualization from hardware. If an initial project goes well, then users can scale similar improvements more easily, change software platforms without changing hardware, and even add new CPUs, RAM and power more easily. And, they can upgrade without adding as many new and costly products and accessories. For example, middleware translates between other software packages, and translates and runs data-pickup APIs for users. However, it’s a lot easier to add middleware layers if they’re on a virtual system because users no longer have to integrate them with physical nodes and other hardware.”
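The middleware role Lane describes, sitting between systems and translating their data for each other, can be sketched minimally. The two source schemas below are hypothetical stand-ins for a PLC-style payload and a cloud-style payload; the point is only that one thin translation layer yields a single canonical record.

```python
import json

# Hypothetical raw payloads from two different source systems the
# middleware must reconcile (field names and units are illustrative).
plc_payload = {"TagName": "TT-101", "Val": 176.0, "Unit": "degF"}
cloud_payload = {"id": "TT-102", "value": 80.2, "units": "C"}

def to_canonical(payload):
    """Translate either source schema into one canonical record (degC)."""
    if "TagName" in payload:                       # PLC-style schema
        temp_c = (payload["Val"] - 32.0) * 5.0 / 9.0
        return {"tag": payload["TagName"], "value_c": round(temp_c, 2)}
    return {"tag": payload["id"], "value_c": payload["value"]}

records = [to_canonical(p) for p in (plc_payload, cloud_payload)]
print(json.dumps(records))
```

Hosting such a translator in a container or virtual machine, rather than on a dedicated box per data source, is exactly the convenience Lane points to.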
Logically, the primary attraction of system virtualization is it lets users escape the confines of PLCs and other traditionally rigid hardware and software, communicate more freely among layers that couldn’t interact directly before, and gain new capabilities, including beginning to integrate some AI tools.
For instance, Borouge Plc (www.borouge.com) reported Jan. 7 that it’s completed an AI-powered proof-of-concept (PoC) in collaboration with Honeywell (process.honeywell.com) for autonomous operations at its polyolefin facility at Al Ruwais Industrial City, United Arab Emirates (UAE). The project supports the company’s plan to enhance operations, strengthen competitiveness, and help Abu Dhabi National Oil Co. (www.adnoc.ae) become the most AI-enabled energy company. Borouge is a joint venture between ADNOC and Borealis.
Starting with trials last year, the PoC progressed toward developing an AI-driven control room for full-scale, real-time operations. Conducted in a live production environment, the PoC showed AI can increase efficiency up to 20%, cut downtime by 20%, and
improve performance while reducing operating costs up to 15%. The technology also enhances process safety, and supports more sustainable operations by reducing energy consumption and associated emissions (Figure 1).
Just as end-users are encouraged to explore and practice with new technologies in small, non-critical experiments before deploying them, this advice goes double for system virtualization, AI and other software-based solutions.
“We’re eager to deploy containerized applications for our clients, but so far, we’ve only done it internally. While it has several advantages, this technology leans more into an IT skillset than some of our operations clients are comfortable with,” says Hargrove’s Stephens. “We’ve used JSON for web applications for process historians and MES applications. RESTful APIs have been very useful for data interfacing. While there are many other helpful technologies, we still use OPC UA, EtherNet/IP and Modbus TCP for most data transfers.”
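The JSON-over-REST data interfacing Stephens mentions can be sketched with the standard library alone. The endpoint path, field names and tag names below are hypothetical; the example only assembles the request so the shape of such an interface is visible, and it deliberately stops short of transmitting anything.

```python
import json
import urllib.request

def build_historian_request(base_url, batch):
    """Assemble a RESTful POST that ships a batch of historian samples as
    JSON. The endpoint path and field names are hypothetical."""
    body = json.dumps({"samples": batch}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/v1/samples",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

batch = [
    {"tag": "FIC-201.PV", "ts": "2025-01-07T10:15:00Z", "value": 41.7},
    {"tag": "FIC-201.SP", "ts": "2025-01-07T10:15:00Z", "value": 42.0},
]
req = build_historian_request("https://historian.example.com", batch)
# urllib.request.urlopen(req)  # would actually transmit; omitted here
```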
While it’s mostly using AI to streamline internal processes, Hargrove has used several AI/machine learning (ML) applications to improve process control
and predictive reliability for clients. However, most of these technologies are closer to simple ML tools, rather than advanced AI agents.
“The reasons for virtualizing systems are to reduce hardware investments and ongoing maintenance of multiple physical PCs. Virtualization has become the norm for any medium to large control system. The benefits are well proven through years of industry deployments,” adds Stephens. “Agentic AI is a much newer technology, and it’s promising for fully autonomous AI decision-making and action. To date, few of these applications have advanced much past the pilot stage, and their real-world benefits are still being evaluated. I expect more development in the next few years.”
Stephens reports a massive mental shift is required to move to AI-powered and digitalized production. Without it, new AI technologies will only gradually improve productivity and reliability. “A true shift will require re-envisioning our work processes and how we utilize our human workforce alongside new technology. We also have to accept that moving from a binary, pre-programmed world to one that’s self-organizing and intelligent comes with challenges and uncertainties. When we ask people to

make predictions, we understand that even skilled and experienced people will sometimes be wrong. We need to accept the same of our AI predictions. An AI prediction tool will be valuable if it makes profitable decisions on the whole, not based on perfection.”
Most water/wastewater utilities serve relatively well-defined municipalities, so it usually isn’t too difficult for their operators to monitor, manage and automate their processes and assets. However, many of their counterparts are scattered over much larger, rural areas, with few residents and correspondingly smaller budgets and staffs. Though many deploy radios and uncrewed processes where possible and affordable, they also rely heavily on in-person inspections and manual data gathering—so some simpler data processing could be a big help.
For instance, LangeTech Inc. (langetech.com) is helping upgrade a rural water utility in the Midwest that supplies wholesale water to residences and farms in 16 small communities. With seven wells, two potable water treatment plants, and approximately 200 field devices spread across lift stations, boosters, tanks and treatment facilities, its few staff members face daily challenges due to distance and scale.
The utility previously relied on proprietary controls that were difficult to maintain and required specialized knowledge. As equipment aged and staffing remained lean, its leadership recognized the need for a more sustainable approach that would simplify training, improve visibility, and reduce dependence on proprietary platforms. The utility sought help from LangeTech, which is located in Chesterfield, Mo., near St. Louis. It’s part of SJE Inc.’s engineered controls division, which provides consulting, programming, engineering and system integration services. Both are CSIA members.
The utility’s new network and control system includes 20 remote terminal

Figure 2: To model hundreds of locally stored machine tags on the 48 bioreactors in its process development (PD) labs, Catalent implemented HighByte’s Intelligence Hub software to create a bioreactor data model that can be deployed to any number of sites in about an hour. It also added HighByte SQL connector to extract data from inline equipment without off-the-shelf connectors, and uses Intelligence Hub to curate, orchestrate and model data at the edge before publishing into HiveMQ’s message broker. This easily contextualizes bioreactor data, and makes it available for analytics and remote monitoring, and saves Catalent’s teams hundreds of hours by eliminating manual tasks.
units (RTUs) with PLCs installed at wells, booster stations, elevated tanks and pressure-reducing stations across its coverage area. Endress+Hauser’s flow, pressure and temperature instrumentation provides real-time process data; Rockwell Automation’s CompactLogix and ControlLogix PLCs manage controls; and Stratus’ ztC Edge server runs Rockwell’s FactoryTalk Optix software as the central SCADA platform. Communications across the geographically dispersed system are handled by a combination of Endress+Hauser’s wireless gateways, secure cellular networks, and fiber-optic connections that provide flexibility and redundancy for the rural infrastructure. LangeTech is also implementing Endress+Hauser’s Netilion software to collect detailed instrumentation diagnostics and asset data from remote sites. It will be securely delivered to the utility’s servers and integrated with Rockwell’s Fiix asset-performance management software.
“When an anomaly shows up in Netilion, Fiix can automatically report on it,” says Mike Morris, sales, marketing and business development director
at LangeTech. “Operators can see what’s happening, understand the issue, and respond with the right corrective action.”
Netilion also centralizes device diagnostics, stores configuration files, and simplifies regulatory reporting, including EPA documentation. Using Netilion Connect’s API, data is shared seamlessly with Fiix, allowing maintenance issues to be identified before they turn into failures. “Operators can now see a problem developing at a site 30 miles away, and know what tools or replacement parts to bring before they leave,” Morris adds. “We expect this to reduce travel and troubleshooting time by 25-30%.”
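The diagnostics-to-maintenance flow Morris describes can be sketched generically. This is not the Netilion or Fiix API; the records, status codes and parts list below are invented to show the pattern of turning remote device diagnostics into prioritized work orders before anyone drives to a site.

```python
# Hypothetical diagnostic records, standing in for what an asset-health
# API might return; field names and statuses are illustrative only.
diagnostics = [
    {"device": "PT-310", "site_km": 48, "status": "ok"},
    {"device": "FT-118", "site_km": 31, "status": "drift_detected"},
    {"device": "LT-205", "site_km": 12, "status": "low_signal"},
]

def work_orders(diags):
    """Turn anomalous diagnostics into prioritized maintenance orders so
    crews know what tools or parts to bring before leaving the shop."""
    parts = {"drift_detected": "reference calibrator",
             "low_signal": "replacement transmitter"}
    orders = [
        {"device": d["device"],
         "bring": parts[d["status"]],
         "priority": "high" if d["site_km"] > 25 else "normal"}
        for d in diags if d["status"] != "ok"
    ]
    # "high" sorts before "normal" alphabetically, so urgent sites lead
    return sorted(orders, key=lambda o: o["priority"])

orders = work_orders(diagnostics)
```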
Beyond its obvious flexibility, another one of system virtualization’s benefits is that it can simplify complex process applications, and enable improvements that might have seemed unapproachable before.
For instance, Catalent (www.catalent.com) is a contract development manufacturing organization (CDMO) in Somerset, N.J., that builds flexible production platforms for pharmaceutical,
biotech and consumer health clients. Its equipment runs at more than 50 sites worldwide, which produce more than 70 billion doses per year of nearly 7,000 products for more than 1,000 wholesale customers. Unfortunately, process development (PD) labs typically have varied devices and poorly supported data links, while Catalent reports that available lab data systems with prebuilt connectors are too costly, and don’t fulfill its unified namespace (UNS) strategy. At the same time, the company found that modeling thousands of machine tags for all its PD assets using IIoT solutions was time-consuming and inefficient.
More specifically, Catalent’s high-throughput labs have more than 48 bioreactor platforms, and each of those has hundreds of tags stored locally with no regular backup. Also, inline devices such as cell counters and metabolite analyzers require costly, third-party connectors to extract data.
Consequently, Catalent enlisted HighByte (www.highbyte.com) in 2023 to scale its PD lab processes, and adopted its Intelligence Hub software to create a bioreactor data model that can be deployed to any number of sites in
about an hour. The company also implemented HighByte SQL connector to extract data from inline equipment without needing off-the-shelf connectors, and uses Intelligence Hub to curate, orchestrate and model data at the edge before publishing into HiveMQ’s message broker. This let Catalent develop a scalable UNS with Intelligence Hub serving as the abstraction layer and HiveMQ as the enterprise broker (Figure 2).
So far, Catalent reports its bioreactor data is easily contextualized and available for analytics and remote monitoring. Likewise, its teams save hundreds of hours by eliminating manual tasks like transcribing digital HMI data, while its PD labs can easily curate data by customer, and share insights with them in real time.
“Intelligence Hub is key to our digital architecture because it enables small teams with limited resources to scale out faster than other connectivity tools,” says Chris Demers, global lead for plant data and analytics at Catalent. “For instruments with existing data models, we can start to model and publish data at a new site within minutes. Connections to new instruments only take a few hours, even for the most complicated equipment, and can often be performed by the equipment subject matter expert with a simple drag-and-drop interface.”
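Conceptually, the reusable data model Demers describes maps raw, device-specific tag names onto one named structure with context attached. The rough sketch below is not Intelligence Hub itself; all tag names, attributes and site names are invented to show what deploying one model against many assets looks like.

```python
# One reusable "bioreactor" model mapped onto raw, device-specific tag
# names; a conceptual stand-in for what an edge modeling layer such as
# Intelligence Hub automates. All names here are invented.
MODEL = {"ph": "AI_07.PV", "do_pct": "AI_12.PV", "temp_c": "TIC_3.PV"}

raw_tags = {  # snapshot read from one bioreactor's local tag store
    "AI_07.PV": 7.02, "AI_12.PV": 38.5, "TIC_3.PV": 36.9, "MISC_99": 0,
}

def instantiate(model, tags, asset, site):
    """Bind the model to one asset: attach context, drop unmapped tags."""
    payload = {attr: tags[src] for attr, src in model.items()}
    payload.update({"asset": asset, "site": site})
    return payload

bio_7 = instantiate(MODEL, raw_tags, asset="BRX-07", site="somerset-pd")
```

Because the model, not the device wiring, carries the semantics, rolling it out to a new site reduces to re-binding the same dictionary, which is why deployment times drop from weeks to hours.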
Just as wireless networking can help users and applications span long distances, virtualized control programming can make it easier to connect, develop, implement and scale up control functions and the processes they manage. For example, to keep on simulating and testing its wood-processing solutions despite increasing demand, Comact (www.comact.com) recently worked with Rockwell to better emulate its controllers, access more processing power, and test in a virtual environment.
Typical sawmill projects cover multiple acres, integrate complicated processes and varied equipment, and

Figure 3: To simulate and test its wood-processing solutions despite increasing demand and throughput, Comact recently implemented Rockwell Automation’s FactoryTalk Logix Echo software to better emulate the behavior of its controllers, access greater processing power, and test them in a virtual environment. The company added three sets of 16 emulated processors for a total of 48 usable systems, which reportedly cost about one-tenth as much as physical processors and associated hardware.
usually require 15 or more controllers. Because this processing power was often scarce, Comact reports its engineers shared access, and frequently waited until the end of a project to test it. In addition, they had to convert their control programs to run the emulation software, and were losing configurations. Finally, because the final emulated project wasn’t usable in the field, it also had to be reconverted for installation on the physical hardware (Figure 3).
“We’d always simulated and tested our control code before startup,” says Bruno Laplante, SCADA director at Comact. “However, the existing emulation system relied on a limited number of physical PLC processors on racks in our facility. Plus, the old emulator was no longer up to the task, especially given our increased project volume.”
Because it relies on Rockwell’s ControlLogix 5580 PLCs for its turnkey facilities, Comact gained the increased processing power it needed for simulation and testing by using Rockwell’s FactoryTalk Logix Echo software to emulate the controllers’ behavior. The company added three sets of 16 emulated processors for a total of 48 usable systems.
Laplante estimates this strategy cost about one-tenth as much as physical processors and associated hardware. Using their new emulation software, Comact’s engineers can fully test control code in a virtual environment. The software can emulate machines, production lines, or even an entire plant. These emulated controllers can also be paired with other software for other tasks.
“The process is exactly the same, whether you’re using the virtual processor or the real one,” explains Laplante. “Our team was skeptical at first because when something seems too easy it usually means something is missing in the big picture. However, once they realized it was real and efficient, they embraced the system quickly. Of course, we also worked closely with our IT department to ensure we could allocate the recommended resources to host the virtual machines and support the system. We put everything in place to do it right the first time, and it was a success.”
One of the best ways to understand and begin to employ virtualization is by learning how existing developers and


Just like algae blooms in the ocean and pollen in the spring, there’s been an explosion in the past year or two of new software and AI-related tools and lingo from the IT and mainstream/consumer side. Some are well-known, but many are brand-new, and each promises to develop, run, monitor and scale up process automation functions and systems. Here are some of the latest players and their capabilities:
J Ansible (docs.ansible.com) is agentless, open-source software for automating IT functions, such as deploying applications, managing configurations, and performing cloud provisioning and orchestration.
J Apache Kafka (kafka.apache.org) is an open-source, distributed event-streaming and stream-processing platform. It strives to deliver a unified, low-latency, high-throughput platform for managing real-time data feeds.
J Claude Code (code.claude.com) is an AI-based, agentic, command-line interface (CLI) tool from Anthropic that runs in local software terminals, and uses natural language to interact, understand, construct and debug codebases.
J Codesys is an integrated development environment (IDE) that complies with IEC 61131 for programming controller functions. Its Control Runtime System has long been able to turn PCs into compatible controllers, but more recently its Virtual Control SL soft PLC can run as a Docker or Podman container or virtual machine.
J Containers such as Docker, Podman and others are computationally lightweight, standalone, executable software units. They combine a program’s code, runtime, system tools, libraries, settings and other dependencies, but their minimal footprint lets them consume fewer system resources, such as RAM, CPU cycles and disk space.
J Industrial information interoperability eXchange (i3x) (connect.cesmii.org/i3x) is a standard, manufacturing-information API that provides a common interface for accessing, contextualizing and sharing production data.
J JavaScript object notation (www.JSON.org) is an open-standard, language-independent, data-interchange file format. It employs readable text to relay data blocks, which include attributevalue pairs and array data types, or other serializable content.
J Kubernetes (K8s) is a well-known container orchestration system rather than a container itself, and it’s considered computationally heavier because it typically uses more CPU, RAM and other resources. Likewise, Argo CD is a Kubernetes controller that continuously monitors running applications, while Kargo (kargo.io) is a continuous-promotion orchestration layer that complements Argo CD.
J Large language models (LLM) are trained on huge volumes of text by self-supervised machine learning (ML). They typically generate natural language and perform other language-processing tasks. Generative, pre-trained transformers (GPT) are the biggest, most capable LLMs. They can be tailored for particular jobs or guided by prompts. They gain predictive capabilities by using human syntax, semantics and ontologies, but also acquire preexisting mistakes and prejudices in their training data. The best-known LLMs include ChatGPT, Gemini, Claude and Llama.
J Low-code/no-code software includes visual development environments that help users write software with a graphical user interface (GUI) rather than doing it manually. Using a GUI accelerates code development.
J Model context protocol (MCP, modelcontextprotocol.io) is an open-source standard, also developed by Anthropic, which lets AI-based applications and models like ChatGPT and Claude link securely to outside information sources. It serves as a common bridge, like a USB port for AI. Likewise, MCP servers are programs that introduce functions to AI applications via standardized protocol interfaces, and deliver functions via three building blocks: tools, resources and prompts.
J Node-RED (nodered.org) is free, low-code, visual programming software for merging APIs with devices and Internet services. Its web-based editor generates JavaScript objects, and combines flows with a choice of nodes that are easy to send to its runtime.
J Portainer (www.portainer.io) is a container management control environment that’s self-hosted, and manages Docker, Swarm, Kubernetes and Podman environments with a simple, intuitive, web-based interface.
J Python scripting engine is an interpreter or runtime setting that lets Python (www.python.org) code be executed, usually in a bigger application or platform. This automates tasks, extends capabilities, or customizes how the application acts.
J REpresentational state transfer (REST) defines rules for developing interoperable Internet services, and works as a software framework for distributed hypermedia. Services that meet REST’s six constraints are called RESTful web services (RWS).
J Retrieval-augmented generation (RAG) lets LLMs bring back and integrate new information from outside sources, and permits LLMs to employ domain-specific and/or up-to-date content that isn’t available in training data.
J Software agents are behavior-based programs that can act on behalf of a human user or another software program. They’re reportedly able to start themselves up in response to contextual conditions, activate other functions such as initiating communications, and don’t need to interact with users to perform tasks.
J Unified namespace (UNS) lets users collect data, incorporate meaning and context, and convert it into a format that other users and systems can understand. UNS usually partners with Message Queuing Telemetry Transport (MQTT) publish-subscribe protocol with Sparkplug B framework on top.
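The UNS entry above can be made concrete with a minimal sketch: a semantic, ISA-95-style topic path paired with a self-describing JSON payload, in the spirit of MQTT with a simplified, Sparkplug-like metric. The segment names and payload fields are illustrative, not any vendor's convention, and the actual broker publish is omitted.

```python
import json

def uns_topic(enterprise, site, area, line, asset, attribute):
    """Build an ISA-95-style namespace path of the kind a UNS typically
    standardizes on (segment names here are illustrative)."""
    return "/".join([enterprise, site, area, line, asset, attribute])

def uns_message(topic, value, ts):
    """Pair the topic with a self-describing JSON payload, loosely in the
    spirit of MQTT plus a Sparkplug-like metric structure (simplified)."""
    return topic, json.dumps({"value": value, "timestamp": ts})

topic, payload = uns_message(
    uns_topic("acme", "mobile", "utilities", "boiler-2", "FT-118", "flow"),
    value=41.7, ts="2025-01-07T10:15:00Z",
)
# an MQTT client would now publish(topic, payload); omitted in this sketch
```

Any subscriber that understands the namespace convention can locate and interpret the data without knowing which device produced it, which is the point of the UNS pattern.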




users are employing it in their process controls. For example, Codesys (us.codesys.com) is manufacturer-independent, IEC 61131-3 software with an integrated development environment (IDE), runtime system and other tools, which have been used to program controllers and machines for many years. However, increasingly digitalized functions and operations have fueled the need for more hardware-based controls, maintenance and other resources, so Codesys recently partnered with Cisco (www.cisco.com) to use its switches and other devices to virtualize many of its control capabilities.
This strategy lets users consolidate plant-floor PLCs, industrial PCs, HMIs, network gateways and other physical-compute resources onto virtual machines (VM), which can run on a hyperconverged compute and storage infrastructure (HCI). Codesys and Cisco report that virtualization can make operations more agile and scalable, improve cybersecurity, aid disaster recovery, accelerate application development, reduce costs, enable sustainability, and extend equipment lifecycles.
While the overall network is key to migrating controllers and virtualizing control systems, it’s just as crucial to automate switch reconfiguration and other network-maintenance tasks, so they don’t also become too time-consuming. This can be accomplished by using a software-defined network (SDN) and a centralized, intelligent SDN controller. They can reconfigure routers to handle communications traffic flows, and update all the devices on a network, instead of requiring users to manually update each one separately.
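The fleet-wide update idea behind SDN can be sketched simply: a central controller derives every switch's configuration from one declared intent, instead of someone hand-editing each device. The device names, config schema and intent fields below are invented for illustration; a real controller would push each rendered entry over its southbound protocol.

```python
# Sketch of the SDN idea: one controller computes per-switch configs
# from a single network-wide intent. All names here are invented.
switches = ["sw-area1", "sw-area2", "sw-area3"]

def render_config(switch, vlan, priority_queue):
    """Render one switch's config from the network-wide intent."""
    return {"device": switch, "vlan": vlan,
            "qos": {"queue": priority_queue, "traffic": "control"}}

def plan_rollout(intent):
    """Compute the full change set; a real SDN controller would then push
    each entry to its device over a southbound API."""
    return [render_config(s, intent["vlan"], intent["queue"])
            for s in switches]

changes = plan_rollout({"vlan": 120, "queue": 7})
```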
Because virtualizing PLCs requires dealing with real-time determinism, specialized hardware, legacy code and safety issues, Codesys and Cisco add it’s more complicated to virtualize them. Consequently, Cisco built and tested a software-defined network architecture that can support the newly virtualized controllers (Figure 4).

Figure 4: Codesys and Cisco collaborated to build and test a software-defined network architecture that supports virtualizing industrial automation and control systems (IACS) and controllers. This strategy lets users consolidate plant-floor PLCs, industrial PCs, HMIs, network gateways and other physical-compute resources onto virtual machines (VM), which can run on a hyperconverged compute and storage infrastructure (HCI).
These tools are enabled by Codesys’s virtualized controller automation software, which includes its Development System IDE with IEC 61131-3-compliant textual and graphical editors for programming control logic in HMIs, fieldbuses, I/O, and safety and data exchange functions. The application code created is translated into binary code with its own compilers for the respective target hardware.
Similarly, GTH’s Lane recently helped digitalize and virtualize a battery and energy storage solutions (BESS) plant that wanted to modernize its production lines and increase throughput. Just like many decades-old factories, it had lots of disconnected machines and equipment, and other plant-floor and organizational islands and data silos, including wash-and-clean stations and assembly areas. It also had no central HMI/SCADA system, so shift leads manually passed information down to local HMIs, and wrote up production counts on paper templates for delivery to decision-makers on the business side.
“Staff had to plug into each station, and send information to a central database,” adds Lane. “It was a well-oiled operation, but the question was—how they could keep working without having just the right people in each position all the time?”
To help the BESS plant digitize, upgrade and integrate AI, Lane collaborated with the company to develop a modern data-collection and SCADA system with interconnected, web-based communications. Ironically, some new hardware was required to support this digitalization/virtualization project, including 155 edge servers to handle the terabytes of data generated by the facility’s approximately 15 million device tags, and replace the approximately 500 PLCs it needed to manage them.
“This plant also had some data acquisition (DAQ) devices for quality-assurance metrics, and tracking and tracing on its existing PLC network, but all of these machines and other parts were still islands with little coordination or scaling between them,” says Lane. “There were also many islands because this plant builds all types of batteries and BESSs from raw materials. It operates coil winders, jelly-roll cathode-assembly equipment, electrolyte fillers, cappers and welders, and washing and cleaning equipment.”
Because these processes and their components generate so much data, and require so much networking, cooling and support, Lane reports the BESS company couldn’t simply install a typical server architecture. However, it could distribute its data processing with edge-style servers, which would also help meet its goal of virtualizing as many functions as possible. To help the battery company better coordinate its Ethernet and TCP/IP networks with others using serial communications, these new servers run Ignition’s web-based SCADA software, and network via MQTT brokers or Kafka. They allow connections to different machines at each level on their terms, with the goal of creating a central, unified platform. Kafka is a similar, process-streaming platform that handles real-time data with low latency and fault tolerance, and can connect with many data streams.
“The main technical philosophy here is having a unified namespace (UNS) in which participants use the same language, even as they collect data from different places,” explains Lane. “This virtualization lets them plug and play, and achieve a scalable publish-subscribe system that presents standardized data in a UNS supported by MQTT or Kafka. Overall, virtualization is the key to making all this possible because it creates a system that makes data as redundant and cloneable as needed. Users can already integrate information reliably and redundantly, but virtualization in Docker containers makes it much easier.”
Lane reports that UNS is a conceptual architecture for standardizing communications around existing names and layers. It uses a semantic hierarchy to manage and communicate states of data, and essentially employs that semantic layer to let participants talk.
“The most important aspect of UNS is that it can skip the traditional network layers defined by the ISA-95 Purdue model,” explains Lane. “For example, if an enterprise resource planning (ERP) application is seeking data, then the applicable SCADA system can use UNS to skip the manufacturing execution system (MES) layer, and talk directly to the ERP, which is a lot more efficient.”
Beyond these organizational hurdles, Lane explains that data previously changed each time a new location or user touched it, so each SCADA, MES or ERP layer might add multiple discrepancies due to its individual efforts to contextualize that data. Consequently, a request for measuring equipment efficiency might produce three different numbers from three different layers. Thanks to its publish-subscribe methodology, UNS halts individual contextualization, and provides parameters from one place.
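The UNS idea can be sketched in a few lines of code. The in-memory broker below is an illustrative stand-in modeled on MQTT-style topic filters with '+' and '#' wildcards; the topic names are hypothetical, and a real deployment would use an MQTT broker or Kafka instead:

```python
# Minimal in-memory sketch of a unified-namespace (UNS) publish-subscribe
# broker. Topic names and wildcard rules mimic MQTT conventions; all names
# here are made up for illustration, not any vendor's API.

class UnsBroker:
    def __init__(self):
        self.subscriptions = []   # (pattern-levels, callback) pairs

    def subscribe(self, pattern, callback):
        self.subscriptions.append((pattern.split("/"), callback))

    def publish(self, topic, payload):
        levels = topic.split("/")
        for pattern, callback in self.subscriptions:
            if self._matches(pattern, levels):
                callback(topic, payload)

    @staticmethod
    def _matches(pattern, levels):
        for i, part in enumerate(pattern):
            if part == "#":                 # multi-level wildcard: match rest
                return True
            if i >= len(levels):
                return False
            if part != "+" and part != levels[i]:
                return False
        return len(pattern) == len(levels)

broker = UnsBroker()
received = []

# A business-side consumer subscribes directly to line-level production
# counts -- no intermediate layer re-contextualizes the data on the way.
broker.subscribe("acme/eugene/cell-assembly/+/production/count",
                 lambda t, p: received.append((t, p)))

broker.publish("acme/eugene/cell-assembly/winder-03/production/count", 1412)
broker.publish("acme/eugene/cell-assembly/winder-03/alarms/active", 0)

print(received)  # only the production-count topic matches the subscription
```

Because every participant publishes into the same semantic hierarchy, any consumer can subscribe on its own terms without point-to-point integration.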
“In the case of our battery manufacturer, its digitalization and virtualization effort succeeded because they centralized their SCADA system virtually on server hardware,” adds Lane. “This allowed the company’s production system to collect data, and apply AI-aided decision-making, including AI models trained to solve large, multivariable problems based on initial data, such as determining where and when to route raw materials. So far, the BESS company has implemented multiple digitalization/virtualization projects, and is saving $5-10 million per year.”








Its reach and tools are more capable than ever, but wireless networking still requires conscious investigation and design to succeed
BY JIM MONTAGUE

WIRELESS CAN GET data acquisition, monitoring, maintenance and even control into all kinds of process areas they couldn’t reach before—but this doesn’t mean users can check their brains at the door.
The capability and convenience of wireless isn’t supposed to pave the way for neglect, even though some users may desire it. It’s supposed to free users to identify and complete value-added process improvements they wouldn’t have time for otherwise, including performing several crucial tasks that wireless needs to function properly. Set it and forget it isn’t an option.
Depending on the industries and applications where it’s deployed, wireless has been proving itself on plant floors and out in the field for 20 years or more, typically for data gathering, monitoring
and other non-control tasks. Some wireless technologies, like radios and satellite communications, have been widespread for decades. Several others, like Wi-Fi, Bluetooth and cellular, emerged more recently, achieved standardization, and quickly became ubiquitous. Their deployments are based largely on the different distances, signal strengths, power requirements and data volumes they provide, and how well they can meet users’ requirements.
“We’re still seeing a lot of conversations about which wireless protocols are most effective for reporting data and what their tradeoffs are for production devices. The main competition used to be between ISA100 and WirelessHART standards, but long-range wide-area networking (LoRaWAN) and low-power cellular have been gaining popularity for IIoT applications,” says Andrew Cureton, business development manager for Emerson’s (www.emerson.com) Pervasive
Sensing division for the chemicals industry. “There’s value in each depending on what individual applications need, but users continue to demand longer distances, greater data fidelity and reduced costs.”
For instance, to alleviate severe corrosion among pressure-relief valves at the top of a crude distillation column, Preem’s (www.preem.com) Lysekil refinery in Sweden initially redesigned the area’s complex arrangement of piping and valves, and replaced a broken injection quill that injects corrosion-neutralizing amine into the pipes. This solution worked briefly, but a high rate of corrosion reoccurred after a few months.
Preem’s corrosion engineer, Joakim Nilsson, deployed Emerson’s Rosemount Wireless Corrosion and Erosion transmitters to continuously monitor the column’s corrosion rate and identify its root cause. Correlating corrosion and process data using AspenTech IP.21
software showed the persistent corrosion was caused by high salts in a specific crude slate, which produced corrosion further down the process (Figure 1). However, when this new crude was blended back out, the corrosion still didn’t stop. The refinery’s team inspected the injection location again, and found the injection quill was broken again, likely causing inhibitor to spread along the pipe wall where the non-intrusive transmitters were located.
Consequently, the problematic amine injections were stopped in July 2021, which reduced the corrosion rate.
A second new injection quill was installed in September 2021. Preem has prevented further corrosion, ensured the pipes’ integrity and longevity, and implemented more effective predictive maintenance that avoids unexpected shutdowns. “Using Rosemount Corrosion and Erosion Transmitters with AspenTech IP.21 software, we quickly detected and diagnosed the corrosion’s root cause, which let us take measures to prevent it from happening again,” says Nilsson. “This also avoided further unplanned outages, saving added time and revenue.”
To make wireless sensing more affordable and widespread, Cureton reports Emerson is introducing Synchros low-cost, compact, surface-mount, WirelessHART sensors for numerous common, time-consuming jobs, such as heat-trace and environmental
Innovators have long sought to send power over communication wires, or relay data over power cables. Both efforts have met with varying and typically limited success, usually because of power limits and other physical constraints. However, transmitting power via wireless was always a hard stop due to even greater through-the-air limits. Until now.
Phoenix Contact (www.phoenixcontact.com) recently introduced its NearFi wireless power and data transmission device in the near-field, centimeter range. It can communicate 100 Mbps of data at 60 GHz and up to 4 cm, and transmit power at up to 12 mm. It can even transmit through non-metallic material, such as glass, plastic or wood, enabling communications with clean rooms or other enclosed settings. Its six basic parts are available in three versions: power and data, data only and power only. Each version consists of a base coupler and a remote coupler.
NearFi provides proprietary, bit-oriented data transmission. Its components are transparent to any Ethernet network, so it doesn’t use IP or MAC addresses. This makes it protocol-agnostic, plug-and-play, and able to integrate without configuration or requiring permissions. These capabilities enable NearFi to eliminate the need for

Figure 1: To better track and trace corrosion at the top of a distillation column at its Lysekil refinery, Sweden-based Preem deployed Rosemount Wireless Corrosion and Erosion transmitters near pressure-relief valves, found that high-corrosion periods (red line) were correlated with crude slate blending (green lines), replaced a broken injection quill a second time to eliminate the corrosion, and implemented continuous monitoring to quickly identify any reoccurrence.

slip rings and pluggable connectors on robot arms, automatic guided vehicles (AGV), rotating machines and other equipment, and reduce or eliminate associated maintenance and downtime costs. It can support rotating equipment up to 1,400 rpm, though one user is reportedly already testing higher speeds.
“Because NearFi uses bit-oriented data transmission, it has a latency of 1 microsecond, which is effectively zero. As a result, NearFi is faster than 5G, with the only thing faster being wire. It also delivers 50 W at 2 A and 24 V,” says Danny Walters, product marketing specialist for wireless and Ethernet connection solutions at Phoenix Contact. “Because of its short range, NearFi’s 60 GHz radio is effectively immune to radio interference, which can be a challenge for other wireless components in increasingly saturated environments. This allows equipment like AGVs or autonomous mobile robots (AMR) to roll up to a workcell or loading dock, communicate wirelessly without Wi-Fi, and get precise position confirmation or perform other tasks.”

temperature monitoring. In the future, it also plans to introduce Synchros sensors for level, pressure, electrical measurement and discrete detection, and enable them to support other protocols.
“While traditional sensors must measure parameters like temperature accurately, newer, lower-cost devices can check temperatures for indication. They’re still reporting values, but they don’t need to be as accurate because their users just want to know,
for example, if a process or product is within a 10 °F range,” explains Cureton. “Sensors like Synchros are also communicating and interoperating more often with cloud-computing services, so we’re also pursuing more analytics and maintenance in the cloud. They can be integrated with valves, transmitters and other components, which makes them more usable and streamlined for users, who typically have to manage hundreds or thousands of these devices.”
Even though it’s always on and accessible pretty much everywhere, wireless networking still needs users to carry out several essential tasks for successful communications and data transfers:
• Complete a thorough site survey to identify the individual distances, physical characteristics and other environmental requirements of each location, especially possible interference sources and line-of-sight or other barriers to antennas and wireless signals, such as steel walls and roofing sections, nearby motors and conveyors, hilly or wooded terrain, and heavy stationary equipment, infrastructure or vehicles.
• Determine the speed, bandwidth ranges and information volumes that each data acquisition, storage and transfer application will likely require, as well as how often they’ll have to operate. In general, longer distances, faster speeds, and relaying more data more often demand greater power. Conversely, shorter distances, slower speeds, and sending less data less frequently need less power. However, it’s important to remember that relaying less data at lower power can also conquer longer distances.
• Address power sources and performance issues, and primarily determine whether batteries will be sufficient. For instance, many flows, levels, temperatures, pressures and other process values are generated slowly, and don’t need to be reported as often, so their wireless nodes can use batteries with enough longevity to typically run for years. More frequent and detailed data may need added power harvested or generated onsite, or delivered via local networking.
• Research and evaluate which wireless protocol to employ, especially whether to use Bluetooth, Wi-Fi, long-range wide-area network (LoRaWAN) or IO-Link Wireless, or invest in local, private 4G LTE or 5G cellular. For example, Bluetooth (IEEE 802.15.1) runs at 2.4 GHz to deliver about 64 kbps at up to 30 meters; Wi-Fi (IEEE 802.11) uses the 2.4, 5, 6 and 60 GHz bands to move 433-6,933 Mbps up to 35 meters indoors; and LoRaWAN runs at 915 MHz to send 3-27 kbps at up to 5 km, or 15 km with line-of-sight.
• Settle on a suitable wireless networking strategy and appropriate design that addresses each application’s performance needs and environmental issues, including cybersecurity, and decide which types of antennas, nodes, access points, routers, switches, servers and other components will serve it best. Typical topologies include line-of-sight, peer-to-peer, stars or hub-and-spoke, self-healing meshes and others.
• Test a simple version of the network design using its essential, bare-bones components, make adjustments based on new findings, and add on and scale up as they prove capable and secure. Several wireless technologies, notably Wi-Fi, are known for being natively encrypted, but as usual with cybersecurity, it’s important to double-check, and make certain that updated protections are in place.
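To make the power-versus-reporting-rate trade-off in the checklist concrete, here is a rough battery-life estimate for a wireless node. Every figure below (battery capacity, sleep and transmit currents, airtime per report) is an illustrative assumption, not a vendor specification:

```python
# Rough battery-life estimator for a battery-powered wireless sensor node,
# illustrating why slow process values can run for years on batteries while
# frequent reporting needs harvested or wired power. Figures are assumed.

def battery_life_years(capacity_mah, sleep_ua, tx_ma, tx_seconds,
                       reports_per_day):
    """Convert an average daily current draw into runtime in years."""
    tx_hours_per_day = reports_per_day * tx_seconds / 3600.0
    sleep_hours_per_day = 24.0 - tx_hours_per_day
    mah_per_day = (tx_ma * tx_hours_per_day
                   + (sleep_ua / 1000.0) * sleep_hours_per_day)
    return capacity_mah / mah_per_day / 365.0

# A slow process value reported hourly lasts years on one battery...
slow = battery_life_years(19_000, sleep_ua=10, tx_ma=120, tx_seconds=2,
                          reports_per_day=24)
# ...while reporting every 10 seconds burns through the same battery fast.
fast = battery_life_years(19_000, sleep_ua=10, tx_ma=120, tx_seconds=2,
                          reports_per_day=8_640)
print(round(slow, 1), round(fast, 2))
```

The same arithmetic, run during design, quickly shows which measurement points can stay battery-powered and which need another power source.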
While two of the longest-established wireless protocols based on the IEEE 802.15.4 standard, ISA100 and WirelessHART, have been up and running for many years, they’ve recently been joined by ubiquitous, mainstream protocols like Wi-Fi and Bluetooth. However, because in-the-field users always need more capabilities, they also spurred development of lower-power protocols like LoRaWAN that can send less data over often longer distances, which better suits many process applications.
“We’ve been deploying wireless for more than 15 years, and we’re a founding member of the ISA100 standard. However, in 2019-20, we also started working with LoRaWAN, and introduced our Sushi wireless sensors for monitoring vibration, and later added versions for checking pressure, temperature and steam traps,” says Steven Webster, emerging solutions product manager at Yokogawa. “We’re also extending Sushi with cloud-based solutions such as Wide Area Monitoring (WAM) and Asset Health Insight (AHI). Yokogawa also has standalone, data acquisition (DAQ) software called GA10 that comes with preinstalled AI functions to improve predictive maintenance.”
Before implementing these innovations, Webster reports it’s still crucial for users to walk through their facilities, conduct a site survey, and test Sushi sensors and other wireless components to make sure they can be deployed properly and successfully.
“LoRaWAN leverages sub-GHz spectrum around 900 MHz to support low-power, long-range communications with more consistent coverage in complex environments than higher-frequency wireless technologies such as 2.4 GHz Wi-Fi,” explains Webster. “LoRaWAN is also low-power, so it can use 3.6 VDC batteries. However, its data rate is low, so it typically relays simple information like tag names, process variables or GPS coordinates. It can also go longer distances if line-of-sight is



The American Control Conference (ACC) is the annual conference of the American Automatic Control Council (AACC). It will be held May 26-29, 2026, at the Hilton New Orleans Riverside. ISA is a co-sponsor of the ACC, along with 8 other professional societies.
It is a 4-day event with about 1,300 attendees presenting and discussing research innovations in control for many disciplines. It includes a day of workshops (May 26) intended to bridge the research-practice gap to benefit practitioners, innovators, and instructors.
Here is a brief sampling of several workshops:
There is a well-known “gap” between control theory, research and practice. Today, the growing power of computation platforms, sensors, actuators, the “Internet of Things” and autonomous bots drives an explosion of applications requiring feedback control. This workshop is intended to let practitioners improve their applications through best practices supported by theory, and control researchers credibly apply their work.
For more, visit: https://dabramovitch.com/practical_methods_acc_2026/
Optimization is fundamental for calculating control action, modeling, design of processes, equipment, and products. This workshop will be a practical guide for those using multivariable, constraint-handling, nonlinear optimization.
The workshop will explain the search algorithms for common gradient-based and direct-search optimization techniques, but the focus is on user choices for the objective function, constraints, convergence criterion, number of replications to ensure finding the global optimum, and matching the algorithm to the application’s characteristics.
For more, visit: https://www.r3eda.com/, then use the “Optimization” link, then “Short Course”.
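As a small taste of those user choices, here is a minimal direct-search sketch with replicated starting points on a deliberately multi-modal test function. It is not drawn from the workshop materials; the step-size convergence criterion and the number of replications are the assumptions a user would tune:

```python
# Shrinking-step direct search with replicated starts on a 1-D
# Rastrigin-style objective (many local minima, global minimum f(0) = 0).
# Illustrative sketch of the user choices: convergence criterion (final
# step size) and replications to improve odds of finding the global optimum.
import math

def f(x):
    return 10.0 + x * x - 10.0 * math.cos(2.0 * math.pi * x)

def direct_search(x, step=0.5, tol=1e-6):
    """Minimal derivative-free search: try a step each way, shrink on failure."""
    fx = f(x)
    while step > tol:                      # convergence criterion: step size
        for cand in (x + step, x - step):
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
                break
        else:
            step *= 0.5                    # no improvement either way: refine
    return x, fx

# Replicated starts spread across the range: a single start usually lands
# in a nearby local minimum, so rerun and keep the best result.
starts = [i * 0.5 - 5.0 for i in range(21)]
best_x, best_f = min((direct_search(x0) for x0 in starts), key=lambda r: r[1])
single_x, single_f = direct_search(4.3)    # one arbitrary start, for contrast
print(best_f, single_f)
```

The single start stalls in a local minimum, while the replicated search finds the global one, which is exactly why the replication count is a user choice worth taking seriously.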
To explore all workshops and the conference program, visit https://acc2026.a2c2.org/program/workshops
To explore the conference program or register, visit https://acc2026.a2c2.org/ Conference registration is not required for workshop registration.
Digital twins and model-based control rely on data, yet experiments are often expensive and constrained. Optimal experimental design provides a systematic way to select experiments that reduce model uncertainty. This workshop introduces Pyomo.DoE, an open-source, equation-oriented framework. By treating experimental inputs (e.g., control trajectories, sampling times and operating conditions) as decision variables, Pyomo.DoE enables automated experiment design for complex dynamic systems. For more, visit: https://dowlinglab.github.io/pyomo-doe/Readme.html

available, and also benefits from one-to-one gateway connections that can help reduce latency.”
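Webster’s point about sub-GHz reach can be illustrated with the standard free-space path-loss formula, which shows the built-in advantage of 915 MHz over the 2.4 GHz band before any obstructions are considered. Real sites add losses on top, but the frequency advantage carries over:

```python
# Free-space path-loss comparison: why sub-GHz LoRaWAN reaches farther than
# 2.4 GHz Wi-Fi at the same transmit power. Distances and bands match the
# article's examples; the formula is the standard FSPL relation.
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_mhz) + 32.44

loss_915 = fspl_db(5.0, 915.0)     # LoRaWAN at 5 km
loss_2400 = fspl_db(5.0, 2400.0)   # 2.4 GHz band at the same distance
advantage_db = loss_2400 - loss_915
print(round(loss_915, 1), round(loss_2400, 1), round(advantage_db, 1))
```

The roughly 8 dB difference is distance-independent: at any range, the sub-GHz link keeps that margin to spend on longer reach or deeper penetration.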
Similarly, Oregon’s largest customer-owned utility, Eugene Water & Electric Board (www.eweb.org), recently sought mobile access to its wastewater control system for operations and maintenance staff from anywhere in its facility. The plant has multiple structures that could interfere with wireless signals, but the
municipality required 100% wireless coverage with no access loss. It operates five main processes, and can treat up to 277 million gallons per day for 200,000 residents (Figure 2).
Figure 2: To give its personnel sitewide, mobile access to its wastewater control system, Eugene Water & Electric Board recently conducted a site survey and report with Yokogawa, called for secure network segregation and integration based on the IEC 62443 standard, and adopted its VTS Portal software.


EWEB collaborated with Yokogawa to develop a cybersecure wireless network with remote connectivity. Their site survey identified optimal, indoor and outdoor locations for wireless access points, and its subsequent report specified suitable networking hardware, and secure network segregation and integration based on the IEC 62443 standard. The design also included installing and configuring centralized patch management and antivirus software, backup and recovery functions, and system hardening. To implement the design’s capabilities, the utility adopted Yokogawa’s VTS Portal software to provide secure remote access and monitoring via its wireless network.




























Beyond performing site surveys and initial testing, Webster adds that users should be trained to recognize potential issues when deploying LoRaWAN or other emerging wireless technologies. He also recommends starting with a small pilot—deploying five to 10 Sushi sensors—and verifying their performance in the specific application and operating environment before scaling up to dozens or hundreds of instruments.











“It’s also important to evaluate the capacity of LoRaWAN wireless gateways, such as Yokogawa’s MultiTech, to ensure they can support the required number of devices,” says Webster. “For applications where devices communicate at longer intervals—every 10 to 15 minutes or up to once per hour—a single gateway can typically support up to 200 devices or more.”
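Webster’s gateway-capacity rule of thumb can be sanity-checked with a simple airtime budget. The per-message airtime and duty-cycle margin below are illustrative assumptions, not Yokogawa or MultiTech specifications:

```python
# Rough LoRaWAN gateway-capacity estimate: compare the on-air time a device
# population demands against a conservative channel-busy budget. Airtime per
# message and the usable-airtime fraction are assumed, illustrative values.

def devices_supported(airtime_s, interval_min, usable_airtime_frac=0.02):
    """Devices that fit if the gateway channel is busy at most
    usable_airtime_frac of the time (a conservative collision margin)."""
    airtime_per_device_per_hour = (60.0 / interval_min) * airtime_s
    budget_s_per_hour = 3600.0 * usable_airtime_frac
    return int(budget_s_per_hour / airtime_per_device_per_hour)

# A short sensor uplink (~0.06 s on air) reported every 15 minutes supports
# a fleet in the hundreds per gateway...
print(devices_supported(0.06, 15))
# ...while the same payload every minute shrinks the fleet to dozens.
print(devices_supported(0.06, 1))
```

The same arithmetic explains why longer reporting intervals, as in the quote above, let one gateway serve 200 devices or more.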



R. RUSSELL RHINEHART
russ@r3eda.com
“Process dynamic simulators, which are grounded in accurate models and validated with process data, are a systematic way to calculate improved control benefits.”
Part two of this three-part series reveals control technology concepts to reduce process variability and increase profitability
WHEN PROCESS LIQUID flows through devices with restrictions (thermowells, orifices, valves and pumps), the fluid accelerates. When exiting the device, the fluid returns to its normal velocity. Per the Bernoulli effect, the locally higher velocity lowers the fluid pressure.
Cavitation occurs when liquid pressure falls below vapor pressure or degassing pressure, temporarily causing the fluid to flash boil or degas. When exiting the device, velocity reduces and pressure increases, causing bubbles to collapse. Liquid on either side of the collapsing bubble propagates shock waves that can damage equipment. Higher production rates mean higher fluid velocity, which could lead to cavitation. Though it’s not normally an issue, cavitation can limit throughput. Operating with cavitation is possible, but it reduces equipment life.
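A back-of-the-envelope cavitation check follows directly from the Bernoulli effect described above. The fluid properties and velocities below are illustrative values for a water-like liquid, not data from any particular process:

```python
# Back-of-the-envelope cavitation check via the Bernoulli effect: flow
# accelerating through a restriction trades static pressure for velocity.
# Illustrative numbers (water-like fluid, SI units); friction is ignored.

def throat_pressure(p_upstream_pa, v_upstream, v_throat, density=998.0):
    """Local static pressure inside the restriction, per Bernoulli."""
    return p_upstream_pa - 0.5 * density * (v_throat**2 - v_upstream**2)

def cavitates(p_upstream_pa, v_upstream, v_throat, p_vapor_pa, density=998.0):
    """True if local pressure falls below vapor pressure (flash boiling)."""
    return throat_pressure(p_upstream_pa, v_upstream, v_throat, density) < p_vapor_pa

P_VAPOR_20C = 2_339.0    # vapor pressure of water at 20 degC, Pa

# At the base production rate, the restriction stays safely above vapor
# pressure; raising throughput (higher velocities) drops the throat
# pressure below it -- which is how higher rates trigger cavitation.
print(cavitates(200_000.0, 2.0, 12.0, P_VAPOR_20C))   # False at base rate
print(cavitates(200_000.0, 3.5, 21.0, P_VAPOR_20C))   # True at higher rate
```

The same check also shows the remedies: raising upstream pressure or lowering temperature (which lowers vapor pressure) both move the comparison back to the safe side.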
Cavitation can be eliminated by operating at higher pressure, which is a control decision, such as increasing the level setpoint in a tank, or a design solution such as placing the device in an expander-contractor assembly. Cavitation also can be eliminated by operating at lower temperature, or with improved control that reduces variation in throughput, pressure or temperature. Whatever the solution, the increased production rate or extended equipment life economically justifies the process or setpoint change.
Control systems rely on measurements. If a sensor tends to fail, lose calibration, has noise or poor resolution, or is in a place that causes a measurement delay (deadtime), then control will be degraded. Alternate sensors, locations or inferential measurements can help solve those problems.
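One common inferential measurement is a soft sensor that estimates a slow, delayed lab value from a fast secondary measurement. The sketch below uses a made-up linear temperature-to-composition relation purely for illustration:

```python
# Sketch of an inferential measurement (soft sensor): estimate a slow,
# delayed lab composition from a fast tray-temperature reading via ordinary
# least squares. The data and linear relation are made up for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Historical pairs: tray temperature (fast) vs. lab composition (slow, delayed)
temps = [88.0, 90.0, 92.0, 94.0, 96.0]
comps = [0.96, 0.93, 0.90, 0.87, 0.84]

a, b = fit_line(temps, comps)
soft_sensor = lambda temp: a + b * temp

# The inferred composition is available every scan, with no lab deadtime:
print(round(soft_sensor(91.0), 3))
```

Because the inferred value removes the analyzer or lab deadtime from the loop, the controller can act on composition changes as soon as the temperature moves.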
Increased blending of material in a process reduces the impact of temperature or
composition variation that comes from disturbances or feedstock variation. Blending can be implemented by preparing feedstocks, inline mixing, longer or larger process lines, increased tank volume or more effective tank agitation. An idealized analysis relates variation to the quantity of material being mixed: the standard deviation of the mixed product composition scales with the inverse of the quantity, so if the volume or mass is doubled, the standard deviation of the process variable (PV) is halved.
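That idealized scaling can be checked numerically for a well-mixed tank, which acts as a first-order filter on inlet composition disturbances: for disturbances that are fast relative to the residence time, doubling the holdup roughly halves the output variation. The flow and disturbance values below are illustrative assumptions:

```python
# Numerical check of the blending claim: a well-mixed tank filters inlet
# composition disturbances as a first-order lag. For fast disturbances,
# doubling the residence time (holdup) roughly halves output variation.
# Disturbance frequency and residence times are illustrative values.
import math

def mixed_tank_sigma(tau_s, omega, dt=0.01, t_end=200.0):
    """Euler-integrate c' = (c_in - c)/tau with a sinusoidal inlet
    disturbance, then measure output standard deviation after start-up."""
    c, history = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        c_in = math.sin(omega * t)          # fast inlet disturbance
        c += dt * (c_in - c) / tau_s
        if t > t_end / 2:                   # discard the transient
            history.append(c)
    mean = sum(history) / len(history)
    return (sum((v - mean) ** 2 for v in history) / len(history)) ** 0.5

sigma_v = mixed_tank_sigma(tau_s=10.0, omega=5.0)    # base holdup
sigma_2v = mixed_tank_sigma(tau_s=20.0, omega=5.0)   # doubled holdup
print(round(sigma_v / sigma_2v, 2))   # close to 2: double volume, half sigma
```

For slower disturbances the attenuation is weaker, which is why the inverse scaling is an idealization that applies when the tank is large relative to the disturbance timescale.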
An assignable cause is an occasional external event that upsets the process. It may be a sudden rainstorm that rapidly cools equipment, an occasional raw material batch with an impurity, a measurement sensor failure, an electrical circuit trip or other singular events. It’s not a continual influence on the process.
Statistical control charts can reveal when such an event creates an unusual deviation from normal process variation. Structured procedures, such as Six Sigma or statistical process control (SPC), can organize the search for a culprit event. Once identified, process or management procedures can be changed to eliminate such events, or eliminate their impact on the process. “Assignable cause” means the source of the upset might not be known, but it has a statistically real impact on the process, and it can be identified.
Each time the impact of an assignable cause is eliminated, process variance is improved, along with associated benefits.
The concepts illustrated in Figures 1 and 2 of Part 1 (controlglobal.com/economicrationale) and in traditional statistical control charts assume the classical Gaussian variation (normally distributed variation) in a process variable. However, nonlinearity in a process, interactions or persistence of disturbances

Eliminating cavitation increases production rates and extends equipment life to economically justify processes.
may make PV variation non-Gaussian. In this case, classic metrics of variation (variance and standard deviation), classic statistical procedures (T- or F-test, ANOVA), or classic linear regression may be invalid.
If you can determine the pattern of deviations from the setpoint, and the reduction in that distribution due to improved control, then you can quantify the economic benefits of control improvements, which permit operating closer to constraints.
A classic heuristic rule is that each advance in control strategy halves the PV variation, but that may or may not hold. So, the question is: how do you assess the impact of control improvements on variability and, from that, the impact on process economics? One answer is to use process simulation.
Process dynamic simulators, which are grounded in accurate models and validated with process data, are a systematic way to calculate improved control benefits. In modern parlance, a dynamic simulator that’s a surrogate for the process is called a digital twin.
Adding environmental effects to the simulator (noise, drift, stiction, resolution, etc.) will make the simulation representative of what nature will give you.
A simulation including natural vagaries is a stochastic simulation. By contrast, most simulations are deterministic. When your process is operating, nature doesn’t keep the inlet humidity, fuel BTU content, ambient losses or catalyst reactivity constant. Nature contrives mechanisms that add noise to measurements.
Simple models for generating noise and disturbances can be found in the Control series “Adding realism to dynamic simulation for control testing,” part one (controlglobal.com/dynamicrealism) and part two (controlglobal.com/simulatingdrift), and in the article, “Nonlinear model-based control: using first-principles models in process control” (bit.ly/4sL1oby), published by the International Society of Automation (ISA).
Calibrate your digital twin and its input disturbances, so the simulator matches the variation you currently have in your process.
From extended time simulation, measure the frequency and magnitude of
specification violations, waste generation, on-constraint events, and consumption of material and utilities. Change the simulator to represent control improvements that you’re considering, and run it for an extended time to reveal the new CV and PV distributions. This will allow you to explore setpoint changes that will reduce operating expenses and/or improve throughput. Run it with the new setpoints to assess variations and improvements in quality, throughput, etc.
Use the results of the simulated current operation that match present process values to establish credibility of the simulator with your customers, then use the before-and-after results to credibly reveal the economic impact of your proposed control improvement.
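The before-and-after workflow can be sketched as a stochastic simulation in miniature. The Gaussian variation, setpoints and spec limit below are illustrative placeholders for a calibrated digital twin of a real quality variable:

```python
# Stochastic-simulation sketch of the before/after workflow: run a noisy
# stand-in for a digital twin, count specification violations, halve the
# variation to represent improved control, then move the setpoint closer to
# the constraint. All distributions and limits are illustrative.
import random

def run_twin(setpoint, sigma, upper_spec, n=100_000, seed=42):
    """Fraction of simulated samples that violate the upper spec limit."""
    rng = random.Random(seed)
    violations = sum(1 for _ in range(n)
                     if rng.gauss(setpoint, sigma) > upper_spec)
    return violations / n

UPPER_SPEC = 100.0

base = run_twin(setpoint=94.0, sigma=2.0, upper_spec=UPPER_SPEC)      # ~3 sigma back
improved = run_twin(setpoint=94.0, sigma=1.0, upper_spec=UPPER_SPEC)  # tighter control
closer = run_twin(setpoint=97.0, sigma=1.0, upper_spec=UPPER_SPEC)    # setpoint moved up

# Tighter control nearly eliminates violations at the old setpoint, and
# still matches the old violation rate while running three units closer
# to the constraint -- the source of the economic benefit.
print(base, improved, closer)
```

The same before-and-after comparison, run on a calibrated twin, is what turns a proposed control improvement into a defensible dollar figure.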
Russ Rhinehart started his career in the process industry. After 13 years and rising to engineering supervision, he transferred to a 31-year academic career, serving as the ChE head at Oklahoma State University for 13 years. Russ is a fellow of AIChE and ISA. Now “retired,” he enjoys coaching professionals through books, articles, short courses and postings on his website at www.r3eda.com.









We drive innovation that makes the world healthier, safer, smarter and more sustainable.
Emerson is a global leader in automation technology and software. We help customers in critical industries, like energy, chemical, power and renewables, life sciences and factory automation operate more sustainably while improving productivity, energy security and reliability.
With an unparalleled portfolio of measurement and analytical instrumentation, software, integrated systems, and services, Emerson offers the solutions you need to help take your business to bold new heights. These pioneering technologies and innovative solutions provide the actionable insights you need to meet the changing demands of the process industry while reaching your safety, productivity, and sustainability goals.
Emerson’s field-proven brands – such as Rosemount™, Micro Motion™, Roxar™, Plantweb Insight™ and Flexim – have been helping manufacturers transform their operations and exceed performance expectations for over 50 years.
Key technologies include:
• Measurement Instrumentation: Pressure, Level, Temperature, and Corrosion & Erosion
• Flow Instrumentation: Coriolis, Magnetic, Differential Pressure, Ultrasonic, Vortex, and Multiphase
• Analytical Instrumentation: Gas Analysis, Flame & Gas Detection, and Liquid Analysis
• Industrial Wireless & Connectivity: Wireless Gateways, Networks, and Data Analytics Software





We support our customers in improving their products and in manufacturing them even more efficiently.
Endress+Hauser is a global leader in measurement instrumentation, services and solutions for industrial process engineering. It provides process solutions for flow, level, pressure, liquid analysis, gas analysis, temperature, recording and digital communications, optimizing processes in terms of economic efficiency, safety and environmental impact. The company serves a variety of industries, including chemical, oil & gas, food & beverage, water & wastewater, life sciences, power & energy, and primaries & metals.
Endress+Hauser, a Switzerland-based company, was founded in 1953, and expanded operations to the U.S. in 1970. More than 80% of all Endress+Hauser instruments ordered and shipped within the U.S. are manufactured in the U.S. This means customers can rely on Endress+Hauser to deliver the products they need quickly. This strong manufacturing base is complemented by a complete network of sales partners and service locations to support its customers – wherever their instruments are installed.
Premium services, customized solutions, project management and IIoT applications round out Endress+Hauser’s offering, helping customers gain efficiency, increase quality and maximize plant availability.





At Inductive Automation, our mission is to create industrial software that empowers our customers to swiftly turn great ideas into reality by removing all technological and economic obstacles.
Ignition by Inductive Automation® is an unlimited platform for enterprise integration, industrial applications, and more. It empowers users to connect to all of the data across their enterprise, rapidly develop any type of industrial automation system, and scale their systems in any way, without limits. Ignition also offers unlimited extensibility through the addition of fully integrated modules.
The latest version, Ignition 8.3, is the world’s most advanced industrial platform, empowering your organization to unlock your data’s full potential, simplify your project workflow, manage and configure systems with ease, and deploy and secure large systems like never before.
SCADA: Control, track, display, and analyze your process.
IIoT: Make your data more accessible and efficient with MQTT.
Enterprise: Empower teams with better data to make smarter decisions.
Digital Transformation: One platform for collecting data, connecting devices, integrating systems, visualizing operations, and deploying solutions all across your organization.
HMI: Build optimized screens to monitor and control your machinery.
Alarming: Build complex alarming systems and get notifications instantly.
Reporting: Create and deliver dynamic, database-driven industrial reports.
Mobile: Easily build monitoring and control applications in HTML5 with the Ignition Perspective Module.
Power Monitoring: Centrally monitor, control, and optimize your power supply.
Unified Namespace: Maximize data management with an Ignition Unified Namespace solution.
Ignition Edge: Capture, process, and visualize critical data at the remote edge of your network.
Ignition Cloud Edition: Extend your enterprise operations, leverage elastic architectures, and securely host and deploy solutions on leading cloud platforms.
Ignition Technology Providers: Find valuable products and services that complement Ignition solutions, built by trusted companies.





KROHNE Inc. is headquartered in Beverly, MA and serves the U.S., Canada, Mexico, Central America and Caribbean industrial markets through a network of representatives, distributors and direct sales personnel. Our mission is to provide unparalleled application expertise, on-time delivery, and cost-effective quality products so that we can exceed our valued customers’ expectations.
We offer a technically proficient, KROHNE-trained sales force that gets involved in all aspects of technical sales and applications support. Furthermore, our dedicated technical support, field application and repair teams are located throughout the regions to provide timely and effective services at your site or at our factory. Our TASC (Technical Application Support Center) is the heart of KROHNE’s technical support capability. This group of trained engineers and technicians is at your disposal by phone, fax, or e-mail for product application, installation, operation or troubleshooting questions.
Our factory stocks the most popular devices and spare parts, and can combine prefabricated device subassemblies into complete calibrated instruments for quick delivery. It also houses calibration equipment for variable area flowmeters, Coriolis mass meters, mag meters, as well as radar and TDR level instruments.

We value our customers dearly and commit ourselves to putting them first.




For nearly 80 years, Massa Products Corporation has distinguished itself as the leader in SONAR and ultrasonic technologies through its expertise in electro-acoustics, focus on innovation, and commitment to delivering application-specific products.
Massa Products Corp. designs, engineers, and manufactures SONAR and ultrasonic sensing solutions for defense and industrial applications. Massa’s full line of sensors and transducers, in addition to our robust customization capabilities, allows us to provide the industry with critical tools to increase efficiency, awareness, and agility.
Founded 80 years ago by industry pioneer Frank Massa, Massa’s robust experience and commitment to innovation have pushed the boundaries of acoustic sensing beyond what was previously thought possible. False echoes, turbulent and uneven surfaces, harsh environments, unique form factors, and precise performance are just a few challenges Massa routinely overcomes. Our pragmatic approach to product development allows us to make practical, task-specific improvements that allow our solutions to thrive where alternative products and technologies fail.
Located in Hingham, MA, Massa is a 3rd Generation Family-owned certified small business. By controlling the design, engineering, and manufacturing efforts all under one roof, Massa ensures quality and performance every step of the way from concept to final testing. It also provides for the unique ability to develop bespoke solutions and bring them to market quickly through superior collaboration in each phase.
The bottom line: Massa’s unique combination of acoustical expertise, industry experience, and manufacturing agility allows us to provide task-specific solutions that are durable, accurate, and more reliable than competing solutions.
Massa Products Corp. is ISO9001 Certified




Our mission: Make tough and reliable products. Continue being a world leader in the design and manufacture of interface instruments for industrial process control, system integration, and factory automation. Provide nothing less than the best quality in process industry products and exceptional service, because our success isn’t possible without loyal customers and relationships.
Moore Industries-International, Inc. is a world leader in the design and manufacture of exceptionally rugged, reliable and high-quality field and DIN rail-mounted instrumentation for the process monitoring and control industries. Product lines include temperature transmitters and assemblies; functional safety solutions; signal isolators and converters; alarm trips and trip amplifiers; I/P and P/I converters; remote I/O and RTUs; HART® gateways, monitors and interfaces; and more. Our worldwide sales and support offices provide excellent customer service and solutions for many industries including: chemical, petrochemical, utilities, petroleum extraction and refining, data centers, pulp and paper, food and beverage, mining and metal refining, pharmaceuticals, and biotechnology.





We focus on reducing operational risks by providing remote sensing solutions that minimize the need for personnel to conduct manual inventory checks in hazardous environments.
RETTAR is a technology company specializing in bulk solids volume measurement and industrial AI sensing. Founded in 2007 with its roots in Tsinghua University, we combine academic research with industrial expertise to create solutions designed to optimize global resource management.
Innovation is the core of our DNA. While we provide a comprehensive range of high-reliability sensors, our 3DPro Series represents a significant technological evolution: the transition from traditional point sensing to multi-dimensional 3D visualization. Specifically engineered for high-dust and volatile environments, our 3D Radar Scanners utilize high-penetration radar waves and AI algorithms to provide real-time topographical mapping. This enables industrial operators to obtain a comprehensive view of their inventory in silos and stockpiles, even when visibility is severely obstructed by dense dust or vapor.


As a technology partner for the Mining, Energy, and Feed sectors, RETTAR provides the critical data layer necessary for supply chain optimization. Our systems support enhanced workplace safety by offering robust monitoring capabilities that protect personnel from the inherent risks of hazardous storage environments.
Supported by a Global R&D Center and a portfolio of over 180 technology patents, we maintain a consistent commitment to technical refinement. Our solutions meet international quality standards, holding essential certifications such as CE, ATEX, and RoHS, ensuring compliant performance for customers in over 40 countries.



Tadiran is the world’s leading manufacturer of ultra-long life lithium batteries for industrial applications. Nearly 50 years ago, Tadiran pioneered bobbin-type lithium thionyl chloride (LiSOCl2) batteries for low-power applications at remote sites and harsh environments. Common applications include Industrial IoT, SCADA, asset tracking, AMR/AMI utility metering, infrastructure, medical, mil/aero, oceanographic, energy harvesting, toll tags and general automotive, oil & gas, flow metering, and cold chain, to name a few.
Tadiran bobbin-type LiSOCl2 batteries operate for up to 40 years with an annual self-discharge rate as low as 0.7% per year, while also delivering the high pulses required for two-way wireless communications.
Tadiran products include:
XOL Series – delivering up to 40-year operating life with low pulses
iXtra Series – delivering up to 10-year operating life with moderate pulses
PulsesPlus Series – delivering up to 40-year operating life with very high pulses
Extended temp – operating reliably in harsh environments ranging from -80 °C to +125 °C
TLM Series – delivering up to 20-year shelf life with high pulses of short duration
TLI Series – rechargeable Li-ion cells featuring 25-year operating life and 5,000 charging cycles with high pulses
Tadiran industrial grade batteries are safe, environmentally friendly, and UL-listed with numerous third-party certifications. To choose the right battery, start by visiting tadiranbat.com and submitting an online applications questionnaire.





With innovative technologies and services, VEGA develops solutions that inspire. Through our sense of simplicity and our focus on people, we are looking to the future with curiosity. Locally grounded and globally connected, together we give values – measurement values as well as human values – a home. VEGA is the HOME OF VALUES.
For more than 70 years, VEGA has provided industry-leading products for the measurement of level, pressure, density, and weight. Through constant innovation, the company has become the market leader in radar level measurement instrumentation.
VEGA has sensors in use in over one million applications around the world. Their latest innovation is the VEGAPULS 6X non-contact radar sensor, the one sensor for any application. Each VEGAPULS 6X is configured to the customer’s application, so there’s no more navigating confusing model numbers and frequency ranges. Powered by VEGA’s new radar chip, it is VEGA’s first radar sensor with both SIL certification and IEC 62443 cybersecurity compliance, ensuring unmatched safety and security.
VEGA manufactures hygienic pressure sensors and point level devices with a brilliant advantage. The VEGABAR pressure sensors and VEGAPOINT level switches use a universal hygienic adapter system, which provides the flexibility to keep installation effort and parts inventory to a minimum. Process fittings can be selected as needed to meet application-specific requirements.
The VEGABAR 20 and 30 series come standard with a 360° switching status display, which can easily be seen from any direction. The color of the illuminated ring can be customized with one of 256 different colors, all of which remain clearly visible, even in daylight. At a glance, users can see when the process is running, if the sensor is switching, or if the sensor requires maintenance.
Standard IO-Link protocol is built into every pressure sensor and point level switch, ensuring universal, simple communication. This gives these instruments a standardized communication platform, enabling seamless data transfer and simple system integration.
VEGA Americas designs, manufactures, and sells these products throughout North, Central, and South America and is a wholly-owned subsidiary of VEGA Grieshaber KG, headquartered in Schiltach, Germany. VEGA employs more than 2,700 people around the world, 350+ of which are employed with VEGA Americas.





At Yaskawa, we help you explore what’s possible, and open new doors to opportunity. Rather than accepting the status quo, we invite you to wonder, “What if …?” And then we make it possible. That dedication to engineering and innovation is what makes us different.
Experience is often the difference between solving a problem the right way and settling for “good enough.” Our global expertise is unmatched and unquestioned, with 100+ years of manufacturing excellence, sales, service and manufacturing locations in 30 countries, and $4.5 billion in global sales per year.
We provide both standard products and tailor-made solutions, all backed by proven quality and reliability. We continuously work to save you money, time, and energy because we believe your machine can always run faster, smoother, and more productively. It’s about making the correct diagnoses, creating the right automation machinery, and implementing it in the best way possible.
Yaskawa low- and medium-voltage AC Variable Frequency Drives cover every automation application in the industrial plant. With outputs ranging from fractional to 16,000 HP, they have a legendary reputation for reliability and advanced technology. Our latest variable frequency drives provide simple motor setup with highly flexible network communications, embedded functional safety, no-power programming, and easy-to-use tools featuring mobile device connectivity with our Drive Wizard mobile app.
Yaskawa AC Servo Systems come to a precise position with a speed and consistency that is unmatched in the industry. Connect our rotary, linear, and direct drive motors to an advanced Yaskawa iC9200 machine controller to manage motion, logic, kinematics, safety, security, and more from a single EtherCAT-based controller utilizing our iCube Control™ platform.
Over 600,000 Yaskawa Robots are at work worldwide, with 150+ models to choose from and the strength of decades of application expertise. Our industrial robots increase efficiency, provide consistent quality, and boost productivity to deliver outstanding ROI.





Yokogawa provides advanced solutions in measurement, control, and information to customers across a broad range of industries, including energy, chemicals, materials, pharmaceuticals, and food. Yokogawa addresses customer challenges in optimizing production, assets, and supply chains through the effective application of digital technologies that enable the transition to autonomous operations.
Founded in Tokyo in 1915, the company now has more than 17,000 employees in a global network of 128 companies spanning 62 countries, all working toward a more sustainable society.
OpreX is the comprehensive brand for Yokogawa’s industrial automation and control business. The OpreX brand stands for excellence in the technology and solutions that Yokogawa cultivates through the co-creation of value with its customers.
OpreX Information: Information technology that helps customers leverage the value of data, drawing on Yokogawa’s strength in operational technology (OT).
OpreX Control: Exceptionally reliable control technology that responds quickly to changes in management and operations and establishes the foundation needed for high efficiency, high quality, safety, and stability in plant operations.
OpreX Measurement: Highly reliable measurement technology for the implementation of value-enhancing operational technology (OT) and information technology (IT) integration.
OpreX Consulting: Consulting services that leverage our exceptional knowledge and experience for the investigation and identification of customer issues and provision of to-be models.
OpreX Execution: Innovative, world-class project implementation capabilities, built on a strong track record of success all over the globe.
OpreX Lifecycle: Maintenance capabilities backed by extensive experience gained from working closely with our customers allow us to optimize operations over the entire plant lifecycle.
OpreX Integrated Solutions: Integrated solutions that make use of products, services, and expertise in specific fields to address a variety of customer challenges.


Recognizes members who have made exceptional contributions to the automation profession
THE INTERNATIONAL SOCIETY of Automation (ISA.org) congratulated eight individuals on Jan. 27 after elevating them to the distinguished grade of ISA Fellow. The esteemed Fellow member grade is one of ISA’s highest honors, recognizing only senior members who have made exceptional contributions to the automation profession in practice or in academia.
“ISA is honored to recognize these outstanding achievers, whose remarkable contributions have advanced the automation industry,” says Ashley Weckwerth, ISA president. “We thank everyone who submitted nominations, and congratulate those being elevated to Fellow. It’s a privilege to recognize their accomplishments.”
ISA’s eight new Fellows are:
• John Cusimano, of Armexa (armexa.com), for pioneering cybersecurity integration in industrial automation, leading ISA/IEC 62443 standards, and advancing education and best practices in control systems and cybersecurity through community leadership.
• Sarah Fluchs, of Admeritia GmbH (www.admeritia.de), for pioneering cybersecurity integration into industrial automation engineering, leading standards efforts, and bridging the gap between control engineering and cybersecurity through research, tools and international community leadership.
• Ramachandra Kerur, of Sunlux Technovations Pvt. Ltd. (www.sunluxtechnovations.com), for pioneering automation and fault-tolerant control systems for defense, space, metropolitan and industrial applications, enabling safe, uninterrupted operations in critical environments, impacting more than 100 space and industrial missions.
• Rao Mannepalli, of Leidos (leidos.com), for pioneering leadership and sustained contributions in developing aerospace technologies and systems, including missiles, launch vehicles and weapons systems to advance national defense and space exploration.
• Andrew McDonald, of Future of Manufacturing LLC (www.linkedin.com/in/andymcdonald), for pioneering a standard communication language for packaging machinery, and championing standards and innovation in digitalizing manufacturing by applying advanced technologies.
• Glenn Anthony Merrell (www.linkedin.com/in/glenn-merrellcap-a8212a9) for contributing to industrial control system security, training, automation and control systems, critical infrastructure protection, and advancing standards and security workforce development.
• Eloise L. Roche, of SIS-TECH Solutions LP (sis-tech.com), for contributing to the process industries’ functional safety in safety instrumented system (SIS) technology by developing guidance, standards, training courses, publications and presentations.
• Constantino Seixas Filho, of Accenture (accenture.com), for pioneering control and automation engineering education, and advancing remote operations center implementation across Brazil, shaping automation’s growth and professionalization in Latin America.
Valmet Oyj (valmet.com) reported Feb. 2 that it will provide automation systems for Daklo 1-3 Power Co., Ltd.’s Daklo 1 and Daklo 3 hydropower plants presently under construction in Kon Plong ward, Quang Ngai province, Vietnam. They’ll control and monitor all essential hydropower processes, ensure safe, reliable, high-performance electricity generation, and support sustainability goals by optimizing water balance and maximizing energy and operational efficiency.
Local Valmet partner Industries Equipment and Solution Company Ltd. (IESC) ordered the automation systems as part of its contract with the dams’ engineering procurement and construction (EPC) contractor. Valmet will implement distributed control systems (DCS) and electric governor controls for Daklo 1 and Daklo 3. The project’s scope also covers engineering, design, supply and commissioning of the complete automation system to ensure safe, reliable, and efficient operation. Terms of the purchase were not disclosed.
“This solution ensures long-term reliability and efficiency through robust performance, complemented by competitive pricing,” explained Nguyen Viet Cuong, deputy director of IESC. “Valmet’s local lifecycle support and collaborative approach with our EPC partner gave us confidence in a successful implementation.”

With capacities of 12 MW and 22 MW, respectively, Daklo 1 and Daklo 3 are expected to be key sources of renewable energy in Quang Ngai province. These run-of-river plants will contribute clean electricity to Vietnam’s national grid, support regional energy supply, and efficiently utilize local water resources through modern hydropower infrastructure. The plants are expected to enter commercial operation in spring 2027.
“This project demonstrates Valmet’s commitment to Vietnam’s renewable energy sector, particularly hydropower, by providing advanced automation solutions and strong local partnerships,” adds Huynh Quang Tuyen, sales manager for automation solutions at Valmet. “Through close collaboration, we can ensure long-term lifecycle support and operational excellence.”
Phoenix Contact USA (www.phoenixcontact.com) launched its Technology Alliance Program (TAP) on Jan. 28 to expand the capabilities of its EP Raptor industrial network switches and its broader computing ecosystem with validated, interoperable third-party applications. In the program’s first wave, Phoenix Contact has formed partnerships with:
• CyVault, which supplies OT/ICS cyber-defense solutions.
• EmberOT, which visualizes cyber-threats and safeguards systems.
• Industrial Defender, which provides OT asset management and compliance automation.
• JPEmbedded, which delivers communication and cybersecurity solutions for power systems and industrial networks.
• PCItek, which supplies integration consulting, solutions and products to utilities.
• Radiflow, which provides unified visibility, real-time threat detection, and automated risk management.
• SyskeyOT, which delivers cybersecurity for OT/IACS networks.
Phoenix Contact previously announced partnerships with Xona and Forescout, which are also part of TAP. Its simple principle is that users who rely on Phoenix Contact’s substation-grade hardware can choose the applications that best fit their operational and cybersecurity needs. The program is intended to accelerate the shift from multiple standalone appliances to converged platforms capable of hosting cybersecurity, automation, and analytics applications directly on EP Raptor switches, Phoenix Contact industrial PCs, or within the PLCnext Technology ecosystem.
“Industrial organizations are under enormous pressure to simplify their architectures, while improving security, visibility and operational performance,” says Eric Reichert, automation product director at Phoenix Contact USA. “Our TAP brings together an ecosystem of proven OT and cybersecurity partners, and gives our customers a way to consolidate hardware, reduce complexity, and run their preferred applications on one high-reliability platform.”
• Flow-control supplier John Crane (johncrane.com) reported Feb. 2 that it’s supporting NASA’s upcoming Artemis II mission to take four astronauts around the Moon aboard the Orion spacecraft. The company is supplying specialized filtration sieves for the spacecraft’s propellant management devices (PMD), which play a vital role in low-gravity environments.
• Emerson (www.emerson.com) introduced Jan. 22 the latest release of Aspen Technology’s Mtell asset performance management (APM) software. It lets users drive immediate value, and seamlessly scale from foundational asset-health monitoring to AI-enabled failure prediction and continuous operational improvement.
• Aveva (www.aveva.com) reported Jan. 20 that it’s completed its first purchase of sustainable aviation fuel certificates (SAFc) five years early. This is an important step in its commitment to the World Economic Forum’s First Movers Coalition (FMC). Aveva bought its SAFcs from British Airways (BA) for use at London’s Heathrow airport. BA has a multi-year agreement to get SAF from EcoCeres, which is a Hong Kong-based renewable fuels producer.
• Tosi (www.tosi.net) granted Jan. 29 its users access to its Tosi Platform that unifies connectivity, visibility and security across operational technology (OT) environments regardless of size or complexity. The platform’s heart is Tosi Control, a cloud-based console that enables OT and security teams to connect remote sites, monitor critical assets, and manage access, all in real time and without requiring extensive IT resources.
• The International Society of Automation announced Jan. 15 that it’s published a new position paper, “Automation and Ethical Sourcing” (www.isa.org/position-papers). It explores how industrial automation technologies and practices can help organizations strengthen ethical sourcing across global supply chains, and increase transparency, accountability, worker safety and environmental stewardship, while supporting long-term business resilience.
• ABB (go.abb/motion) agreed on Jan. 28 to supply motor control solutions to Fervo Energy (fervoenergy.com), a Texas-based supplier of geothermal systems, for its Cape Station project in southwest Utah. Scheduled to start delivering carbon-free power to the grid in 2026, Cape Station is expected to become the largest geothermal development by installed capacity.
• Cognite (www.cognite.com) reported Jan. 27 that it’s helping Snowflake (www.snowflake.com) launch its Energy Solutions program to let energy organizations use data and AI more effectively. The two companies report their collaboration will extend to others, and empower process industry firms to modernize infrastructure, improve efficiency, and achieve a more reliable and lower-carbon future.
Enable portable, non-invasive, hybrid capabilities everywhere
NINE CLAMP-ON ULTRASONICS
SINGLE-USE CORIOLIS FOR BIOTECH

Flexim Fluxus/Piox 731 nine-model series of modular, clamp-on, ultrasonic flowmeters provides non-intrusive, precise volumetric and mass flow measurement with no media limitations. They feature disturbance correction for accuracy in challenging conditions. Flexim Fluxus/Piox 731 also has Advanced Meter Verification to validate performance, Wet Gas Correction to ensure accurate gas stream measurements, and Dynamic Gas Metering for mass and volume correction.
EMERSON www.emerson.com/en-us/catalog/flexim-sku-fluxusf731-non-intrusive-ultrasonic-liquid-flow-meter
VORTEX WITH UP TO 400 MM CONNECTIONS AND 450 °C

VY vortex flowmeters provide reliable flow measurement for liquids, gases and saturated and superheated steam with connection sizes up to 400 mm, temperatures up to 450 °C (high-temp models) and pressures to ASME Class 1500. They feature advanced spectral signal processing (SSP) for stable, vibration-resistant readings, self-diagnostics, remote maintenance, and support for HART 7, Foundation Fieldbus and Modbus. VY’s removable shedder bar design enables easy service and long-term operation.
YOKOGAWA 800-888-6400; https://tinyurl.com/z2hcp7ec

d·flux multiparameter, mass flowmeter and controller for higher flows provides precise measurement outputs for five process variables, including mass flow, volumetric flow, temperature, pressure and density. It features accuracy up to ±0.3% of user full scale and ±0.5% of measured value, the ability to measure and control flow rates up to 1,508 slpm (air), and sensor stability with less than 0.2% of measured value/year after tare.
SIERRA INSTRUMENTS
www.sierrainstruments.com/products/dflux.html
Proline Promass U 500 for flow measurement in single-use applications is reported to be the first fully cGMP-compliant single-use Coriolis flowmeter for biotechnology. It consists of an installed or tabletop base unit with power supply, exciter, sensors and other electronics. Inserted into the base unit, the disposable component has four nominal diameters: DN 4 (1⁄8 in.), DN 6 (¼ in.), DN 15 (½ in.) and DN 25 (1 in.).
ENDRESS+HAUSER eh.digital/3BaGUD7

CLAMP-ON, ULTRASONIC FOR SINGLE-USE BIOPROCESSING
BCU Series clamp-on, ultrasonic flowmeter is designed for single-use, bioprocessing applications. It delivers accurate, non-invasive flow measurement without sensors contacting the fluid path, which reduces contamination risks and installation time. BCU features integrated Ethernet/IP communications, fast setup and repeatable performance across common tubing sizes. It also supports real-time process monitoring, while eliminating gateways and external converters.
BROOKS INSTRUMENT

888-554-3569; www.brooksinstrument.com/bcu-series
FSZ S-Flow integrated, clamp-on, ultrasonic flowmeter measures liquids and gases in small pipes. It’s installed with four screws that avoid pipe modifications, while its integrated detector and flow transmitter save space, and simplify configurations. An optional, built-in temperature sensor enables simultaneous flow and temperature measurements.

FUJI ELECTRIC americas.fujielectric.com/products/instrumentation/flow-meters
Dynasonics DXN-5P portable, hybrid, ultrasonic flowmeter streamlines process diagnostics by clamping onto the outside of pipes for easy installation. It can handle a variety of pipe sizes and conditions, including a 12 in. to 48 in. size range, -40 °F to 250 °F temperatures, 0.7 to 33,000 gpm flow range, and ±0.5% ± 0.025 ft/s (0.008 m/s) accuracy. DXN-5P aids equipment diagnoses by delivering temporary flow readings, and identifying air pockets or sediment that can damage equipment.
BADGER METER INC.
www.badgermeter.com/products/meters/ultrasonic/dynasonics-dxn-5p

DP TRANSMITTERS PROVIDE ±0.1% ACCURACY
1800DP differential pressure transmitters provide accurate measurement for flow applications ranging from -0.87 psid to +0.87 psid through -72.5 psid to 1,450 psid. With a standard ±0.1% accuracy, these conventional transmitters also feature: 4-20 mA with optional HART, 1-5 VDC low power or Modbus outputs; ±0.075% accuracy (FS); aluminum, explosion-proof housing; integral LCD display; EMC (EMI/RFI) protection; and ATEX/IECEx and CSA certifications.
SOR INC. https://tinyurl.com/mrxbj6za

ProSense FSC mechatronic digital flow sensors monitor liquid media, and provide flow sensing up to 50 GPM and up to 212 °F. They feature stainless-steel construction, are available with 3/4-, 1- or 1.5-inch, female national pipe tapered (FNPT) process connections, and provide two analog, frequency or switch outputs based on flow or temperature. A pushbutton interface allows quick and easy setup, while a bright, two-color, digital display prominently shows process variable data.
AUTOMATIONDIRECT www.automationdirect.com/flow-sensors

RFO series flow-rate monitors from Gems combine a compact paddlewheel design with solid-state electronics, offering accurate flow-rate measurement and integral visual confirmation. With brass, stainless-steel or polypropylene construction, they can handle diverse operating environments and pressures up to 500 psi. Featuring pulsed outputs of 4.5 to 24 VDC, RFO is suitable for water purification, chemical metering and semiconductor processing.
GALCO www.galco.com
Hydrogen (H2) ST Series flowmeters feature 100:1 turndowns and flow ranges from 0.25 to 1,000 SFPS (0.07 NMPS to 305 NMPS). Their transmitter can be integrally or remotely mounted at up to 1,000 feet (305 m), and they’re available in DC- or AC-powered versions. H2 ST displays flow rate and totalizer on an LCD touchscreen. They also provide digital bus communications, and carry global agency approvals for Div. 1/Zone 1 with a NEMA 4X/IP 67 rated aluminum or 316 stainless-steel enclosure.

FLUID COMPONENTS INTERNATIONAL (FCI) 760-744-6950; www.fluidcomponents.com/products/mass-flow-meters

All in one device, VersaFlow Coriolis mass flowmeter measures mass flow, density, volume, temperature and solids content. It’s designed for liquids, gases and high gas volume fraction (GVF), dual-phase fluid applications, and accommodates a wide range of temperature and pressure conditions. VersaFlow is available in straight and bent tube designs, and composed of materials like Hastelloy, titanium, or stainless steel. It excels in both high-pressure and low-temperature cryogenic applications across various industries.
HONEYWELL
https://www.honeywell.com

Control’s monthly resources guide
This 3-minute video, “Pushing boundaries of digital transformation,” demonstrates how polymers manufacturer Covestro used Aveva Process Simulation software to eliminate process design, engineering and simulation inefficiencies. It implemented a single model for its asset lifecycle and all departments involved with process simulation. It’s at www.aveva.com/en/perspectives/success-stories/covestro
AVEVA
www.aveva.com
TEN APPLICATION EXAMPLES
This website, “Digital twin applications,” covers basic aspects, but it also links to 10 online articles, including decoding asset systems, advanced process control (APC) at Boliden mining company, material handling chains, virtual drive tuning, simulation for electrical and process automation systems, interfacing pharmaceutical equipment with manufacturing execution systems (MES), and an electromagnetic flowmeter simulation. They’re all at new.abb.com/industrial-software/features/model-predictivecontrol-mpc/digital-twin-applications
ABB new.abb.com
This 42-page report, “Evaluation of digital twin modeling and simulation,” describes how digital twins can enable nuclear power plant (NPP) operations and lifecycles by contributing more data for improved monitoring, control supervision and security, as well as allowing better analyses, more accurate insights and predictions, and improved decision-making. It’s at www.sandia.gov/app/uploads/sites/273/2024/11/SAND_Digital_Twins_Final.pdf
SANDIA NATIONAL LABS www.sandia.gov
This 56-minute video, “Digital twins for digital transformation,” discusses development of a gas-turbine digital twin for performance diagnostics and optimization using Simulink Real-Time and Simulink PLC Coder software. It’s at www.mathworks.com/videos/digital-twins-for-digital-transformation-1637251097679.html, where there are several other simulation-related videos. There’s also a 9-minute “What is a digital twin?” video that’s Part 5 of a Matlab series on predictive maintenance. It covers modeling methods, including physics-based and data-driven, and is at www.youtube.com/watch?v=cfbKR48nSyQ
MATHWORKS www.mathworks.com
This online article, “A comprehensive guide to digital twin simulation for beginners,” covers basic definitions and concepts, historical development, applicable technologies and operations, IoT infrastructure and sensors, applications in different industries, and compares digital twins with simulations. It’s at www.simio.com/a-comprehensive-guide-to-digital-twin-simulation-for-beginners SIMIO www.simio.com
This 76-minute video, “Building agentic AI-powered digital twins for manufacturing operations,” covers how Sight Machine developed its Operator Agent solution with OpenUSD 3D scene-description and file-framework software, Nvidia Omniverse libraries, and Microsoft Azure. It also shows how Sight Machine and Kinetic Vision developers combine live production data, agentic AI recommendations, and physically
accurate digital twins to develop insights that let users spot issues faster and optimize production lines. It’s at www.youtube.com/watch?v=8KZiwhWUMa8 NVIDIA nvda.com
These two whitepapers, the 12-page “Supercharging industry transformation with comprehensive digital twin” and the 15-page “AI and digital twin: turbocharging the digital enterprise,” cover simulation concepts, and show how to design, reproduce and optimize processes, integrate AI tools to contextualize data, remove bottlenecks, improve model fidelity, and speed up optimization. They’re at www.siemens.com/global/en/products/automation/topic-areas/digital-enterprise/digital-twin.html SIEMENS www.siemens.com
This 60-minute webinar, “Hesco-Rockwell Emulate 3D,” includes application areas, demonstrations, experiments, testing, emulation, open CAD and control system connectivity. It’s at www.youtube.com/watch?v=NPvUlyFJAao HESCO hesconet.com
This blog post, “How to implement digital twin technology in your manufacturing process: a step-by-step tutorial,” covers core components, assessing readiness, picking the right software, integrating with existing systems, construction and deployment, and optimizing processes. It’s at www.manufacturenow.in/blogs/digital-twin-manufacturing-tutorial MANUFACTURE NOW www.manufacturenow.in

Gregory K. McMillan captures the wisdom of talented leaders in process control, and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams, and (web-only) Top 10 lists. Find more of Greg’s conceptual and principle-based knowledge in his Control Talk blog. Greg welcomes comments and column suggestions at ControlTalk@endeavorb2b.com
Final control element problems create process variability that corrupts data
GREG: This is the fourth in a series of discussions about data integrity for process control and industrial automation with Mike Glass, founder of Orion Technical Solutions (oriontechnical.com), which specializes in hands-on training and assessments of I&E personnel.
MIKE: This topic doesn’t get nearly enough attention when people discuss data integrity. Most engineers focus heavily on measurement, obsess over transmitter accuracy specifications, worry about sensor drift, and invest time and resources in testing the calibrations of transmitters that really don’t drift much. They often treat output as a perfect actuator that instantly and precisely executes whatever the controller commands.
Output device problems corrupt data indirectly by creating real process variability that shows up in measurements.
GREG: Can you walk us through how that happens in practice?
MIKE: Consider a scenario I encounter regularly. An operator notices a small packing leak on a control valve, and tightens the packing to stop it. But they overtighten it, dramatically increasing friction on the valve stem.
Now you have stiction—static friction that the actuator must overcome before the valve moves. The valve only responds when actuator pressure builds enough to overcome those frictional forces, resulting in jerky, stick-slip movement, rather than smooth positioning. Here’s where the data integrity problem emerges. The positioner is essentially running its own PID control loop to maintain valve position at the commanded setpoint. With high stiction, it struggles to position accurately. Even if the process itself were inherently stable, the positioner’s constant hunting creates real variations in flow, pressure or whatever the valve controls.
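Mike’s stick-slip description is easy to reproduce numerically. The minimal Python sketch below is illustrative only; the stiction band, gain and setpoint are invented numbers, not data from any real valve or positioner. Integral action builds actuator effort until it exceeds the stiction band, so the stem sticks, then jumps, and ends up hunting around the setpoint exactly as described:

```python
# Minimal stick-slip sketch: the positioner's integral action must build
# enough actuator effort to break the stem free of static friction. The
# stiction band, gain and setpoint are invented, illustrative numbers.

def simulate_stiction(setpoint, stiction_band, ki=0.2, steps=200):
    """Return the valve-position trajectory under integral-only positioning."""
    position = 0.0
    effort = 0.0                    # integrated actuator effort
    trajectory = []
    for _ in range(steps):
        error = setpoint - position
        effort += ki * error        # integral action builds pressure
        if abs(effort) > stiction_band:
            position += effort      # breakaway: the stem jumps, not glides
            effort = 0.0            # effort relaxes after the slip
        trajectory.append(position)
    return trajectory

traj = simulate_stiction(setpoint=50.0, stiction_band=5.0)
```

Plotting the returned trajectory shows a jerky staircase that never settles: the stem sticks for several samples, then slips past the setpoint, producing a sustained limit cycle around 50%. That oscillation is the real process variability that later shows up in the measurement data.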
MIKE: One of the most overlooked aspects of control valve behavior is that the positioner is itself a control loop. It has a setpoint (the command signal from your controller), a process variable (valve position feedback), and a controller algorithm driving the error to zero.
GREG: Unfortunately, many engineers don’t think about it that way. They treat positioner, actuator and valve as a single block in their control loop analysis.
MIKE: That creates problems because all the cascade control rules apply. The inner loop (the positioner controlling valve position) must be significantly faster than the outer loop. Classic cascade theory says three to five times faster. When that relationship breaks down, valve dynamics limit overall control performance rather than process dynamics. You might have a process responding in two seconds, but if your valve takes four seconds to reach position, you’re no longer controlling the process; you’re being controlled by the valve. The data signature is a characteristic “sluggish, then overshoot” pattern. Someone analyzing this data might conclude the process has strange dynamics, when it’s simply valve response limiting performance.
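The speed-ratio rule can be demonstrated with a toy simulation. In this hedged Python sketch, all gains and time constants are illustrative assumptions, not values from any real loop: the same PI tuning drives the same first-order process, once through a valve ten times faster than the process and once through a valve slower than the process.

```python
# Toy cascade-speed experiment: one PI tuning, one first-order process,
# two different valve speeds. All gains and time constants are invented
# for illustration; they aren't from any real loop.

def closed_loop_step(tau_valve, tau_process=2.0, kc=1.0, ti=2.0,
                     dt=0.01, t_end=60.0):
    """Euler-integrate PI -> valve lag -> process lag for a setpoint step."""
    pv = 0.0        # process variable
    valve = 0.0     # actual valve position
    integral = 0.0  # PI integral state
    setpoint = 1.0
    trace = []
    for _ in range(int(t_end / dt)):
        error = setpoint - pv
        integral += error * dt
        u = kc * (error + integral / ti)         # controller's commanded output
        valve += (u - valve) * dt / tau_valve    # valve lags behind the command
        pv += (valve - pv) * dt / tau_process    # process lags behind the valve
        trace.append(pv)
    return trace

fast = closed_loop_step(tau_valve=0.2)  # valve 10x faster than the process
slow = closed_loop_step(tau_valve=4.0)  # valve slower than the process
```

With the fast valve, the response settles smoothly with essentially no overshoot. With the slow valve, the identical tuning produces the sluggish-then-overshoot signature, even though nothing about the process or the controller changed.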
GREG: Let’s discuss resolution and lost motion in valves, and how they affect performance and data integrity.
MIKE: Resolution is the smallest command change that actually moves the valve. If I’m at 50% position and the command moves to 50.5%, the valve might not budge until the command reaches 51% or higher. That creates a dead zone where the process drifts uncontrolled.
GREG: Even small amounts of resolution create significant limit cycles.
MIKE: Here’s the data integrity problem: those limit cycles look like process oscillations. Someone analyzing the data might conclude you have a tuning problem, input noise or process interaction, when you actually have a mechanical problem with the final control element. Lost motion is related but different. It’s the difference in valve position between increasing and decreasing signals at the same command value.
MIKE: This creates data inconsistencies that confuse analytics algorithms.
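Both mechanisms are simple to model. In this Python sketch, the 1% resolution band and 2% lost motion are invented, illustrative values: the resolution band swallows small command changes, and backlash makes actual travel depend on direction.

```python
# Minimal models of the two mechanisms above, with invented parameters
# (a 1% resolution band, 2% lost motion); neither is from a real valve.

class ResolutionValve:
    """Stem ignores command changes smaller than the resolution band."""
    def __init__(self, resolution):
        self.resolution = resolution
        self.position = 0.0

    def command(self, target):
        # Move only when the requested change exceeds the band.
        if abs(target - self.position) > self.resolution:
            self.position = target
        return self.position

class BacklashValve:
    """Lost motion: slack absorbs the first part of any direction reversal."""
    def __init__(self, lost_motion):
        self.half = lost_motion / 2.0
        self.position = 0.0

    def command(self, target):
        if target - self.half > self.position:
            self.position = target - self.half   # pushing up through the slack
        elif target + self.half < self.position:
            self.position = target + self.half   # pulling down through the slack
        return self.position

res = ResolutionValve(resolution=1.0)
res.command(50.0)                # large move: stem follows
pos_small = res.command(50.5)    # 0.5% change sits inside the dead zone

bl = BacklashValve(lost_motion=2.0)
up = bl.command(50.0)            # approach 50% from below -> 49% actual
bl.command(60.0)
down = bl.command(50.0)          # approach 50% from above -> 51% actual
```

The backlash model returns 49% travel when 50% is commanded from below, and 51% when the same 50% is commanded from above: one command value, two different positions, which is exactly the inconsistency that confuses analytics.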
GREG: What are other areas that most sites miss when it comes to control valve operation?
MIKE: A huge one is the instrument air supply. Poor air quality causes control valve problems that degrade data quality. The positioner contains very small orifices in its I/P transducer, and any contamination migrates to these orifices and partially or completely blocks them. These problems also produce nonlinearities and erratic, inconsistent behavior. If you aren’t treating instrument air as critically as your safety systems, you’re missing the big picture, because air supply problems ripple across dozens or hundreds of loops.
GREG: Everything we’ve discussed makes a strong case for testing and validating control valve performance, rather than assuming it’s adequate.
MIKE: Many plants never test control valve response. They install them based on sizing calculations, commission by verifying open and close, and then assume adequate performance.
GREG: I’ve personally witnessed a very serious decline in the performance of control valves. It can largely be traced back to the lack of any response requirement in valve specifications, and the desire to minimize cost and maximize flow and tight shutoff, which leads to on-off valves posing as throttling valves. Smart positioners can’t fix dumb valves, because the valve needs a body and actuator designed for throttling. Consequently, users are often lied to by feedback from positioner linkages, not realizing what’s going on with the actual stem and internal closure element.
I addressed the problems and solutions in ANSI/ISA-TR75.25.02-2024, “Annex A–Valve response and control loop performance–Sources, consequences, fixes and specifications.” In this annex, I give equations to predict limit cycles caused by stiction (officially termed resolution) and backlash (officially termed lost motion) in control systems with more than one and more than two integrators, respectively. The integrators can originate from integral action in positioners, controllers and processes. I also detail the loss in rangeability due to resolution that’s typically worse near the closed position, valve capacity oversizing, and severe deterioration of installed flow characteristic caused by decreased valve-to-system pressure drop ratio in a misguided attempt to reduce energy use.
There are also a lot of mistakes made in variable frequency drive (VFD) inverter and controller design and implementation, which lead to poor response and incredibly bad resolution. Most notable are the severe loss in rangeability caused by high static head, and a lack of understanding of the need to go solely to torque control, which omits speed control. The extensive results of studies by Peter Morgan, concluding with a list of good practices by me, are presented in the article, “Centrifugal pump control: implications of high-static head/system pressure in VFD applications” (controlglobal.com/centrifugalpumpcontrol).
MIKE: The observation that digital positioners can generate misleading diagnostics because they measure actuator shaft position, rather than actual closure member movement, is a data-integrity problem hiding in plain sight.
James Beall’s finding that more than 30% of loop variability traces back to valve response deficiencies puts a number on something many of us have long suspected, but couldn’t quantify.

To read the full version of this column and to see the top 10 lessons learned from valve response to improve spouse response, visit controlglobal.com
Self-examination sets the baloney filter just right

JIM MONTAGUE Executive Editor jmontague@endeavorb2b.com
“Adjusting the holes in your sieve or net for what you genuinely require is the only way to separate the few useful nuggets from all the garbage.”
IF WE MOSTLY focus on process automation and control, we can miss that other fields evolve in eerily parallel ways. For instance, sometime in the mid-1960s, our wooden blocks, Lincoln Logs and Tinker Toys were joined by plain, cripplingly hard, plastic Lego bricks. These were quickly joined by gears, little turntables, and wheels with rubber tires, including some that could be attached to slightly larger bricks with battery-powered motors.
In later years, Lego added Bionicle action figures, programmable Mindstorms kits, and rigidly designed, co-branded, movie tie-in sets that, sadly in my opinion, forced kids to assemble exactly what was pictured, instead of building whatever they could think of. Most recently, these longstanding items were joined by Lego’s new Smart bricks, which are reported to be even more programmable with even more features. Sound familiar?
Hopefully, some coloring outside the lines is still possible, but I worry it’s not, and that even thinking about it may be discouraged because it doesn’t generate revenue. Beyond being free, the underlying advantage of cardboard boxes and sticks is their simplicity and abstractness make them blank slates that draw out the imagination, and ask creativity to fill in the details of whatever game we’re playing or world we’re building.
No flashy, brand-name distractions, simulation or reproduced realities can do this, even if they advertise that they can. In fact, that’s why they have to be flashy in the first place.
Likewise, there’s a generation-gap-related, information technology/operations technology (IT/OT) convergence problem that can make regular hucksterism even worse. It’s illustrated by the potential horror of taking elderly relatives to the Apple store, and trying to mediate between them and the typically young and always super-excited salespeople.
Older consumers, who often overlap with older engineers, just want basic tools for basic
tasks. The young staffers want to deluge them with their universe of endless features! I know many are trying to be helpful, and aren’t always upselling, but it can be hard to gauge the difference. This is why Tom Waits’ “Step right up” (www.youtube.com/watch?v=A2_snSkpULQ) remains so refreshing.
Given how this history unfolded, it’s no accident we ended up at the way-above-flood-stage ocean of artificial intelligence (AI) and digitalization solutions. I did the best I could to report on and summarize all the new digitalized and AI-related lingo and software in the “New virtualization, AI and others toolbox” sidebar (p. 26) in this issue’s “Free to move” cover story. However, as usual, more are popping up all the time, like smiling and sadly indestructible Whack-a-Mole heads.
The only defense against all this crap is ignoring and discarding 99.99% of it, and then triaging, prioritizing and filtering out the few useful grains. Not surprisingly, going out and investigating helpful solutions is always more fruitful than waiting for them to arrive from “helpful” partners who aren’t.
This is why it’s more crucial than ever to know your applications, and the requirements of your processes and people. You must know what you’ve got, what you know you need, and what problems you must solve, even if you’re unsure about some and perhaps clueless about others. This includes learning enough about the present state of your processes, equipment, personnel, software, networks, facilities, support services and infrastructure, management, enterprise and supply chains, as well as how they’re likely to evolve.
For even well-informed engineers and operators, there are doubtless some gaps in their processes that it would help to look into and fill. As far as I can tell, adjusting the holes in your filter, sieve or net for what you genuinely require is the only way to separate the few useful nuggets from all the garbage.






























































































