The Industrial Internet of Things is using whatever network gets its data where it wants to go on time
Refinery digitalizes SIS management
Collaboration keys desert water treatment expansion
Real-time optimization realities explained
Always with You, Yesterday, Today, and Tomorrow
CENTUM, the world’s first distributed control system, has continued to evolve as a core monitoring and control system that delivers reliability, stability, and compatibility while driving productivity improvements in plants worldwide. Looking ahead, Yokogawa remains committed to preserving this legacy while pursuing sustainability and engaging in continuous innovation as we deliver cutting-edge technologies for the industries of tomorrow.
We express our heartfelt gratitude for the trust and support that our customers have shown us over the years. Together, we will strive to play an active role in creating a more prosperous and sustainable society.
Process improvement is like a trapeze act. You need a trusted partner who lends a hand at the right moment.
Just as athletes rely on their teammates, we know that partnering with our customers brings the same level of support and dependability in the area of manufacturing productivity. Together, we can overcome challenges and achieve a shared goal, optimizing processes with regard to economic efficiency, safety and environmental protection. Let’s improve together.
COVER STORY
IIoT tries new roles
The Industrial Internet of Things is using whatever network gets its data where it wants to go on time by Jim Montague
SAFETY INSTRUMENTED SYSTEMS
Phillips 66 digitalizes SIS management
Safety lifecycle software aggregates multiple data sources by Angela Summers
WATER/WASTEWATER
Inland Empire wastewater plant multitasks
Standardizes platforms for membrane-bioreactors and solids treatment—but maintains 10-hour daily staffing by Jim Montague
CONTROL (USPS 4853, ISSN 1049-5541) is published 8x annually by Endeavor Business Media, LLC. 201 N. Main Street, Fifth Floor, Fort Atkinson, WI 53538. Periodicals postage paid at Fort Atkinson, WI, and additional mailing offices. POSTMASTER: Send address changes to CONTROL, PO Box 3257, Northbrook, IL 60065-3257. SUBSCRIPTIONS: Publisher reserves the right to reject non-qualified subscriptions. Subscription prices: U.S. ($120 per year); Canada/Mexico ($250 per year); All other countries ($250 per year). All subscriptions are payable in U.S. funds.
Illustration: Derek Chamberlain / Shutterstock AI
Cutting through AI hype
Know the technology before you jump in
DCS celebrates 50 years
Distributed control systems continue to advance with open systems and other developments
Danger, Will Robinson!
Will human operators be able to accept AI assistance?
IIoT connection considerations
How to avoid cybersecurity risks when incorporating IIoT into your network
Coriolis flowmeters: the early days (1975-2010)
The history and evolution of Coriolis flowmeters is a fascinating tale involving the contributions of 40 companies
Accessories elevate oxygen analyzer
Rosemount CX2100 in situ oxygen analyzer customizes to meet customers' unique requirements
Is your SCADA history ready to go to work?
Modern SCADA systems must ensure data is safe, relevant and easily shareable
IN PROCESS
Emerson, AspenTech unify and simplify
Chart and Flowserve merge to differentiate; 95% of manufacturers investing in AI: Rockwell; ABB motor sets new world record for energy efficiency
From obsolete to autonomous with open automation
How open automation can help lead industry into a more secure future
Gimme that old time flow
Control's monthly resources guide
Identifying orifice tap locations
Orifice-based flowmeters are popular but their basics are still very much misunderstood
Face up to flexible interfaces
Touchscreens, industrial PCs and HMI software gain new sizes and shapes
CONTROL TALK
Smart manufacturing for process plant excellence—part 2
Real-time optimization is often misunderstood. So, here are the basics
IIoT isn't
A network by any name can fly right with UNS and PA-DIM
Jennifer George, jgeorge@endeavorb2b.com
PUBLISHING TEAM
VP/Market Leader - Engineering Design & Automation Group
Keith Larson
630-625-1129, klarson@endeavorb2b.com
Group Sales Director
Mitch Brian
208-521-5050, mbrian@endeavorb2b.com
Account Manager
Greg Zamin
704-256-5433, gzamin@endeavorb2b.com
Account Manager
Jeff Mylin
847-533-9789, jmylin@endeavorb2b.com
Subscriptions
Local: 847-559-7598
Toll free: 877-382-9187
Control@omeda.com
Jesse H. Neal Award Winner & Three-Time Finalist
Two-Time ASBPE Magazine of the Year Finalist
Dozens of ASBPE Excellence in Graphics and Editorial Excellence Awards
Four-Time Winner, Ozzie Awards for Graphics Excellence
Cutting through AI hype
Know the technology before you jump in
LET’S be clear: artificial intelligence (AI) should never be confused with human thought, but that’s no reason to dismiss it. Despite fears of surrendering control of our businesses, and our lives for that matter, to a dystopian future dominated by a rise of the robots, AI offers realistic benefits that can enhance industrial operations. We just need to understand how to make it work for us, and there’s the rub.
I recently participated in a series of video webinars with colleagues, where we sought, albeit in broad strokes, to break through the confusion about AI in process automation applications. For starters, not all AI is created equal, so we went beyond the buzz (controlglobal.com/AIbuzz) and examined the different types of AI technology, from generative AI to neural networks. We looked at how they're used for vastly different applications in the industrial sector.
We also weighed the safety concerns and risks (controlglobal.com/AIsafety) of integrating AI into process automation. Finally, we took a closer look at the pitfalls and promises of using AI (controlglobal.com/AIpitfalls) in control systems.
We talk a lot about AI these days, but do we really understand it? We're not the only ones trying to wrap our collective heads around it. Recent industry conferences show that AI is at the top of process automation professionals’ minds in many different sectors, and with good reason. The adoption of AI technology can change the paradigm of how control systems function, and how workforces in plants and factories are managed. There’s potential for increasing efficiency and safety. There’s even a chance to make a bigger difference in sustainable operations, if we can offset its expanded energy requirements. Some people even say AI is making operational technology "cool" again for up-and-coming generations of technicians.
Because there's a lot to grasp and understand, there's also confusion and consternation, both of which keep the introduction of AI into control systems under tight watch. But that’s not stopping it from gaining momentum.
So, how do you use, or plan to use, AI in your control systems? How should you use it? I invite you to take our most recent quiz at controlglobal.com/AIquiz, and you just may learn a thing or two about AI that you weren’t expecting.
LEN VERMILLION Editor-in-Chief lvermillion@endeavorb2b.com
“Because there's a lot to grasp and understand, there's also confusion and consternation, both of which keep the introduction of AI into control systems under tight watch.”
DCS celebrates 50 years
Distributed control systems are well-established and continue to advance, with open systems and other developments
“OPA accelerates data utilization and IT/OT convergence by standardizing interfaces among all system components.”
YOKOGAWA introduced the distributed control system (DCS) to the world 50 years ago, and continues its history of innovation with its most recent Open Process Automation (OPA) product line release and project implementation. Even as traditional DCS platforms continue to evolve, progress with open standards offers new opportunities.
Using its skills as the systems integrator (SI) for ExxonMobil’s Lighthouse Project, Yokogawa implemented its Open Automation SI Kit with third-party hardware, third-party control software and other non-Yokogawa software applications to prove the OPA concepts defined by the Open Process Automation Standard (O-PAS).
The system included implementation of OPA’s secure-by-design concepts, and secure onboarding of the devices. The recent article, “Plug-and-play punches in” (controlglobal.com/plug-and-play-punches), describes how the project uses O-PAS’ plug-and-play capabilities to integrate SI kit software with hardware components from various vendors. The resulting automation system controls about 100 loops and 1,000 I/O (Figure 1).
Per the article, “The system was fully powered up and finished hot cutover on Nov. 8, 2024. A cold cutover was completed on Nov. 17, 2024, and the commercialized, OPA-guided application began making product and generating revenue for ExxonMobil the next day.”
The Lighthouse project implemented Version 2.1 of O-PAS, with the factory acceptance test (FAT) completed in August 2024. This extended FAT process, which included rigorous testing under real-world scenarios, proved the control system was safe and reliable—critical for a facility that must operate continuously without unplanned downtime.
Under the hood with OPA
OPA aims to achieve the interoperability and portability of systems by standardizing communication methods and information models. Improved security is also a key benefit because OPA takes a secure-by-design approach, with OPA components supporting the IEC 62443-4-2 security standard’s Level 2 capabilities.
OPA accelerates data use and IT/OT convergence by standardizing interfaces among all system components. Furthermore, it provides a pathway to modernize existing systems by extending the OPA system to legacy equipment through gateways. For new, grassroots projects, OPA lets customers select fit-for-purpose and best-in-class system components, accelerates automation of system configurations, and provides services that save money and time.
The O-PAS standard and the application of OPA technologies in process automation are still developing, with early projects providing great feedback to the Open Process Automation Forum (OPAF). The market, implementation and certification of products are maturing, but at this stage, it’s important to proceed carefully.
Pick your OPA partner carefully
While some automation vendors enthusiastically support and participate in OPA (controlglobal.com/OPAF-keeps-plugging), others remain on the sidelines, offering only traditional DCS solutions. We believe end users are best served by suppliers that provide options, so they can choose the path that best fits their organizational and business needs. For some end users, this means continuing to use tried-and-true DCS solutions to take advantage of the comprehensive, worldwide supplier support commonly provided for DCS-based automation systems.
Other end users see value in a hybrid approach, choosing DCS solutions in some areas, while implementing automation systems based on OPA for other projects. OPA automation systems aim to provide much lower
hardware and software costs up front and throughout system lifecycles because end users can mix and match components from many different vendors. Through extensive use of OPC UA and other open standards, connectivity to all types of components and software systems internal and external to automation systems is greatly improved with OPA.
While hardware costs are lower, multiple software and licensing options require more systems integration than is currently required for a DCS. While the goal of OPA is to have these components work together in a plug-and-play fashion, it usually takes time for this level of interoperability to reach its full potential. Some end users may decide to perform the required integration internally, while others may choose to work with an experienced SI.
To maximize the potential benefits of OPA, end users may need to learn a more IT-based approach to automation (controlglobal.com/interoperabilitytricks) in addition to their OT skillsets. Because OPC UA communications are integral to OPA, proficiency with it is also critical. End users should be aware that some automation vendors have fully bought into OPC UA, while others are taking a more cautious approach.
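Since the article stresses that proficiency with OPC UA is critical, here is a minimal sketch of reading one value from an OPC UA server using the open-source python-opcua package. The endpoint URL and node ID below are hypothetical placeholders, not details of any vendor's O-PAS component.

```python
# Minimal OPC UA read, using the open-source python-opcua package
# (pip install opcua). Endpoint and node ID below are hypothetical.
from opcua import Client

client = Client("opc.tcp://192.168.1.50:4840")  # hypothetical OPA component endpoint
try:
    client.connect()
    # Node ID is an example; real IDs come from the server's address space
    temperature = client.get_node("ns=2;s=Reactor1.Outlet.Temperature")
    print("Outlet temperature:", temperature.get_value())
finally:
    client.disconnect()
```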
This makes it important for end users to carefully evaluate the SIs they partner with for OPA projects because the chosen company must understand all aspects of OPA component integration. This includes the specifications, security and characteristics of each device and software solution, as well as the supervisory software systems that tie everything together. Prior experience with OPA projects is critical when selecting a partner for these open-automation systems.
Open systems will advance the journey from industrial automation to industrial autonomy (IA2IA), as will advances with more traditional DCSs.
DCS advances drive IA2IA
As described in a recent Control magazine interview (controlglobal.com/YokogawaIA2IA) and summarized here, the journey from IA2IA continues with the latest iterations of the DCS.
Specifically, the modern DCS is evolving in three main areas:
• Operability improvement with laborsaving measures, such as effective alarm management, and other advanced measures, such as ergonomic engineering;
• Connectivity improvement with expanded project execution scopes, such as added capabilities for communication among subsystems; and
• Engineering efficiency improvement, resulting in reduced engineering costs, plus more timely and cost-effective project execution.
The DCS itself is also evolving based on IA2IA concepts, with three notable milestones in this area over the past few years. First, the autonomous-control AI protocol, Factorial Kernel Dynamic Policy Programming (FKDPP), was adopted for use at an ENEOS Materials Corp. chemical plant. Second, IT/OT convergence has been realized with the introduction of our information server platform, which created an integrated operations environment that covers an entire plant or even multiple plants by connecting with our own and third-party control systems. The third milestone is using single-user-interface operation for different robots from multiple suppliers, and coordinating DCS operation with robots based on the aforementioned information server platform. We envision a future where distributed controls and robots work together.
Digital twins will also play a key role in IA2IA in three main ways. First, digital twins can supplement information that can’t be measured in the physical plant. This includes soft sensors that don’t perform physical measurements, but can predict them based on other measured or calculated parameters. Second, the ability to generate future, digital-twin information from a timeseries perspective makes it possible to act in advance against a future that may occur based on extrapolating current parameters into the future. Third, the scope of the digital twin will expand from unit-level processes and asset twins to plant-wide twins, and even to twins covering entire supply chains.
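As a concrete illustration of the soft-sensor idea described above, the sketch below fits a simple linear model that predicts an unmeasured quality variable from routinely measured process variables. The variable names and data are invented for illustration; a real digital twin would use richer physics-based or machine-learned models.

```python
# Toy soft sensor: predict an unmeasured quality variable from measured
# process variables with ordinary least squares. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Measured inputs: e.g., feed temperature, pressure, reflux ratio (synthetic)
X = rng.normal(size=(200, 3))
# Lab-analyzed purity, known only from occasional samples
y = 2.0 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.1, 0.05, 200)

# Fit coefficients on historical lab samples
A = np.column_stack([X, np.ones(len(X))])   # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Online: estimate purity continuously from live measurements
live = np.array([0.8, -0.3, 1.1, 1.0])      # last element pairs with intercept
print("Predicted purity:", live @ coef)
```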
The DCS is evolving with innovations to traditional platforms, and with OPA, we’ll continue to support end users on their organization's path.
Mitsuhiro Yamamoto is Yokogawa Electric’s VP and executive officer, and head of its systems business division.
Figure 1: This O-PAS aligned system is up and running
JOHN REZABEK Contributing Editor
JRezabek@ashland.com
“Will performance diminish when they become legion?”
Danger, Will Robinson!
Will human operators be able to accept AI assistance?
IF you recall a time with only three broadcast TV channels and a rotary dial, the phrase “Danger, Will Robinson!” might be familiar. Many of us who grew up in the 1960s may have imagined a future like the one depicted in the then-contemporaneous, broadcast television series Lost in Space. Surely by Y2K or soon thereafter, intrepid scientists would be loading up their families on a craft less reliable than the family station wagon.
After crash-landing on uncharted, and often hostile, planets, Robinson is frequently accompanied and advised by “Robot,” a robotic friend/babysitter. His pal is dependable and useful until it’s hacked by the nefarious Dr. Smith, whose bad intentions somehow remained invisible to the other characters.
A few years after the TV show was cancelled, the Three Mile Island nuclear accident happened. Its modest impact was exacerbated by the film, The China Syndrome, which was followed a few years later by the 1986 nuclear disaster at Chernobyl. Both accidents were attributed, at least in part, to human operators confounded by their human-machine interfaces (HMI), misinterpreting them, and disabling automation systems that may have prevented those accidents.
Present-day artificial intelligence (AI) capabilities, and anticipated developments, might compel some to think we’ll have a robot to be our friend and babysitter, or replace us.
After downloading Grok 3 Beta to my phone, I decided to try it. I asked Grok if it knew what a magnetic flowmeter is. Without delay, it produced an accurate and comprehensive answer, even explaining its principle of operation (based on Faraday’s Law of electromagnetic induction), key features (suitable for conductive fluids, high turndown, etc.), and limitations.
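For reference, the relationship Grok recited is the standard textbook one: by Faraday's Law, the voltage induced across a magmeter's electrodes is proportional to the magnetic flux density, the electrode spacing (roughly the pipe diameter), and the average fluid velocity,

$$ E = k\,B\,D\,\bar{v}, \qquad Q = \bar{v}\,A $$

where $k$ is a meter constant and $A$ is the pipe's cross-sectional area, so the measured voltage converts directly to volumetric flow $Q$.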
I gave Grok a troubleshooting question: flow indication was incorrect/low. What could be wrong? Grok suggested plausible causes and remedies—again instantly. It answered with 10 probable causes, as well as what to check for and next steps. Among its suggestions were improper installation, fluid conductivity issues, entrainment issues, fouling or coating, electromagnetic interference (EMI) and calibration.
I tried some process questions. Why is my steam turbine speed oscillating? Again instantly, the AI engine produced four categories of potential issues (control system issues, mechanical issues, process conditions and system interactions). Each category had two or three more specific possibilities, which included what I already knew to be the problem (deadband and hysteresis).
In each case, Grok offered in-depth analysis and next steps to troubleshoot the issue, and asked if I had more details to share. I typed specifics on the turbine inlet valve, actuator manufacturer and oscillation frequency. I received five pages of advice, such as eliminating high-frequency mechanical vibrations and focusing on the control system.
What isn’t obvious is how many Three Mile Island power plants had to fire up to produce these results, or how many concurrent end users were creating cartoons and manipulating images for their amusement. Will performance diminish when they become legion? It isn’t a huge stretch to imagine a mobile platform—perhaps one of the anthropomorphic robots produced by the same company—becoming certified for hazardous atmospheres, and following operators around the plant on rounds. Their sophisticated sensors would hear frequencies amid the cacophony for which we wear hearing protection, and their “eyes” would see UV and IR. They could work hours in extreme cold or heat. They could prioritize safety issues, warning their human companions of dangerous noise, heat, voltage or a restricted tube in a process heater that could burst into flame.
Remote wireless devices connected to the Industrial Internet of Things (IIoT) run on Tadiran bobbin-type LiSOCl2 batteries.
Our batteries offer a winning combination: a patented hybrid layer capacitor (HLC) that delivers the high pulses required for two-way wireless communications; the widest temperature range of all; and the lowest self-discharge rate (0.7% per year), enabling our cells to last up to 4 times longer than the competition.
Looking to have your remote wireless device complete a 40-year marathon? Then team up with Tadiran batteries that last a lifetime.
IAN VERHAPPEN
Solutions Architect
Willowglen Systems
Ian.Verhappen@ willowglensystems.com
“IIoT prioritizes security, reliability and real-time control in the operations technology (OT) domain.”
IIoT connection considerations
How to avoid cybersecurity risks when incorporating IIoT into your network
THE Internet of Things (IoT) and the Industrial Internet of Things (IIoT) are an important part of facility operations, particularly asset management. However, incorporating IIoT technologies into your network risks expanding your cybersecurity attack surface, especially if they’re added inside the control environment or outside the operations technology (OT) domain.
The default data path for most IoT sensors and actuators is through the cloud—greater than 80%. It’s the opposite when it comes to IIoT devices, where only around 20% are cloud-connected—ironically, this is the Pareto Principle ratio. However, the number of IIoT applications and use cases continues to grow. As understanding about how to make effective use of this quasi-real-time data evolves, the number of legacy systems with limited ability to support IIoT data connections will decline, and overall adoption of the technology will continue to grow.
IIoT prioritizes security, reliability and real-time control in the OT domain. Cloud connectivity introduces unpredictable latency, which is unacceptable for real-time control loops. This isn't just between the sensor and the cloud platform, but also for getting cloud data to OT environments. The typical Purdue model requires transitioning every message through the IT domain, DMZ and OT control layer. There’s a minimum of three layers, and each associated cybersecurity device adds a small lag, along with the potential for misconfiguration.
This lag story played out when wireless sensor networks (WSN) were introduced, and no one was comfortable incorporating them into regulatory control. Some smart folks figured out how to compensate for it in the PID algorithms and tuning, so industry now uses WSN control loops.
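The column doesn't name the algorithm, but the widely described approach to aperiodic wireless measurements updates the controller's integral and derivative contributions only when a fresh value arrives, scaled by the elapsed time since the last update. Here's a minimal sketch of that idea; the class, tuning values and timestamps are invented for illustration, not any vendor's implementation.

```python
# Sketch of a PID variant tolerant of slow, aperiodic wireless updates:
# integral/derivative terms advance only when a new measurement arrives,
# scaled by the time elapsed since the previous one. Tuning is invented.
class WirelessPID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.last_pv = None
        self.last_time = None
        self.output = 0.0

    def on_new_measurement(self, pv, timestamp):
        if self.last_time is not None:
            dt = timestamp - self.last_time          # seconds since last update
            error = self.setpoint - pv
            self.integral += self.ki * error * dt    # integrate over actual gap
            derivative = -self.kd * (pv - self.last_pv) / dt
            self.output = self.kp * error + self.integral + derivative
        self.last_pv, self.last_time = pv, timestamp
        return self.output                           # output held until next update

pid = WirelessPID(kp=0.8, ki=0.05, kd=0.0, setpoint=75.0)
print(pid.on_new_measurement(73.2, timestamp=0.0))
print(pid.on_new_measurement(74.1, timestamp=16.0))  # 16 s wireless gap
```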
Another way to address the lag issue is for edge computing devices or industrial gateways to act as a secondary layer for analytics and optimization, pre-processing data over a secure network architecture before sending it to the cloud platform. These devices perform critical functions, such as decision-making, data aggregation and filtering. They also convert typical protocols, such as Modbus, to lightweight, low-overhead IoT protocols, such as MQTT (with TLS), AMQP or secure HTTP. Combined with local analytics, this ensures only necessary data is sent securely to the cloud. This process also reduces bandwidth, enhances security, and supports local decision-making.
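As a rough illustration of that gateway pattern, the sketch below polls a local value (stubbed out here; a real gateway would use a Modbus library such as pymodbus), applies a simple report-by-exception filter, and publishes over MQTT with TLS via the paho-mqtt client. The broker address, topic, deadband and poll period are hypothetical.

```python
# Edge-gateway sketch: poll a local register, filter, publish via MQTT/TLS.
# Broker, topic and deadband are hypothetical; the Modbus read is stubbed.
import json, time
import paho.mqtt.client as mqtt

def read_pump_pressure():
    """Stand-in for a real Modbus read (e.g., via pymodbus)."""
    return 412.7  # kPa, hypothetical

client = mqtt.Client()                  # paho-mqtt 1.x style constructor;
                                        # 2.x adds a callback-API argument
client.tls_set()                        # default system CA certificates
client.connect("broker.example.com", 8883)

DEADBAND = 2.0                          # kPa; suppress insignificant changes
last_sent = None
while True:
    value = read_pump_pressure()
    if last_sent is None or abs(value - last_sent) > DEADBAND:
        payload = json.dumps({"tag": "pump1/pressure_kpa",
                              "value": value, "ts": time.time()})
        client.publish("plant/area1/pump1", payload)
        last_sent = value
    time.sleep(5)                       # poll period; tune to the process
```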
Edge devices can also act on change of state as part of their pre-processing, especially when images identify a change significant enough to warrant sending the data to the cloud. In the consumer IoT world, a home security system does this analysis when it turns on the yard light, and alerts you and your security company whenever something walks in front of the detector. In the process industries, similar image-analysis tools detect liquid-to-liquid interfaces; in factory automation, they align objects.
Because of the widely distributed nature of IIoT devices, including edge computing platforms, device security must include:
• Device visibility and an inventory of every connected device on the network, to keep unauthorized devices off it, which is fundamental for managing risk (a minimal inventory check is sketched after this list);
• Vulnerability management to regularly identify and address known vulnerabilities, such as firmware and software updates for registered IIoT devices;
• Changing default passwords and implementing strong credential management; and
• Protecting IIoT devices and infrastructure from physical tampering or theft in locations that are only protected by the enclosure, which can be opened with a multitool.
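A device-inventory check of the sort the first bullet describes can start very simply: compare what's actually on the network against an approved register and flag the difference. The addresses and descriptions below are invented examples.

```python
# Minimal device-inventory check: flag anything on the network that isn't
# in the approved register. Addresses here are invented examples.
APPROVED = {
    "00:1a:2b:3c:4d:5e": "Flow transmitter, Unit 1",
    "00:1a:2b:3c:4d:5f": "Edge gateway, Unit 1",
}

def audit(discovered_macs):
    """Return unknown devices that should be investigated or blocked."""
    return sorted(set(discovered_macs) - set(APPROVED))

# e.g., fed by an ARP scan or a switch's port table
print(audit(["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]))
```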
Coriolis flowmeters: the early days (1975-2010)
The history and evolution of Coriolis flowmeters is a fascinating tale involving the contributions of 40 companies
WHILE most people associate the beginning of Coriolis flowmeters with Jim Smith and Micro Motion, there were several patents filed in the 1950s and 1960s that laid the foundation for Smith’s pioneering work. A patent filed in 1958 on behalf of American Radiator & Standard Sanitary Corp. appears to be the earliest patent that mentions the “Coriolis force.” The flowmeter is described as:
“The present invention relates to instruments for measuring the mass rate of flow of fluids and to an improved flowmeter of the type in which mass flow rate is made responsive to Coriolis force … In instruments of the class described, the fluid to be measured is subjected to tangential acceleration in a whirling tube, or impeller, the torque exerted on the tube in reaction to the Coriolis force of the accelerated fluid being measured as an indication of the mass flow rate.”
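The physics behind this and every later design is the standard Coriolis relation: a fluid element of mass $dm$ moving at velocity $\vec{v}$ in a tube rotating (or vibrating) at angular velocity $\vec{\omega}$ experiences

$$ \vec{F}_c = -2\,dm\,(\vec{\omega}\times\vec{v}) $$

Integrating along a tube of length $L$ carrying mass flow rate $\dot{m}$ gives a net reaction force of magnitude $F = 2\,\omega\,\dot{m}\,L$, which is why the measured force, or the resulting tube twist, indicates mass flow rate directly.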
In May 1960, Yao Tzu Li patented an invention called “mass flowmeter” that involved rotating the flow:
“The present invention operates by causing the fluid to be rotated as it flows radially outward from an axis. This produces a Coriolis acceleration in the fluid, and therefore a Coriolis force is applied by the fluid to the member through which the fluid flows. This force is measured, and the mass rate flow of the fluid is obtained.”
Interestingly, Yao Tzu Li cites Ernest F. Fisher, who filed a patent in 1917.
In August 1978, Smith patented a “Balanced mass-moment balance beam with electrically conductive pivots,” which was filed in June 1975. Beginning that August, Smith patented a series of devices that became the basis for the flowmeters produced by Micro Motion, which he founded in his garage in 1977. These patents explicitly evoke the Coriolis force. They substituted a vibrating, oscillating tube for a rotating tube, which worked better than the earlier designs, and is still used today in many different forms.
Micro Motion debuted its first Coriolis flowmeter in 1977—an “A” meter for laboratory use. It was followed by the “B” meter in 1978. In 1981, Micro Motion introduced the first single-bent tube Coriolis meter, the “C” meter. Then, in 1983, the company added the first dual-bent tube Coriolis meter, the “D” meter, which had a 2-inch diameter. In 1984, Emerson acquired Micro Motion.
Heinrichs Messtechnik GmbH states on its website that it was the first European company to offer Coriolis flowmeters. Heinrichs introduced its first Coriolis meter in 1986, which was the same year that Rheonik debuted its Omega-shaped Coriolis meter. It was also the same year that Endress+Hauser released its first Coriolis meter—the m-Point. It was a dual-tube, straight-tube meter that evolved into the Proline Promass F. Krohne Group followed with a single-tube, straight-tube meter in 1994, following an earlier design from Schlumberger, now SLB, that was withdrawn from the market. One of the moving forces behind Krohne’s meter was Dr. Yousif Hussain, who holds several patents for Coriolis meters.
In 1984, Rheonik’s founder, Karl Küppers, began developing what became the company’s technology base and patent portfolio. The design that began the firm’s success was the patented, Omega-shaped Coriolis flowmeter. Rheonik was founded in 1986, the same year it introduced its Coriolis meter. Continued growth caused the firm to move operations in 1993 to its current facility in Odelzhausen, Germany. In February 2008, GE Sensing and Inspection Technologies acquired Rheonik, but in 2015, Rheonik purchased the assets and business back from the GE unit.
Micro Motion introduced its Elite series of Coriolis meters in 1992. This line was designed for flow and density measurements of liquids,
JESSE YODER Founder & President Flow Research Inc. jesse@flowresearch.com
“While there was intense development of this technology from 1977 to 2010, development didn’t stop there.”
gases and multiphase flow. Micro Motion still carries its Elite series meters. In 1995, the company brought out its F-Series line, which has a robust design, and is compact and drainable.
During this same period, in 1993, the company Rota in Germany introduced the Rotamass, developed in Wehr, Germany. This dual-tube flowmeter featured a heavy wall designed to minimize the effects of vibration or pipeline stress, and provide increased reliability and output stability. In 1995, Rota became a subsidiary of Yokogawa Europe.
Straight-tube, radial-mode gas flowmeter
Direct Measurement Corp. (DMC) was founded in 1991 by seven ex-employees of Micro Motion. The company’s focus was Coriolis flowmeters for the oil and gas industry. In 1996, DMC was acquired by FMC Technologies, now TechnipFMC. DMC’s main product was the straight-tube, radial-mode Coriolis meter for measuring gas flow.
By 2000, FMC had three different Coriolis meters: S-Mass, Apollo A400, and the radial-mode flowmeter. Both the S-Mass and the Apollo A400 were Smith Meter brands designed for liquid applications. The S-Mass meter derived its name from the S-shaped design of its flow tube. Available models included the S25, S50, S100, S200 and S300. The S-Mass was designed for applications including custody transfer, blending, leak detection, batch control, online density measurement and petroleum production.
Apollo A400 was designed for use in petroleum applications. It measured mass flow at rates up to 20,000 lbs/min. A400 was available with the HART communication protocol. Its applications included loading rack terminals or bulk deliveries, transportation of crude oil and refined products, and LACT systems.
FMC had two models of the radial-mode flowmeter for measuring gas flow: R200 and R400. R200 was a 2-inch meter. R400 was a larger meter with greater throughput. While FMC was still selling the meter in 2003, it was discontinued a few years later.
Since the challenges of measuring gas flow with a straight-tube meter are significant, it’s worth asking what a straight-tube meter actually gains when measuring gas. One benefit of a straight-tube design is that liquid doesn’t build up at the curves, but gas doesn’t collect at curves anyway. Likewise, bent-tube meters cause pressure drop, but gas produces minimal pressure drop, even in bent-tube meters. So, the benefits of using a straight-tube meter to measure gas flow are limited, although straight-tube designs may be better suited to hygienic and sanitary applications than bent-tube meters.
Why the dual-tube Coriolis flowmeter was invented
Based on research into the patents underlying the flowmeters introduced in the early 2000s, there are patents by Don Cage assigned to Micro Motion, Direct Measurement and FMC Technologies in the period from 1995 to 2004. He invented the radial, straight-tube Coriolis meter from Direct Measurement, which was purchased by FMC Technologies. He continued to be active in Coriolis design for other companies after this period. Cage was truly influential in the design and development of many Coriolis flowmeters over the past 30 years.
One of the most interesting quotes from one of Cage’s 1995 patents explains the invention of the dual-tube meter. The patent is called “Coriolis mass rate flowmeter:”
“It's well known that a vibrating flow conduit carrying mass flow causes Coriolis forces that deflect the flow conduit away from its normal vibration path proportionally related to mass flow rate."
This effect was first made commercially successful by Micro Motion of Boulder, Colo. Early designs employed a single, vibrating, U-shaped flow conduit cantilever-mounted from a base. With nothing to counterbalance the vibration of the flow conduit, the design was sensitive to mounting conditions, so it was redesigned to employ another mounted, vibrating arrangement that functioned as a counterbalance for the flow conduit. Problems occurred, however, since changes in the specific gravity of the process fluid were not matched by changes on the counterbalance, creating an unbalanced condition that could cause errors. Significant improvement was later made by replacing the counterbalance arrangement with a second U-shaped flow conduit identical to the first, and splitting the flow into parallel paths flowing through both conduits simultaneously. This parallel-path Coriolis mass-flow-rate meter solved the earlier balance problem, and became the premier method of mass flow measurement in industry today.
Today, more than 40 companies manufacture Coriolis flowmeters. While there was intense development of this technology from 1977 to 2010, development didn’t stop there. The story of the last 15 years is even more complex and fascinating than the story from 1975 to 2010.
A Coriolis flowmeter in use
Accessories elevate oxygen analyzer
Rosemount CX2100 in situ oxygen analyzer customizes to meet customers' unique requirements
EVERYONE can use some help from their friends. Even zirconia oxide-based oxygen analyzers can get crucial support from accessories that greatly improve their performance.
These analyzers typically measure a millivolt differential across a cell or disk that correlates to the level of excess oxygen in the flue gas. These oxygen measurements can be used to optimize the efficiency of combustion processes. However, these longtime, standardized analyzers are historically difficult to install, commission and maintain, especially in hot, high-particulate and/or high-sulfur settings.
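The millivolt signal follows the Nernst equation for a heated zirconia cell with reference air on one side and flue gas on the other:

$$ E = \frac{RT}{4F}\,\ln\!\left(\frac{p_{O_2,\mathrm{ref}}}{p_{O_2,\mathrm{flue}}}\right) $$

where $R$ is the gas constant, $T$ the absolute cell temperature, and $F$ the Faraday constant; the factor of four reflects the four electrons transferred per O2 molecule. The lower the excess oxygen in the flue gas, the higher the cell voltage, rising logarithmically.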
To shoulder some of the burden on oxygen analyzers and their users, Emerson has launched its Rosemount CX2100 in situ oxygen analyzer, which provides a new quick-connect feature for faster setup and service; calibration-check and autocalibration features; guided and remote setup via a host such as a control or asset management system; commissioning in seven languages; an interactive local operations interface (LOI); and robust components for extended maintenance life. Its accessories include an oxygen cell that can serve five to 10 years on average in standard environments. There’s also an optional high-sulfur oxygen cell for added protection in high-sulfur or corrosive conditions. These Rosemount oxygen cells feature a robust design containing platinum beads that catalyze sulfur and lengthen cell life.
“CX2100 combines traditional zirconia-analyzer technology with enhanced features, such as their quick-connect design to streamline maintenance and reduce process downtime, and remote configuration options to keep personnel off stacks and out of dangerous locations,” says Peyton Munoz, global product manager for analytical instruments at Emerson. “The new quick connect probe body allows easier setup and maintenance with a plug-and-play design compared to traditional screw-down
leads, which take minutes to connect. This allows faster replacement or repairs with less rewiring.”
CX2100’s new LOI features capacitive-touch buttons and other customization options, including a software-based calibration-check process that helps monitor emissions for compliance with local regulations. It’s housed in a protective metal enclosure to work in high-temperature settings, and simplifies the process of measuring calibration gas at two points to check for measurement drift.
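A two-point calibration check of the sort described boils down to comparing measured readings against certified gas concentrations and flagging drift beyond tolerance. Here's a generic sketch; the gas values and tolerance are invented and don't represent Emerson's algorithm.

```python
# Generic two-point calibration-drift check; tolerances and readings are
# invented and do not represent Emerson's autocalibration logic.
CAL_GASES = {"low": 0.4, "high": 8.0}        # certified %O2 test gases
TOLERANCE = 0.1                              # allowable drift, %O2

def needs_recalibration(readings):
    """readings: measured %O2 at each cal point, e.g. {'low': 0.47, ...}"""
    return any(abs(readings[p] - CAL_GASES[p]) > TOLERANCE for p in CAL_GASES)

print(needs_recalibration({"low": 0.47, "high": 8.05}))   # False, in spec
print(needs_recalibration({"low": 0.62, "high": 8.30}))   # True, recalibrate
```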
Munoz reports CX2100 also features a wide range of accessories that enable it to be customized to meet the unique needs of customers’ installations, including:
• Abrasive shield that further protects the probe against high-particulate or high-sulfur settings.
• Bypass assembly that extends the probe and lets flue gases cool before reaching it, allowing its use in high-temperature applications.
• Field-replaceable snubber, ceramic or Alloy C-276 diffusers that protect the end of the probe from degradation in a range of high temperatures.
• Probe mounting jackets made of a variety of insulating materials that protect CX2100 from very high temperatures or other harsh conditions.
“We worked closely with our users to tailor all of CX2100’s different accessories to precisely meet their individual requirements,” explains Munoz. “One example is the autocalibration and autocalibration-check features that regularly verify CX2100’s measurement drift. This means technicians don’t have to physically visit it as often, which frees them to work on higher-value tasks, and enhances safety by making sure the analyzer stays within its calibration limits. If the autocalibration check finds the analyzer isn’t within its recommended drift range, then CX2100 will run a recalibration process.
“The analyzer further improves safety with a flame-safety interlock that automatically powers down the probe’s heater when a flameout is detected, allowing for an additional layer of protection.”
For more information, visit www.emerson.com/RosemountCX2100
Rosemount CX2100 in situ oxygen analyzer
Source: Emerson
CHRIS LITTLE
Media Relations Director, Trihedral Engineering
Is your SCADA history ready to go to work?
SCADA applications are responsible for far more than facilitating real-time process monitoring and alarm management. The process history they compile over time is critical to providing the data-driven insights that industry relies on when optimizing their systems to control costs, maximize uptime and increase the life of infrastructure. Modern SCADA systems must ensure data is safe, relevant and easily shareable with a company’s own team or third-party reporting solutions, business systems and artificial intelligence (AI) platforms. Control talked to Chris Little, media relations director, Trihedral Engineering, about straightforward principles to ensure that your SCADA data is ready to go to work.
Q: What should people ask when they're setting up or updating their historian?
A: The first thing to ask is whether your data is relevant. How do you ensure you're recording only what's necessary and not filling your database with noise? One simple way to do this is to use deadbands. For example, if you're measuring the level of a lake, the tiny changes caused by wind blowing across the surface aren't helpful. Your SCADA software should allow you to set a deadband value so that only changes greater than this threshold get logged. Our VTScada software allows you to set deadbands at both the tag level and the driver level, which also helps reduce network traffic. Also, look at the polling rate for your system. How often do process values get logged for each I/O point? Often, this rate is set high by default, or because the developer assumed more data was better. Ask yourself if you really need to poll a particular sensor once per second. Odds are you don't. VTScada provides a rapid polling mode just for situations where you need to troubleshoot a problem.
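The deadband idea is simple enough to show in a few lines. This generic sketch (not VTScada code; the lake example and 5 cm deadband are illustrative) logs a new sample only when it moves more than the deadband from the last logged value:

```python
# Generic deadband logger: record a value only when it moves more than
# the deadband from the last logged value. Not VTScada code.
import time

class DeadbandLogger:
    def __init__(self, deadband):
        self.deadband = deadband
        self.last_logged = None
        self.history = []                    # (timestamp, value) pairs

    def sample(self, value):
        if self.last_logged is None or abs(value - self.last_logged) > self.deadband:
            self.history.append((time.time(), value))
            self.last_logged = value

lake = DeadbandLogger(deadband=0.05)          # ignore wind ripple < 5 cm
for level in [4.02, 4.03, 4.01, 4.12, 4.13, 4.25]:
    lake.sample(level)
print(len(lake.history), "of 6 samples logged")   # 3: 4.02, 4.12, 4.25
```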
Q: That makes sense, but what else do we need to know?
A: Make sure your history is complete. Obviously, to capture all your process data, your SCADA system must be running. For that, you need at least one redundant server. For many small- to medium-sized systems, people are reluctant to move to a multi-server environment because of the perceived cost and complexity. Often, this is because they don't consider the cost of a system failure. In addition to losing access to real-time monitoring and critical alarms, you lose a chunk of your history that you can't get back. Comparatively, the cost of another server and software license is negligible.
VTScada makes it easier to configure robust multi-server failover for any number of servers without writing a single line of code. We also have discounted multi-server bundles to help smaller systems benefit from Enterprise redundancy.
If you are replacing your SCADA system, another way to ensure your history is complete is to export historical data to a CSV file and import it into your new application. That way, VTScada gives operators, third-party analytics and business systems an uninterrupted view.
Q: Now that the data is there and relevant, how do you keep it safe?
A: The answer is to ensure your history is backed up on multiple servers in real time. Again, this requires redundant servers with automatic synchronization. Bear in mind that if your hot backup servers are on the same desk in the same office, you have a problem. A flood, fire or loss of power at that location instantly negates the benefit of redundancy. Make sure you host at least one backup server at a different location. For critical systems,
consider having two since an event serious enough to take down one location can easily take down another.
Similarly, if your backup servers are virtual machine instances running on the same physical computer, you again have a single point of failure. Ask your integrator how data gets synchronized between servers. Many SCADA products require a separate third-party historian with its own failover and synchronization methodology. Most support only one level of historian redundancy, and that historian must be in the same physical location as the server, again a single point of failure.
VTScada is built around its own enterprise historian, which means every backup server in your application can bidirectionally synchronize with every other one. Only VTScada can do this. The primary server polls the I/O, and the backups sync with it. If the primary goes offline, the secondary takes over, and so on (VTScada supports any number of backups). The key is that when the primary returns, it automatically backfills the data it missed. If the whole network goes offline, each isolated server can log its own data, then bidirectionally sync with the others when the connection returns. This process is optimized to ensure the free flow of real-time data. That's the best way to keep data safe: make sure it's backed up on multiple servers.
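The backfill behavior described above can be pictured as a timestamp-keyed merge: each server keeps logging locally while isolated, then takes whatever the other has that it's missing once the link returns. A toy illustration (not Trihedral's implementation; the timestamps and values are invented):

```python
# Toy bidirectional backfill: merge two servers' histories by timestamp
# after an outage. Not Trihedral's implementation.
def backfill(history_a, history_b):
    """Each history maps timestamp -> value; return the merged record."""
    merged = dict(history_a)
    merged.update(history_b)                  # fill gaps from the other server
    return dict(sorted(merged.items()))

primary   = {0: 10.1, 10: 10.3}               # went offline after t=10
secondary = {10: 10.3, 20: 10.6, 30: 10.9}    # kept logging during outage
print(backfill(primary, secondary))           # contiguous history restored
```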
Q: How do you share the data safely?
A: Whether you're using standard interfaces like ODBC, REST, and OPC or newer publish/subscribe protocols like MQTT or Sparkplug B, moving data
VTScada is built around its own enterprise historian, so every backup server in an application can be bidirectionally synchronized. Source: Trihedral Engineering/AI
out of your secure network is serious business. Fortunately, there are some simple ways to do this without compromising your firewall. One is to host a read-only SCADA server in your demilitarized zone (DMZ)—a part of your network that acts as a buffer between your internal, private network and the open Internet. It allows external access to specific services while isolating the rest of the network. For example, VTScada servers have a read-only mode that prevents any control actions on that computer. When installed in the DMZ, this server can sync with the rest of the application servers and make that data available to third parties. If that server is compromised, it has no ability to control the application or corrupt its data.
Q: How can people get started?
A: If you are adopting a new SCADA system, ask your integrator or consultant: how many levels of server failover are there to ensure the system is always collecting data? How does synchronization across redundant servers take place? How is the whole thing backed up? Also, if you want to play with redundancy yourself, you can go to our website (VTScada.com/start) and download VTScadaLIGHT, which is used by thousands of people around the world, in industrial settings and in people's basements and garages. This free, perpetual license never expires and can handle up to 50 I/O. It has all our communications drivers, trends, reporting, alarm management and page-drawing tools. Install it on up to 10 computers and see how easy it is to set up redundancy and synchronization without any custom code. Use it to monitor the solar panel on your house, your beehives, or your beer-making setup. Manufacturers and utilities can use VTScadaLIGHT for pilot programs and software comparisons. There are even quick-start videos to get you up and running right away. I encourage people to do that now.
Emerson, AspenTech unify and simplify
Project Beyond combines a data fabric, AI orchestration and cybersecurity at Emerson Exchange 2025
TO help end-users see around corners and navigate today’s accelerating challenges, Emerson (www.emerson.com) introduced its Project Beyond software-defined enterprise operations platform to more than 2,800 visitors on May 20 at its Emerson Exchange 2025 conference in San Antonio. The new platform is designed to let its industrial customers add value to their operations by simplifying process automation, control and data management tasks.
Project Beyond is crucial because Lal Karsanbhai, president and CEO of Emerson, cautioned the process industries are facing the fastest transformation they’ve ever experienced, and everyone must adapt, optimize and innovate to continue to compete. “Each person in this room has adjusted to new ways of working,” said Karsanbhai. “Everyone has already been tested, but this is not the finish line. This moment demands the ability to see what’s ahead, and Emerson is in the business of shaping what’s next."
Claudio Fayad, CTO of Emerson’s Aspen Technology business, added, “Our innovations include working with DeltaV to enable sensors and instruments and take advantage of AI. We’re excited about these innovations because they’re born from our community, and they’re making the future of automation autonomous, simpler, safer and sustainable.”
Fayad reported that Emerson is introducing Project Beyond as its commitment to building an enterprise operations platform that’s software-defined and ready to enable operations technology. Project Beyond is designed to let existing solutions surpass what they can do presently, bridge their present and future capabilities, protect users’ investments, combine existing systems, and deliver actionable insights. It accomplishes these goals by combining and delivering value along six primary dimensions, including:
• Providing scalable computing power at the edge and in the cloud for greater scalability and flexibility, which enables processes to be managed quickly, securely and safely.
• Secure networking that combines field devices, historians, optimization software and other OT tools, and seamlessly ties them to enterprise levels and IT architectures.
• Unified data operations that deliver a single source of truth for contextualized data and knowledge about users’ unique operations.
• Catalog of apps curated for the needs of users, which give them agility by combining available applications in one platform.
• AI orchestration that lets users access a range of industrial AI tools, which can assist in fulfilling the requirements of their process operations—reliably, safely and sustainably.
Camilo Fadul, DeltaV market director at Emerson, shows how Project Beyond’s enterprise operations platform uses two servers to run DeltaV SDN and DeltaV IQ controllers. Source: Emerson
• Zero-trust security: a new cybersecurity approach that goes beyond the usual perimeter protections and provides security for every network and connection in the Project Beyond platform.
Solutions sparkle
One of many highlights at the three-day event was its 90 technical booths that exhibited hundreds—if not thousands—of products, software and services divided among the major industries and seven product segments:
• To help satisfy swelling, worldwide demand for electricity, the Ovation 4.0 automation platform was set to be released on May 30, along with the OMC 100 Grid Edge controller with high-speed I/O and cybersecurity software.
• Emerson showed how DeltaV Life Sciences software enables drug development from ‘lab to life,’ while users also get an assist from DeltaV Process Knowledge Management (PKM) and DeltaV Manufacturing Execution System (MES) software.
• Following its acquisition by Emerson a year and a half ago, Flexim exhibited Fluxus F731 non-intrusive ultrasonic liquid flowmeter that can switch from transit-time to Doppler measurement; WaveInjector for Fluxus F731 and
transmitter that can withstand and serve in temperatures from -320 °F to 1,300 °F; and Fluxus F601 portable non-intrusive liquid flowmeter.
• Scheduled for release in October, Synchros IIoT fit-for-purpose sensors integrate seamlessly with control networks and enable more intelligent operations. They’ll include temperature sensors with WirelessHART protocol or long-range, wide-area network (LoRaWAN) communication modules.
• Further expanding on its DeltaV DCS, Emerson evolved its control system for IT-ready service, including software-defined networking (SDN) and control on a server that it plans to launch shortly. It includes an integrated, virtualized environment based on DeltaV Virtual Studio control software.
• Emerson is adding Bluetooth links and readily understandable alerts to its FieldVue DVC Series digital valve controllers starting in October.
• Emerson exhibited the Anderson Greenwood 400 Series pilot-operated valves that modulate, only releasing as much material as needed, rather than simply snapping open.
• The industrial AI exhibit featured Guardian AI enterprisesupport software and a new virtual advisor that users can log into to view asset performance. The exhibit also featured on-screen avatars, who could be asked questions.
• At its Project Beyond booth, Emerson demonstrated how its enterprise operations platform integrates with its multiprotocol, electronic-marshaling functions, and also works with its DeltaV IQ software-defined controllers that will be launched in July.
“In the 1990s, a typical processing unit might need 100 cabinets. In 2011, electronic marshaling reduced that number to 15 or 20. Now, everything can be run redundantly on two pairs of servers. The first pair runs the workstations and plant, and the second runs DeltaV SDN and the DeltaV IQ controllers,” said Camilo Fadul, DeltaV solutions market director at Emerson. “This demonstration completely integrates field devices, such as our Fisher valves and positioners, networks them via HART protocol to DeltaV DCS, and helps configure them for data exchange and control.”
For more coverage of Emerson Exchange 2025, visit www.controlglobal.com/show-coverage/emerson-exchange
Chart and Flowserve merge to differentiate
Chart Industries Inc. (www.chartindustries.com) and Flowserve Corp. (www.flowserve.com) agreed June 4 to combine in an all-stock merger, creating a differentiated leader in industrial process technologies. Pending shareholder and regulatory approvals, the deal is expected to close in 4Q25.
The combined company is expected to be worth approximately $19 billion based on the exchange ratio and June 3 closing share prices.
The combination brings together Chart’s expertise in compression, thermal, cryogenic and specialty solutions and Flowserve’s capabilities in flow management. Their merger will enable further opportunities to differentiate solutions, and provide a digital overlay, including monitoring and predictive capabilities. (A joint website dedicated to the merger is at www.ChartFlowserve.com.)
“Combining Chart and Flowserve creates a comprehensive solutions platform, with the financial strength and resilience to continue driving growth and long-term value,” says Jill Evanko, president and CEO of Chart. “Together we’ll provide a complete system of capabilities from front-end engineering design to mission-critical equipment through aftermarket and servicing, delivering high-quality, value-added solutions to an expanded, global customer base."
With an installed base of more than 5.5 million assets in more than 50 countries, the combined company will address the full customer lifecycle from process design through aftermarket support. It generated net revenue of approximately $8.8 billion on a combined LTM basis by the end of 1Q25.
After closing, the combined company will be headquartered in Dallas, Tex., and expects to maintain a presence in Atlanta and Houston, supported by a global footprint across more than 50 countries. The combined company will also assume a new name and brand following the closing.
“This merger will create a differentiated leader with the scale and resilience to meet significant demand for comprehensive industrial process technologies and services,” adds Scott Rowe, president and CEO of Flowserve. “Chart and Flowserve’s complementary businesses will strengthen our ability to meet customers’ needs, empower innovation and drive long-term, sustainable growth.”
95% of manufacturers investing in AI: Rockwell
Rockwell Automation Inc. (www.rockwellautomation.com) released June 3 results of the global study that makes up its 10th annual “State of Smart Manufacturing Report.” Conducted in March 2025, this year’s study surveyed more than 1,500 manufacturers in 17 leading manufacturing countries.
As manufacturers face uncertainty driven by economic shifts, the report highlights how companies are turning to smart manufacturing to manage risks, improve performance, and support their workforces. It also examines adoption of emerging technology, including artificial intelligence (AI), machine learning (ML) and cloud-based systems.
The study’s findings include:
• 81% of manufacturers say external and internal pressures are accelerating digital transformation, with cloud/SaaS, AI, cybersecurity and quality management ranking as the top areas of smart manufacturing investments.
• 95% of manufacturers have invested in, or plan to invest in AI/ML over the next five years.
• Organizations investing in generative and causal AI increased 12% year-over-year.
• Cybersecurity ranks as the second biggest external risk, with 49% of manufacturers planning to use AI for cybersecurity in 2025—up from 40% in 2024.
• 48% of manufacturers plan to repurpose or hire additional workers due to smart manufacturing investments. Additionally, 41% are using AI and automation to help close the skills gap and address labor shortages.
• Quality control remains the top AI use case for the second year in a row, with 50% planning to apply AI/ML to support product quality in 2025.
The report's full findings are at www.rockwellautomation.com/en-us/capabilities/digital-transformation/state-of-smartmanufacturing2.html
ABB motor sets new world record for energy efficiency
ABB (go.abb/motion) reported May 28 that it’s broken its own world record for energy efficiency in large, synchronous, electric motors with a new motor that's achieved a 99.13% efficiency rating during testing. The company reports this is a substantial improvement over ABB’s previous world record of 99.05% set in 2017.
Destined for a steel plant in India, the motor will drive an air separation unit (ASU) that will liquify atmospheric air, so that oxygen and nitrogen can be separated out to provide pure gases for the steelmaking process.
Opting for a Top Industrial Efficiency (TIE)-optimized motor, rather than the standard design with a 98.64% efficiency level, will let the steel plant save about 61 GWh of energy and $5.9 million in electricity costs over 25 years. This is equivalent to four days of peak output from the world’s largest offshore wind farm. It will also prevent 45,000 tons of CO2 emissions, comparable to removing 10,000 cars from the road for a year. The scope for savings and avoided emissions is even greater where electricity is more expensive.
With up to seventeen I/O channels, built-in voting and enhanced math/logic capabilities typically found in costly and complex safety PLCs, the SLA can handle everything from simple alarming to more complex logic schemes including 1oo2, 2oo3 or even 5oo8 voting architectures. Call 1-800-999-2900 or visit www.miinet.com/sla-control for details.
SIGNALS AND INDICATORS
• E Tech Group (etechgroup.com) announced May 28 that it’s acquired JSat Automation (www.jsatautomation.com), a Pennsylvania-based system integrator specializing in automation, IT/OT convergence and compliance. Terms were not disclosed. JSat Automation will operate under the name “JSat an E Tech Group company.” JSat is the third acquisition for E Tech Group since 2023, after the purchases of E-Volve Systems and Automation Group.
• The Control System Integrators Association’s (CSIA, www.controlsys.org) board of directors has appointed Adrienne Meyer as its new CEO effective on June 30. Meyer succeeds Jose Rivera, who is stepping down after a decade of leadership. Meyer has worked for ODVA Inc. (www.odva.org) for more than 21 years, most recently as VP of operations and membership at its headquarters in Ann Arbor, Mich.
• Tetra Tech Inc. (tetratech.com), a provider of consulting and engineering services in water, environment and sustainable infrastructure, reported May 1 that it’s agreed to acquire SAGE Group Holdings Ltd. (gotoSAGE.com), an automation solutions provider for municipal water and industrial manufacturing automation, smart infrastructure, and systems integration. The acquisition will provide advanced electrical and instrumentation design, engineered control systems, cybersecurity and cloud integration.
• Trihedral (www.vtscada.com/application-security) reports that it’s developed a security manual for its VTScada software to help users protect their systems against evolving cyber-threats. It provides guidance necessary to install, commission, verify and maintain the cybersecurity-certified capability of VTScada in accordance with applicable IEC 62443 cybersecurity standards, and guidance about other referenced documents. The manual is part of every license including the free industrial version of Trihedral’s software.
• Fortifi Food Processing Solutions (www.FortifiFoodSolutions.com), The Woodlands, Tex., announced June 2 that it’s completed its purchase of Area 52 (area52.ca) of Moncton, New Brunswick, Canada, which produces automated crustacean-processing equipment, and concentrates on patented solutions for lobster and crab production.
IIoT tries new roles
The Industrial Internet of Things is using whatever network gets its data where it wants to go on time
by Jim Montague
SHOWING up in the right place at the right time is more important than how we get there. This is just as true for moving data as it is for keeping human appointments. Consequently, as if the Internet wasn’t already broad and inclusive enough, many reliable sources report it now includes all types of networking. They claim the Industrial Internet of Things (IIoT) is no longer limited to its usual transmission control protocol/Internet protocol (TCP/IP), hypertext transfer protocol (HTTP) or other traditional methods. Apparently, other Ethernet protocols, fieldbuses, wireless, and probably even serial communications are all fair game. Just apply IIoT on whatever network can run where it’s needed, maybe don’t forget about the sensor, instrument and device level, and remember to apply sufficient cybersecurity, too.
“We think of IIoT as using approved devices to get real-time data, so we can perform predictive analytics that will enable maintenance, optimize our processes, and reduce downtime,” says Chad Paxson, process control analyst at the Cobb County-Marietta Water Authority (ccmwa.org). “We also use IIoT to enable edge computing and planned artificial intelligence (AI)-aided analyses.”
The utility is the state’s largest producer of drinking water. Its two main plants have a capacity of 173 million gallons per day (mgd), though it usually only needs to produce half of that to meet average daily demand of 79.8 mgd from its nine wholesale customers, including five counties that buy its water. The utility runs 10,000–15,000 data tags at each plant, and operates about 30 primary PLCs, which will be reduced as it implements hot standby, control and remote PLC zones in cyber-secure network segments.
To gain more benefits from IIoT, Paxson reports the water authority began working with Texas-based Specific Energy (specificenergy.com) about a year ago, and implemented its software and edge hardware,
which can relay data to a cloud-computing service. They also deployed network segmentation and firewalls for greater cybersecurity.
“We send flow data and pump curves, and Specific Energy’s solution lets us select pumps, performance levels, times and energy use,” explains Paxson. “Its software and edge device tie to our flowmeters and SCADA information, and link with Georgia Power’s rate calculator. We also switched our HMIs to Inductive Automation’s Ignition web-based SCADA software. These tools let us monitor pump performance, wear and tear, determine if running two pumps is better than running three pumps, and seek the most cost-effective combinations and schedules.”
Consistency with standards
Though they’re playing catch-up, too, standards and common practices can help keep IIoT grounded in reality and practicality.
“Many standards measure success by level of adoption, but they must be worth more in practice than the paper they’re printed on,” says Tom Burke, global strategic advisor for the CC-Link Partner Association (www.cc-link.org). “One method enabling IIoT lately is Unified Name Space (UNS), which has become a popular way to get data from devices and networks. It uses a common naming strategy that establishes a single source of truth across applications. This lets users easily discover their devices by making them all the same, which lets them quickly access what they need and understand it.”
UNS is defined as a software-based framework that organizes, centralizes and standardizes information from multiple sources in an organization. Burke reports there are many ways to implement UNS, including roping in the OPC Unified Architecture (UA) networking strategy and the Message Queuing Telemetry Transport (MQTT) publish-subscribe messaging protocol, which already brokers communications between IIoT applications and devices.
“A centralized MQTT broker enables preprocessing of content before it reaches UNS,” explains Burke. “Once there, this preprocessing allows users to understand formerly raw data, and use it in real time, which can help simplify their entire data strategy.”
Figure 1: A spillway at the Hickory Log Creek reservoir dam in Canton, Ga., is one of the many facilities operated by the Cobb County-Marietta Water Authority, which runs 10,000-15,000 data tags and 30 PLCs at each of two plants, and can produce as much as 173 mgd of drinking water per day. The utility relays flowmeter data, pump curves and local grid calculations to Specific Energy’s edge device and software, which recommends energy-saving combinations of pump performance and scheduling.
Burke adds the primary characteristics of UNS include:
• Standardized data model that harmonizes information from many sources into a common structure.
• Presentation layer, such as MQTT's data hub, where users can publish data and subscribe to it.
• Allowing multiple networking protocols on the same wire, such as time-sensitive networking (TSN), CC-Link, Profibus/Profinet, EtherNet/IP, OPC UA, EtherCAT and others.
• Simplification that lets users access and secure data from anywhere in a network hierarchy.
“TSN enables communications, such as giving historical data to the presentation layer, and letting communications go back to help make comparisons. However, its rules also preserve the determinism of TCP/IP operations traffic by preventing interference by non-real-time communications,” adds Burke. “This lets users process data and perform tasks without affecting control functions.”
Burke adds that UNS’s common naming and standard formatting can simplify communications and data access so much that they’ll likely enable device-level interoperability. “UNS is like a universal translator. It lets anyone publish whatever by going to a broker like MQTT, and letting anyone else understand that data,” says Burke. “For example, UNS can take Open Process Automation Standard (O-PAS) data models, such as those for pharmaceutical processes, and publish them for consumption.”
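To make the UNS idea concrete, here’s a minimal, hedged sketch of publishing one device reading into a UNS-style topic tree over MQTT. The broker address, topic hierarchy and payload fields are invented for illustration, and it assumes the paho-mqtt 1.x client API:

```python
# UNS-style publish over MQTT (assumes the paho-mqtt 1.x Client API).
# Topic path and payload schema are illustrative, not a standard.
import json, time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"                 # hypothetical central broker
TOPIC = "acme/plant1/filtration/pump3/flow"   # site/area/line/device path

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()   # background thread so the QoS 1 publish completes

payload = {
    "value": 412.7,           # preprocessed engineering value
    "units": "gal/min",
    "timestamp": time.time(),
    "quality": "good",
}
client.publish(TOPIC, json.dumps(payload), qos=1, retain=True)

# Any consumer can then subscribe by name, with no knowledge of the
# device's native protocol:
client.subscribe("acme/plant1/filtration/+/flow")
```

Because the topic path itself carries the naming convention, a subscriber can discover pump data across the plant without knowing which network delivered it.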
Rounds in the jungle
Similarly, manually collecting and entering data into Excel is hard enough. Doing it twice a day—in the Amazon jungle—is even less of a picnic. Smart Energy Applications had been dealing
with these difficulties ever since it started implementing its Gas to Grid in a Box (G2G_B) generators and support system units for gas-to-grid (G2G) operations at remote oil-and-gas facilities in the Amazon. The units make electricity using the untreated gas that accompanies crude oil extraction. G2G_B consists of a generation unit from Waukesha, control and synchronism from Woodward Easygen, and load shedding from Multilin.
However, because they’re in remote, hard-to-reach locations with no permanent staff or continuous monitoring, Smart Energy’s generators required daily or twice daily, in-person visits by operators, who manually recorded their information in an Excel spreadsheet at the end of each shift. This input was analyzed the next day at each client’s headquarters, and used to build weekly and monthly Excel reports that combined process indicators and financial data.
To automate and alleviate this time-consuming and costly process, Automation Solutions Ecuador (asecuador.com) developed a hybrid architecture using Inductive Automation’s Ignition Edge IIoT and Ignition Cloud Edition in Microsoft Azure. These software packages were added to Opto 22’s groov RIO in Smart Energy’s RTU box. These devices are networked via Modbus TCP, and connected with the Ignition driver, while the Modbus RTU devices are connected via a Moxa adapter to use the RTU/TCP driver. Each G2G_B unit requires one Ignition Edge installation.
For historical storage, these gas-to-grid systems run an Azure database for MySQL and an Ignition Cloud Edition subscription using Azure. Ignition Edge publishes collected data using MQTT, and subscribes with Cirrus Link's Distributor and Engine modules for Ignition, with Starlink's satellite Internet for data transmission.
Consequently, Smart Energy automated data collection on its G2G_B units, which can now collect and process data without human intervention. Operators and managers can monitor the generators and related components in real time, including CO2 emission reductions, and generate alerts for interventions.
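For a flavor of the polling side of such an architecture, the sketch below reads a generator register over Modbus TCP using the open-source pymodbus library in place of Ignition’s driver. The IP address, register map and scaling are hypothetical placeholders, not Smart Energy’s actual configuration:

```python
# Poll a generator value over Modbus TCP (pymodbus 3.x assumed).
# Address, register numbers and scaling are placeholders.
from pymodbus.client import ModbusTcpClient

RTU_IP = "192.168.1.50"        # hypothetical RTU/groov RIO address
GEN_KW_REGISTER = 100          # hypothetical holding-register address

client = ModbusTcpClient(RTU_IP, port=502)
if client.connect():
    rr = client.read_holding_registers(GEN_KW_REGISTER, count=2, slave=1)
    if not rr.isError():
        raw = rr.registers                   # two 16-bit register values
        gen_kw = raw[0] + raw[1] / 100.0     # placeholder scaling
        print(f"Generator output: {gen_kw:.2f} kW")
    client.close()
```

In the deployed system, values like these are published onward via MQTT, as described above.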
Overcoming pushback
As IIoT ventures into new areas, it’s not surprising that it’s encountering some snags due to differences in networking protocols, data formats and transmission requirements. What is surprising is that many IT-level networks and their users still don’t know how to deal with their OT-level counterparts on plant floors or in the field.
“IIoT is all about aggregating data to make more intelligent decisions. However, IT’s perspective is often lacking because its users talk about collecting data from gateways and controllers as if they were the final source—and they’re not,” says Paul Sereiko, marketing and product strategy director at FieldComm Group (www.fieldcommgroup.org). “The real source is the sensors, actuators and other devices doing the actual work, and they still use different protocols, such as HART, Modbus, EtherNet/IP, Profinet and others. Each handles data definition in different ways. Because these protocols are still often used only in OT environments, OT host systems and associated users understand how to work with them. But IT systems generally don’t understand automation protocols, so at the presentation layer, information like ‘the temperature from sensor ABC is 30 °F’ is often impossible for an IT system to interpret. What’s needed is common information that bridges the gap between IT and OT.”
This is where FieldComm Group and its co-ownership of the Process Automation – Device Information Model (PA-DIM) standard can help. Developed by 10 member organizations, PA-DIM’s purpose is to provide
a protocol-agnostic way to present device information using OPC UA’s information-sharing model to reach IT-level systems, and deliver data reliably in a format that users can act on. It’s sometimes described as a UNS for process control and networks. Its members include FieldComm Group, ISA100 WCI, NAMUR, ODVA, OPC Foundation, Profibus/Profinet International, VDMA and ZVEI.
PA-DIM makes sure that valuable measurements can be interpreted by IT systems at plant or enterprise levels, regardless of which protocol their instrument initially used to communicate. It accomplishes this by integrating the IEC 61987 standard’s common data dictionaries (CDD), including the units its devices are likely to need. This is done by using the proper International Registration Data Identifiers (IRDI) for those units.
“Consequently, no matter how HART, Profinet or EtherNet/IP, etc. represent a measurement and its units, it must map to this IEC code. And, when an IT device sees that string of numbers, it knows it’s reporting a measurement and what it is,” adds Sereiko. “This is done without the IT device knowing the underlying, OT-level network protocol.”
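The sketch below illustrates only this normalization idea; it is not the actual PA-DIM information model, and the IRDI string is a made-up placeholder:

```python
# Illustration of protocol-agnostic normalization -- NOT the real PA-DIM
# schema. Each reading is wrapped in one neutral record whose unit field
# is an IEC 61987 CDD identifier (IRDI); the IRDI below is a placeholder.
def to_neutral_record(source_protocol: str, tag: str, value: float,
                      unit_irdi: str) -> dict:
    """Wrap a raw OT reading in a protocol-agnostic record."""
    return {
        "tag": tag,
        "value": value,
        "unitIRDI": unit_irdi,              # standardized unit identifier
        "sourceProtocol": source_protocol,  # informational only
    }

# A HART sensor and a Profinet motor controller end up in the same shape,
# so an IT-level consumer never parses either protocol:
readings = [
    to_neutral_record("HART", "sensor-ABC.temperature", 30.0,
                      "0112/2///61987#PLACEHOLDER"),
    to_neutral_record("Profinet", "motor-7.speed", 1480.0,
                      "0112/2///61987#PLACEHOLDER"),
]
for r in readings:
    print(r["tag"], r["value"], r["unitIRDI"])
```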
Though it hasn’t quite accomplished plug-and-play interoperability between devices, PA-DIM can help them communicate and coordinate their efforts. For example, an enterprise-level operational-efficiency analyst can combine data from a pressure device with HART and a motor controller with Profinet to determine if the motor controller is responding properly to pressure changes without any knowledge of the protocols used by the devices on the plant floor.
“IT components and systems typically don’t have the architecture to work with process automation,” concludes Sereiko. “PA-DIM also uses OPC UA to allow IT devices to receive data from the plant floor without
requiring them to understand process automation protocols. This could conceivably be done by providing information in a different way without PA-DIM, but it wouldn’t be consistent, and it would be extremely costly.”
Plant of fields
Likewise, process control professionals often talk about working in the field, but sometimes those fields have flowers and plants.
For instance, Costa Farms (costafarms.com) grows 1,500 varieties of houseplants, flowers, succulents and cacti on 5,200 acres at production sites in Florida, North Carolina, Virginia and the Dominican Republic. To grow its products, the company manages complex networks of greenhouses, shade houses and open fields. These include equipment with no direct interface for real-time data collection, which left the company without the visibility to find inefficiencies, such as:
• Transplanters that automate planting seedlings;
• Robotic systems that move pots and transfer plants;
• Conveyors that move plants through production stages;
• Irrigation systems that automatically water plants; and
• Climate control systems that regulate temperatures and humidity to ensure optimal growing conditions.
“It’s a mixed bag of machinery. Some use traditional PLCs for control, but without expensive software licenses and in-house platform-specific expertise, we couldn’t access production data without jumping through hoops,” says Karl Yeager, automation and technology manager at Costa Farms. “We realized that we couldn’t manage or optimize our operations effectively without accurate, real-time data.”
Figure 2: Experimenting with lighting and temperature at an indoor test garden, Costa Farms grows 1,500 varieties of houseplants on a total of 5,200 acres at production facilities in the southeastern U.S. To implement a standardized PLC and SCADA system that was easy to deploy and maintain without specialized expertise, the grower adopted Opto 22’s groov RIO Ethernet-based I/O modules, and also opted for the Node-RED programming tool. In this case, a Node-RED flow detects a USB drive in a groov RIO, and transfers applicable data without user intervention.
Costa Farms also required a SCADA system that could integrate easily with its Sage ERP platform, so it started experimenting with a customized, homegrown solution. “We needed a reliable, standardized PLC system that was easy to deploy, and could be maintained without specialized expertise,” explains Yeager. “We evaluated several PLCs, and though many were inexpensive, they had unreliable support, poor compliance and lacked durability, while others were costly, and required expensive software licenses and highly specialized training.”
Eventually, Costa Farms settled on Opto 22’s groov RIO independent, intelligent Ethernet-based, edge I/O modules, which are designed for IIoT and automation applications, and provide:
• Physical I/O, such as digital inputs that can connect to machines with closed controls, and analog inputs that can monitor flowrates, temperatures and humidity.
• Connectivity and communications via OPC UA that help move data to the grower’s Microsoft Azure cloud-computing service, which relies on MQTT and sometimes HTTPS, plus SQL writes to Azure’s Microsoft SQL cloud server.
• Software support and programming options, such as custom Linux programs and IEC 61131-3-compliant PLC development programs like CoDeSys.
To simplify programming, speed up deployment of small applications, and expand easily without extensive coding or expertise, Yeager reports that his team selected the Node-RED flow-based programming tool for IIoT applications, which runs natively on all groov RIO devices. Also, groov RIO’s edge computing functions let Costa Farms process data locally, reducing latency and contextualizing data before sending it to the cloud.
Consequently, Costa Farms presently has 30 groov RIOs running at multiple facilities, each customized to meet specific operational needs. They’re set up using dynamic host configuration protocol (DHCP) that automatically assigns IP
addresses. Once a groov RIO device comes online, one of Costa Farms’ homegrown Node-RED flow functions pulls the device’s DHCP network configuration, and writes it to a SQL database, alerting Costa’s IT department about the new installation (Figure 2).
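Costa Farms’ actual flow is built in Node-RED, but the logic is simple enough to sketch in Python. In this stand-in, sqlite3 substitutes for the SQL database and the device query is a placeholder:

```python
# Python stand-in for the Node-RED flow described above: when a new
# groov RIO appears, record its DHCP-assigned address so IT is alerted.
# sqlite3 substitutes for the SQL server; get_device_config() is a
# placeholder for however the flow reads the RIO's network settings.
import sqlite3, time

def get_device_config() -> dict:
    """Placeholder for querying the device's network configuration."""
    return {"hostname": "rio-greenhouse-07", "ip": "10.20.4.113"}

db = sqlite3.connect("device_registry.db")
db.execute("""CREATE TABLE IF NOT EXISTS devices
              (hostname TEXT PRIMARY KEY, ip TEXT, first_seen REAL)""")

cfg = get_device_config()
cur = db.execute("INSERT OR IGNORE INTO devices VALUES (?, ?, ?)",
                 (cfg["hostname"], cfg["ip"], time.time()))
db.commit()
if cur.rowcount:   # 1 only when this device wasn't already registered
    print(f"New device online: {cfg['hostname']} @ {cfg['ip']}")
```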
Since adopting groov RIO and Node-RED, Costa Farms has gained efficiencies, such as saving two man-hours per product changeover by grouping products with similar production settings. It also improved safety by reducing manual interventions during machine adjustments.
“One of the largest deciding factors in choosing groov RIO was that our maintenance man with hardly any PLC education can deploy a groov RIO in the field,” adds Yeager. “In addition, if workers don’t have to physically change equipment and parts as often, they reduce their chances of getting hurt. Plus, using Node-RED programming flows is much simpler. Copying and pasting code that I already wrote for less than $1,000 lets us deploy a new groov RIO in just a few minutes.”
Back in the water
Possibly using some savings from its electricity rate-shaving, Paxson reports that Cobb County-Marietta is also investigating how AI can help it achieve further efficiencies, such as improving leak detection.
“We already check for leaks by monitoring flow and pressure at remote sites throughout 190 miles of pipeline in the county, and we use some modeling software. However, it’s mainly a DIY solution that relies on manual entry of remote SCADA data into Excel to produce trends for the past two weeks,” adds Paxson. “This data also goes to Badger/Syrinix Radar’s website that helps monitor the 14-16 pressure transmitters, seven or eight flow transmitters, and a handful of analytical transmitters. Once flow and other measurements are taken at the pressure stations, they come back through the SCADA system and are written down. We think that automation and AI could help us model and identify useful trends faster and more effectively. We’ll probably also try some generative AI (genAI) later, but those results would be advisory only.”
Paxson reports that Cobb County-Marietta Water Authority’s IIoT-based, energy-saving projects and data-modeling efforts have been aided by the utility integrating its IT and OT departments into a single technical group about a year and a half ago. They were also greatly supported by upper management’s approach of bringing everyone into the same room for meetings that are still held monthly.
“IT often had no idea what OT’s job was and vice versa, so it was important to get communications open and get on the same page,” says Paxson. “We also filled one new position for an IT/OT guy, who serves as the bridge point between departments, replaces Ethernet switches, and maintains much of our network infrastructure.”
For other water/wastewater utilities and process users that want to use IIoT and perhaps AI to improve their data models and save energy, Paxson recommends assessing their operations, evaluating existing latencies, deciding how far they want to go, and digging into standards like ISA/IEC 62443 and ISA 100.
“You also have to see how clean your data is, so you can trust what you’re seeing, and make sure your trending is valid,” adds Paxson. “We also proactively and aggressively upgraded our PCs. This is because our critical mission is to provide safe, sustainable and reliable drinking water, so we also have to maintain certain levels of redundancy. Consequently, for us to implement IIoT and possibly AI, we also had to have human backups and controls, who could take recommendations, whether they’re coming from our automation, IIoT or AI, and make sure that what they’re seeing and saying makes sense about what needs adjusting or fixing. While automation technologies bring immense benefits, such as improving safety, efficiency and productivity across industries, these systems fundamentally depend on a diverse range of skilled humans to design, operate, maintain and continually improve them.”
ANDRE BABINEAU Director, Strategic Initiatives, Schneider Electric
STAN WOODY Senior Manager, Product and Business Development, Intel
From obsolete to autonomous with open, software-defined automation
MOST proprietary and closed automation systems are 20-30 years old and quickly becoming obsolete. However, by bridging information technology (IT) and operational technology (OT), open automation can help lead industry into a more secure future.
A collaboration between Schneider Electric and Intel focuses on the integration of Intel Policy Engine technology with Schneider’s EcoStruxure Automation Expert. The collaboration is an example of how industry can move from proprietary systems to open automation, and avoid hitting the brick walls of obsolescence and lost experience.
To learn more, Control spoke with Andre Babineau, director of strategic initiatives at Schneider Electric, and Stan Woody, senior manager of product and business development at Intel.
Q: How does this collaboration highlight open, software-defined automation benefits?
AB: We started this collaboration with Intel when we were exploring and defining the Open Process Automation Forum (OPAF). The collaboration is more on the software side, and we leverage Intel Policy Engine technology in EcoStruxure Automation Expert, which is our open software-defined automation solution.
SW: This collaboration between Intel and Schneider gave us an opportunity to demonstrate the value of software-defined automation by optimizing the solution from both a hardware and a software standpoint. It delivered workload consolidation, manageability and security, and was scalable for future use cases.
Q: Andre, how does EcoStruxure Automation Expert represent a new era of software automation based on shared runtime?
AB: It helps end users achieve digital continuity. One of the challenges most companies have is keeping their data alive. From the day they design a new plant or an addition to an existing plant, the construction data must be able to be carried through the lifecycle of that plant. EcoStruxure Automation Expert leverages IT technologies to create data continuity from the early design phase to commissioning, and through operation and lifecycle evolution. Data continuity along the lifecycle of the automation project creates higher efficiency and higher quality for the end user. When we start executing and controlling the process, all this continuity is a key element for the efficiency end users need.
Hardware independence is also key, and a great empowerment for the end user, because our solution doesn’t enforce a specific hardware choice. The end user is empowered to decide on architecture, distribution or centralization. If they want specific controllers from specific vendors, we can run them transparently, as long as they’re part of the Universal Automation organization and leverage the shared runtime.
Another unique point is application centricity, in comparison to a lot of other systems in the market today. We look holistically at what the control problem is and define it, instead of initially defining the architecture, which is a limiting factor.
Q: Stan, how does the ability to decouple software from hardware provide advantages for operators, such as hardware upgrades, protecting intellectual property and investments?
SW: Decoupling hardware and software has several advantages, including increased flexibility, improved stability, increased agility and increased efficiency of a system. It reduces the total cost of ownership. Overall, it enables easier upgrades, reuse of open-source libraries, quicker development and continuous innovation.
Q: How does this solution support an open process automation strategy and vision?
AB: As mentioned earlier, a lot of systems are 20-30 years old and facing obsolescence. Schneider Electric’s solution is open and software-based. It removes that roadblock and keeps your IP running. Also, Schneider Electric is a founding member of Universal Automation, a nonprofit organization that creates products for the market. The end user benefits from this openness and portability.
SW: Like Schneider, we understand the problems within the industry and are working hard to resolve them. Intel's been a longtime contributor to the open process automation strategy and vision. We continue to support open software-defined automation through OPAF, Margo and many industry efforts to provide truly interoperable automation to the industry.
Q: Does open, software-defined automation pave the way for true autonomous operations?
AB: Open, software-defined automation is the foundation of autonomous operation, and we’ve been perfecting the art of automating the process machine. We’ve developed and integrated different solutions over the years, like advanced process control and digital twins, to bring a high level of autonomy to the control facet. But bringing autonomy to the process or the control facet is not enough to achieve autonomous operations. What we must do today, with the help of some IT technology, is really start automating the control system itself, because these systems are increasingly complex, and take a lot of maintenance and insight. As mentioned, a lot of people are retiring and taking all this knowledge of how to make the system work with them, and without the system you cannot really have an autonomous operation.
We've been working with Intel and one of their founding members of a technology that allows us to do cybersecurity device onboarding without any human intervention. You can plug in a device on your network and a cyber-secured connection occurs. Then, you can onboard those devices
automatically, knowing they're not being tampered, a key element in a system that wants to be autonomous.
This lets us move the needle toward autonomy, and we’re working with Intel on the Policy Engine we brought into EcoStruxure Automation Expert, which adds another level of autonomy.
SW: Software-defined automation is key to autonomous operations, and Intel industrial processors, with CPU, NPU and integrated GPU optimized specifically for industrial use cases, provide an optimized hardware foundation for developing AI-enabled autonomous solutions.
It enables manageability, orchestration, security and real-time onboarding. Combined with EcoStruxure Automation Expert, we’re creating a software-defined solution to build an autonomous factory.
A display shows the integration of Intel's Policy Engine technology into Schneider Electric's EcoStruxure Automation Expert at the 2024 ARC Industry Leader Forum.
Source: Schneider Electric.
Phillips 66 digitalizes SIS management
Safety lifecycle software aggregates multiple data sources, and makes real-time updates accessible to all authorized users
by Angela Summers, president, SIS-Tech
WHEN it comes to digital transformation, refining and process operators face unique challenges. Unlike other industries already leveraging digital twins to improve design, enable predictive maintenance, and boost operational efficiency, the refining and process sectors often struggle with data management.
Many facilities have accumulated vast quantities of data over decades, some dating back more than 50 years. This information is often scattered across departments, locked in outdated formats, and managed by tools from different generations. Unfortunately, inconsistent, incomplete, incorrect and unclear data contribute to operator and maintenance mistakes. When they aren’t sure what’s right, confirmation bias can lead to poor decisions. In addition, data fragmentation makes digital transformation more complex because updating one document can render others outdated, creating uncertainty about which version is accurate.
“Everyone is looking to digitalization, but in refining, it’s not that simple,” says Nagappan Muthiah, PE, CFSE, safety instrumented systems (SIS) lead for industrial control systems at Phillips 66 (www.phillips66.com).
This complexity occurs because refining has a long history of siloing data in different file formats with local storage. Plus, true digitalization is more than uploading files to a centralized platform. It allows data to be shared in real time with all stakeholders to support effective decision-making.
Consequently, Muthiah and his team at Phillips 66 were recently tasked with digitally transforming its SISs. Their mission was to eliminate data silos, deliver enterprise-wide visibility, bring clarity to safety lifecycle management, and consolidate decades of dispersed and inconsistent safety data into a smarter, more practical system.
Digitalizing refining needs ROI
Headquartered in Houston, Phillips 66 operates nine refineries, and initially set out to pursue full lifecycle digitalization when it began its digital journey more than five years ago. Lifecycle digitalization traces safety information from
the cradle of the hazard registry to the grave of decommissioning. This would allow Phillips 66 to digitally replicate its safety system from front-end design through to commissioning. In theory, the process would deliver enormous savings in time and money by designing one unit, pressing a button, and reproducing it at other sites (Figure 1).
“It was intellectually satisfying to conceptualize digitalizing the entire process, including front-end design, operation and maintenance,” adds Muthiah. “However, we quickly realized the return on investment (ROI) wasn’t there.” This is because the true opportunity wasn’t in digitalizing front-end design data, but in focusing on the operations, maintenance and safety performance data of existing assets.
ROI is also difficult to track. While it’s obvious that giving staff useful, trustworthy, safety-critical details is worthwhile, it isn’t easy to quantify without identifying vulnerabilities and correlating them with historical losses. Investigating why safety layers were triggered is vital to understanding the bottom-line costs of mis-operations. For example, an event with no damage can still result in costly business interruptions. ROI also comes from reducing the risk of spurious trips and losses, identifying vulnerabilities in designs, and reducing over-work by settling on common, standard designs that cover production’s technology, architecture and requirements, and can be modified to align with business objectives.
Consequently, Muthiah and his colleagues decided to reorient their digitalization around SIS during operations and maintenance (O&M), where proactive decisions impact reliability, uptime and safety. This approach aligns with guidance from the American Petroleum Institute’s (API) Recommended Practice 754 that classifies safety metrics into four tiers. While Tier 1 and Tier 2 reflect incidents that already occurred, Tier 3 metrics act as leading indicators—revealing if a safety protection system was activated to prevent a potential event.
From documentation to real-time decisions
Phillips 66 adopted SIS-Tech’s SIL Solver Enterprise+ (SSE+), Version 2.6, functional safety lifecycle tool almost two years
ago for its instrumented safeguards, such as alarms, BPCS protection layers, burner management, rotating equipment protection, SIS, high-integrity protection, fire and gas, and equipment protection systems.
Originally created to calculate probability of failure on demand (PFD) and spurious trip rates (STR), SSE+ has evolved into an integrated, safety-management platform. It supports the SIS process from design and documentation to compliance and governance, enabling digital transformation, eliminating data silos, and providing visibility across enterprises. SSE+ also moved onto a web-server, browser-based platform to expand beyond safety management calculations. This lets it provide clause-by-clause analyses, showing compliance with IEC 61511, ISA 84.91.03 and ISA 84.91.01’s management system requirements.
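SSE+’s internal algorithms aren’t published, but the simplified low-demand approximations in IEC 61508 give a flavor of the math such tools automate. A minimal sketch, with an assumed failure rate and proof-test interval, and ignoring common-cause and proof-test-coverage factors:

```python
# Textbook low-demand PFDavg approximations for common voting schemes,
# per the simplified IEC 61508 equations. These ignore common-cause and
# imperfect proof testing, so they only illustrate the idea -- they are
# not SSE+'s actual calculations.
def pfd_avg(lambda_du: float, test_interval_hr: float, voting: str) -> float:
    x = lambda_du * test_interval_hr   # dangerous failures per test interval
    return {"1oo1": x / 2, "1oo2": x**2 / 3, "2oo3": x**2}[voting]

LAMBDA_DU = 2e-6   # assumed dangerous undetected failure rate, per hour
TI = 8760          # assumed annual proof-test interval, hours

for arch in ("1oo1", "1oo2", "2oo3"):
    print(f"{arch}: PFDavg = {pfd_avg(LAMBDA_DU, TI, arch):.2e}")
```

With these assumed numbers, a single sensor (1oo1) lands in the SIL 2 band, while the redundant 1oo2 and 2oo3 votes reach SIL 3 territory, which is the kind of result such tools report.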
“Before we started using SSE+, our safety data lived in siloed, digital formats,” explains Muthiah. “You could read it, but tracking, comparing or integrating this safety design basis was much harder.”
SSE+ makes new data visible in forms, tables and graphics appropriate for each target stakeholder in a standard data format. This information is aggregated to generate specifications, I/O lists and maintenance procedures based on the latest data. Every authorized user can access and print tables, forms and documents from one reporting interface.
Figure 1: To digitally transform the safety instrumented systems (SIS) at its nine refineries, Phillips 66 needed lifecycle digitalization that would replicate its SISs from design through commissioning among multiple sites, so it adopted SIS-Tech’s SIL Solver Enterprise+ (SSE+), V.2.6, functional safety lifecycle tool, which aggregates data from multiple processes, units and sites, makes it accessible to all authorized users, and updates it in real time on all stakeholder screens and documents.
For key metrics, SSE+ uses metadata to control data entry, which lets users see how many functions are classified as BPCS protection layers, SIS, fire and gas, etc.
SSE+ makes comparisons and data integration easier by eliminating barriers to up-to-date process safety information, including the hazard registry, functional requirements, design configuration, performance verification and maintenance procedures. It has four levels of data security for access and editing, and improves work efficiency and reduces errors by flagging missing information on the system list. Data can be changed in one place, and it’s updated in real time on all stakeholder screens and documents as soon as it’s entered. It also supports concurrent users, so multiple stakeholders can access information simultaneously, and make necessary adjustments much sooner than they could before, when they relied on Excel spreadsheets and other documents in multiple places.
Beyond making device changes immediately visible to all users, SSE+ can also update related documents more quickly than traditional methods. While updating four or five documents about instruments, interlocks, fire and gas systems, or equipment and environmental protection can typically take an hour or more, SSE+'s access to content enables it to make similar updates in just a couple of minutes.
To contribute to lifecycle digitalization, SSE+ also traces safety design, engineering, operation and maintenance procedures to specified loss events. When users evaluate changes to existing systems, they can easily access, review and update needed information. This shortens the time needed to update documentation, and makes executing functional safety assessments easier. SSE+ also supports comparing corporate, regional and site data, allowing optimization of safety system investments and reducing maintenance costs.
Centralized data reveals patterns
The secure, cloud-based architecture of SSE+ lets Phillips 66 centralize all SIS data from its refining assets. Rather than managing static reports in disconnected systems, the company’s teams now work in a dynamic environment, where safety data can be filtered, analyzed and compared across units and facilities. Updates made in one area are automatically reflected across all related documentation, ensuring accuracy and alignment from field operations to corporate safety audits.
With its SIS data structured and centralized, Muthiah reports that Phillips 66 is uncovering other patterns that were previously hidden. “It was a paradigm shift—going from exchanging documents to exchanging data,” says Muthiah. “Suddenly, we could slice, dice and act on our safety insights.”
Users can also optimize design and maintenance by comparing how similar equipment hazards are addressed at other sites. For example, SSE+’s dashboards compare safety
integrity level (SIL) ratings, function architectures, device technologies or test intervals across systems, sites, regions or organizations (Figure 2). Likewise, it lets Phillips 66 compare SIL ratings across similar systems, standardize processes, and ask deeper operational questions, such as why does one refinery require more SIL 2 functions than another? Is the risk profile accurate or are assumptions misaligned? What about this outlier data?
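As a hypothetical illustration of that slice-and-dice idea (SSE+’s own data model isn’t public), even a few lines of grouping logic are enough to surface the outliers Muthiah describes:

```python
# Invented records illustrating how grouping safety functions by site
# and SIL rating exposes outliers; this is not SSE+'s data model.
from collections import Counter

functions = [
    {"site": "Refinery A", "sil": 2}, {"site": "Refinery A", "sil": 2},
    {"site": "Refinery A", "sil": 1}, {"site": "Refinery B", "sil": 1},
    {"site": "Refinery B", "sil": 1}, {"site": "Refinery B", "sil": 3},
]

by_site = Counter((f["site"], f["sil"]) for f in functions)
for (site, sil), count in sorted(by_site.items()):
    print(f"{site}: {count} function(s) rated SIL {sil}")
# A site with unusually many high-SIL functions prompts the questions
# above: is its risk profile accurate, or are assumptions misaligned?
```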
Facing risks aids performance
Phillips 66 reports that ongoing efforts to access, analyze and visualize SIS data in aggregate are helping it become increasingly proactive in its safety management. As this work progresses, its teams are beginning to identify systemic issues and risk clusters, moving beyond merely addressing isolated failures.
“When you zoom out and leverage relevant data, you move beyond addressing isolated issues and start resolving root causes within the entire system,” adds Muthiah.
Phillips 66’s SIS digital transformation aligns with the Industry 4.0 Maturity Index developed by the National Academy of Science and Engineering (en.acatech.de) in Germany. Its framework outlines six stages of digital maturity:
• Computerization—digitization of analog systems
• Connectivity—systems and data connected across departments, enabling communication
• Visibility—real-time insights into what’s happening
• Transparency—root-cause analysis that explains why things are happening
• Predictability—anticipating outcomes of future issues or performance
• Adaptability—autonomous response to changing conditions
After several years of focused effort—and with the right tools in
Figure 2: SSE+ centralizes, filters, analyzes and compares Phillips 66’s SIS data from its refining assets, and automatically reflects it across all related documentation, while its dashboards compare SIL ratings, function architectures, device technologies or test intervals across systems, sites, regions or organizations. Source: SIS-Tech
place—Phillips 66 sees itself firmly in Stage 2 and advancing toward Stage 3. At that level, SSE+ will enable comparison of evergreen, static safety design data, which reflects how systems should operate, with real-time operational data from the field, to generate Tier 3 metrics aligned with API RP 754. The next step, Stage 4, is where Muthiah believes more efficiencies will emerge.
“We believe Stage 4 will be a sweet spot, where digitally mapped data helps us make decisions, not just based on theoretical analysis, but on real-world analytics that further improve our safety and reliability,” adds Muthiah.
Digitalizing safety = excellence
Future goals for Phillips 66 include extending insights from SIS into equipment protection systems (EPS), where greater digital transparency can enhance safety and plant efficiency. As operational data continues to mature, the company expects to make even more impactful, real-time decisions.
“The next step would be to expand applications to include asset protection and production loss,” concludes Muthiah. “If a piece of equipment is
tripping, there’s a safety aspect, but also a commercial one. If your unit is down, you’re not making money. By looking at the metrics, we expect to improve uptime.”
Rather than attempting a sweeping overhaul, Phillips 66 took a targeted, outcome-driven approach to digital transformation. In doing so, the company transitioned from fragmented, document-heavy SIS management to a streamlined, data-centric platform. The results were greater efficiency, compliance validation and enterprise-wide visibility. Its journey offers a practical model for how legacy-heavy industries can evolve with clarity, purpose and measurable impact.
“We’re not digitalizing for its own sake,” explains Muthiah. “We’re focused on what improves safety and reliability.”
Angela Summers is president of SIS-Tech and a member of Control’s Process Automation Hall of Fame. She’s a licensed professional engineer with more than 30 years of experience in SIS, and contributes to standards from ISA, IEC and others. She can be reached at 713-909-2100, info@sis-tech.com, or via sis-tech.com.
Inland Empire wastewater plant multitasks
Southern California utility standardizes on platforms for membrane-bioreactors and solids treatment—but maintains 10-hour daily staffing and lights-out operations by Jim Montague
HOW do you maintain and preserve water in the desert? Cooperation.
This was the most important strategy used by the seven municipalities in southwestern San Bernardino County, Calif., when they joined the Inland Empire Utility Agency (IEUA) after it was founded in 1950. They banded together because water resources are so limited in southern California that its residents had to create IEUA as an independently elected district, which could import water from the state’s northern regions and collaborate on solving wastewater treatment issues.
Thankfully, this collaborative spirit not only continues today, but is more important than ever in enabling IEUA to overcome ongoing changes in its population, demographics, influent profile and the technologies it uses to fulfill its responsibilities.
“IEUA is unique compared to other utilities because we do it all, including importing water, wastewater treatment,
recycling water, groundwater recharging, renewable energy generation and composting,” says Alyson Piguee, external and government affairs director at IEUA. “However, IEUA relies on importing 30% of its water from California’s State Water Project, so we’re very dependent on local supplies, and we’re always seeking new ways to develop and store them.” (Figure 1)
The utility’s staff provided a tour of their Chino headquarters and the Regional Plant 5 (RP-5) upgrade, presently under construction, during Automation Fair 2024 in Anaheim. RP-5 presently serves 200,000 residents, and will take over RP-2’s solids processing duties once RP-5’s upgrade is complete.
Just the facts
The 243-square-mile district has 935,000 residents, and includes the cities of Chino, Chino Hills, Fontana, Montclair, Ontario, Upland and the Cucamonga Valley Water District. Its typical performance includes:
• Importing more than 50,000 acre-feet (AF) of water annually in non-drought years;
• Using more than 1,200 connections to produce more than 34,000 AF of recycled water per year;
• Treating more than 51 million gallons of wastewater per day (mgd);
• Operating 19 groundwater recharging sites with 46 basins;
• Generating capacity of 6 megawatts (MW) of renewable energy; and,
• Producing more than 230,000 cubic yards (cy) of compost per year.
“RP-5 came online in 2003, while RP-2 started operations in the 1960s on land leased from the U.S. Army Corps of Engineers. This site is also located close to the Prado Dam’s spillway that the Corps is altering by 10 feet, which will put RP-2 in a 180-year floodplain,” adds Brian Wilson, PE, principal engineer at IEUA. “Other changes driving the need for RP-5’s present revamp include the fact that
Inland Empire Utility Agency (IEUA) recently undertook a $330 million project to expand its Regional Plant-5 in Chino, Calif., from 16.3 mgd to 22.5 mgd, build a new biosolids facility, and shift part of its treatment process to a membrane-bioreactor (MBR). To balance its water importing, desalting, recycling, groundwater recharging and other tasks, RP-5 standardized on Rockwell Automation’s PlantPAx DCS, power solutions and ThinManager software on VMware virtual servers.
Source: IEUA, Jim Montague and Endeavor Business Media
land use in our service areas is transitioning from mainly dairy and agricultural to residential, commercial and industrial, which require more and stronger water treatment.”
Solids treatment revamps
However, for RP-5 to assume RP-2’s role, it’s expanding its existing plant from 16.3 mgd to 22.5 mgd and building a new biosolids facility. Construction on the $330 million project began in 2021 and is expected to last another year and a half.
To further strengthen RP-5’s water treatment beyond expansion, Wilson added the revamp is also shifting part of its treatment process from traditional, gravity-based, secondary clarification and tertiary filtering to a membrane-bioreactor (MBR) process that’s more efficient. This will require adding some new influent pumps, debris screens, power centers, two primary clarifiers, new odor controls, and added channels and other modifications to its aeration basin.
This will let RP-5 ramp up its treatment capability from a 6,000 milligrams per liter (mg/L) concentration of solids to 8,000 mg/L. Likewise, its new solids dewatering and sludge processing applications are adding blowers, boilers, power centers, digesters and a solids-thickening building.
Keeping the lights out
Even though it’s expanding treatment volumes and adding solids processing tasks, IEUA also wants to maintain present staffing levels. Since 2006, process automation has allowed RP-5 and the utility’s other plants to be manned by humans for only 10 hours per day.
“Process automation needs continuous updating, including some devices at their end of life and others that are increasingly unsupported,” explains Wilson. “Either way, they have to be switched over.”
Loren Shipley, account manager for IEUA at Rockwell Automation, agrees,
“RP-5’s legacy distributed control system (DCS) wasn’t supportable, didn’t provide enough training, and made the utility struggle to run lights out for 14 hours. Standardizing on our PlantPAx DCS, power solutions and ThinManager software on VMware virtual servers resolved these difficulties, while also allowing for future changes and expansion, and even adding artificial intelligence (AI) capabilities.”
Consequently, Wilson reports that RP-5’s additional process automation components include:
• 64 PLCs including ControlLogix for Process;
• 105 variable-speed drives (VFD);
• 148 network switches;
• 2,603 digital inputs;
• 2,002 digital outputs;
• 1,000 analog inputs;
• 712 analog outputs;
• Six virtual machines (VM); and,
• 11 thin clients.
“We’ve got 5,300 new pieces of equipment, such as pumps, valves, VFDs and air-conditioning units, but many others consist of panels that have 15-20 components inside,” explained Wilson. “We’re increasing instruments and other equipment by 200% each, and increasing I/O and PLCs by 250% each. This is a lot more process unit intensity and facility complexity.”
This automation hardware and software helps RP-5 and IEUA maintain its 10-hour daily staffing program, even though many maintenance, service and cleaning tasks are still manual and must be performed directly onsite. Wilson reports that many users can receive alerts remotely, check chemical parameters, dial in to fix many problems, and even automate some on-off tasks.
Facilities in the field
When RP-5 starts its new solids processes, Wilson reports they’ll be controlled by seven more thin clients on 12 more screens. The six new power
centers energizing them will use three thin clients on a dozen screens.
The two MBRs under construction at RP-5 will replace much of its traditional, secondary, aerating and clarifying with a new tertiary treatment method that uses finer, fiber-filtering screens, balances the bacteria’s environment for optimal performance, and reportedly produces 10 times higher quality effluent.
Meanwhile, the new solids area at RP-5 will take in wastewater at 6% solids; digest it at 100 °F; produce methane to power onsite boilers; release it at 2% solids; spin and dewater it until it’s 20-26% solids; and further process it into sludge that can be used for composting.
Duality for reliability
Thanks to IEUA’s duality policy, as many as half of RP-5’s new controls and components are held in reserve, so the overall system can maintain continuous reliability, and even perform upgrades without interrupting normal operations and expected performance. For example, the expanded plant will have 87 PLCs, but will usually only need to operate 43 or 44 for typical conditions. This duality strategy also relies on some redundant switches, servers and virtual servers running parallel redundancy protocol (PRP).
Cybersecurity is maintained via a virtual local area network (VLAN), firewalls and data diodes. The thin clients and ThinManager also restrict unauthorized access, while Rockwell’s FactoryTalk Directory software performs authentication and authorization, and FactoryTalk AssetCentre software contributes similar functions.
“I think we’ve done about 100 tours for representatives of our other municipalities and service agencies, and for many other community members,” said Wilson. “One retired police chief said she never thought about where wastewater went before, but was glad to know it now.”
Gimme that old time flow
Control ’s monthly resources guide
SIZE GUIDE, VORTEX VIDEO
This website, “Vortex flowmeters,” has multiple content sources, including a selecting and sizing guide, a video on the vortex flow measuring principle, articles on flow measurement for liquids, gases and steam, and a steam generation and distribution handbook. It’s at www.us.endress.com/en/field-instruments-overview/flow-measurement-product-overview/vortex-flowmeters
ENDRESS+HAUSER www.us.endress.com
VIDEO AND SHEDDING HISTORY
This online article, “What is a vortex flowmeter?,” starts with a short video on how they work, and traces vortex-shedding history and Theodore von Kármán’s principles, as well as flowmeter design, style, accuracy and rangeability, and installation recommendations. It’s at www.dwyeromega.com/en-us/resources/vortex-flow-meter
DWYER OMEGA www.dwyeromega.com
CURVES FOR FLOAT SHAPES
This five-minute video, “Flow measurement (rotameters),” is part of a process-engineering lecture series by Kevin Harding at the University of the Witwatersrand, Johannesburg, South Africa. It addresses float issues, orifice plate equations, and performance results for different float shapes. It’s at www.youtube.com/watch?v=4vEqlQF2G2A
PROCESS ENGINEERING FUNDAMENTALS (KEVIN HARDING) www.youtube.com/@kgharding29
CALIBRATING ROTAMETERS
Because rotameters, or variable-area flowmeters, employ floats and other mechanical components, they need routine calibration to maintain accuracy. This online article, “Rotameter calibration,” shows how to do it, what equipment is needed, how to connect/adapt flow paths, several scenarios, and a video about specific solutions. It’s at www.fluke.com/en-us/learn/blog/calibration/rotameter-calibration
FLUKE
www.fluke.com
BUBBLERS, WEIRS, FLUMES
This hour-long video, “Open-channel flow measurement” by Darrell Kuta, flowmeter business development manager at Teledyne ISCO, covers sensor types, ultrasonics, radar level, bubblers, submerged pressure probes, flow calculations, primary devices, and weirs and flumes. In addition, it covers the sensors that serve them, such as velocity and non-contact radar sensors. It’s located at www.youtube.com/watch?v=TCTS53JeV9I
TELEDYNE ISCO
www.teledyneisco.com/en-us
CHANGING OPEN CHANNELS
This online article, “Measuring open channel flows” by Andrew Helbling, product specialist at Tracom, covers the Manning formula, time gravimetric, area-velocity, dilution and other issues. It’s at tracomfrp.com/measure-flows-open-channels
TRACOM INC. tracomfrp.com
TURBINE BASICS AND RATES
This online article, “What is a turbine flowmeter?,” answers some basic questions, and covers high-flow-rate models for wastewater, advantages and limits, application recommendations, comparison with other methods, and installation and accessories. It’s at apureinstrument.com/blogs/what-is-turbine-flow-meter
APURE
apureinstrument.com
COMPARE TO PADDLE WHEEL
This multi-part blog post, “The ultimate guide to turbine flowmeters: operation, design and applications,” covers functions, construction, common applications and sensors, and a comparison with paddle-wheel flowmeters. It’s located at iconprocon.com/blog-post/the-ultimate-guide-to-turbine-flow-meters-operation-design-and-applications
ICON PROCESS CONTROL iconprocon.com
MAGNETICS EXPLAINED
This online article, “Magnetic flowmeter explained/working principles,” covers Faraday’s law, signal transmission, conductive fluids and installation considerations. It even comes with a complementary 10-minute video on the same principles. They’re both at www.realpars.com/blog/magnetic-flow-meter
REAL PARS www.realpars.com
PROS, CONS, COMPARISONS
This webpage, “Understanding magnetic flowmeters,” covers functions, advantages, accuracy, straight-pipe requirements, turndown ratios, sizing, limitations and applications, as well as comparisons with Coriolis, mass flow and ultrasonic flowmeters. It’s at koboldusa.com/articles/type-of-flowmeters/understanding-magnetic-flowmeters
KOBOLD
koboldusa.com
This column is moderated by Béla Lipták, who also edits the Instrument and Automation Engineers’ Handbook, 5th edition, and authored the recently published textbook, Controlling the Future, which focuses on the control of AI and climate processes. If you have a question about measurement, control, optimization or automation, please send it to liptakbela@aol.com.
When you send a question, please include your full name, job title and company or organization affiliation.
Identifying orifice tap locations
Orifice-based flowmeters are popular, but their basics are still very much misunderstood
Q: The software I use asks for pressure tap locations, but gives no guidance on what location is ideal for which application. Yet, if I change the tap locations, the algorithm gives a different flow measurement, even if nothing else is changed. The same happens if I change the assumed beta ratio (orifice bore diameter divided by pipe diameter). I would appreciate your guidance.
G. BALACHANDRAN instrument engineer bala46@gmail.com
A1: Greg Shinskey would have called such a software package one that operates on “garbage in, garbage out.”
The first study of orifice behavior was reported in 1913 in the Handbook of Natural Gas by the U.S. Geological Survey. More than a century later, orifice-based flowmeters are very popular. Today, their measurements are transmitted by a variety of sophisticated methods, including wirelessly. Their signals are displayed on fancy screens, and used by sophisticated, model-based control (MBC) loops. Yet, as this question shows, their very basics are still not fully understood.
If differential pressure (DP) flowmeters are calibrated and installed properly, the total error in the measurement will be about 1% of upper range limit (URL) over a flow range of 3:1. For obvious reasons, some manufacturers advertise their accuracies as if they were better by giving accuracy values based, not on the URL, but on some lower value. Another way to overstate accuracy is by not mentioning that the claimed accuracy is correct only if the meter is calibrated; otherwise, it will not be.
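The arithmetic behind that 3:1 limit is worth making explicit: if the error is fixed at 1% of URL, it grows as a percentage of reading as flow falls. A short sketch of that relationship:

```python
# A fixed 1% of URL error, restated as percent of actual reading.
# At one-third of full flow, it's already 3% of reading, which is why
# DP meters are typically quoted over about a 3:1 turndown.
FULL_SCALE_ERROR = 0.01   # 1% of URL, as stated above

for fraction_of_full_flow in (1.0, 0.5, 1/3, 0.25):
    pct_of_reading = FULL_SCALE_ERROR / fraction_of_full_flow * 100
    print(f"Flow at {fraction_of_full_flow:4.2f} x URL -> "
          f"error = {pct_of_reading:.1f}% of reading")
```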
For these reasons, I won’t limit my answers to the question concerning the available tap location options, but will also make a few additional comments using Figure 1.
The listings include:
• Corner taps. In this configuration, both upstream and downstream pressures are detected at the corners on the two sides of the plate. In the U.S., this design is used only if the pipe diameter is under 2 inches. In Europe, this configuration is popular for all sizes. However, in larger pipes, the upstream tap is often moved one pipe diameter (1 D) upstream from the plate.
• Flange taps. In the U.S., these are popular when the pipe diameter exceeds 2 inches, but not below. When used, both taps are located half the flange thickness from the orifice.
• Vena contracta taps. The vena contracta (VC) is the point where the diameter of the flowing fluid is at its minimum. The distance from the orifice to the VC varies with the beta (β) ratio multiplied by the pipe diameter, and can range from 0.35 D to 0.85 D. This is the distance between the downstream tap and the orifice. Because this distance is not constant, but varies with the beta ratio, such taps can only be used if the pipe diameter exceeds 6 inches, because the tap location could otherwise fall under the flange.
• Radius taps. These locate the upstream tap at one pipe diameter (1 D) and the downstream tap at half a pipe diameter (D/2) from the orifice plate. As such, the distances don't change as a function of the location of the VC point.
• Pipe taps. In this configuration, the upstream tap is located at 2.5 D, while the downstream one is at 8 D. The measurement error of this configuration is about 5% greater than the other locations.
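Whichever tap pair is selected, the measured DP converts to flow through the same ISO 5167-form orifice equation. Here is a minimal sketch with assumed values (a 4-inch line, a beta ratio of 0.6 and water at 1,000 kg/m3):

```python
# Orifice mass flow from DP, in the ISO 5167 form. All numbers are
# assumed for illustration; cd and epsilon depend on tap geometry and
# fluid, which is exactly why tap location matters to the coefficient.
import math

def orifice_mass_flow(dp_pa, bore_m, beta, rho, cd=0.61, epsilon=1.0):
    """Mass flow (kg/s) through a sharp-edged orifice.
    cd: discharge coefficient; epsilon: expansibility (1.0 for liquids)."""
    area = math.pi / 4 * bore_m**2
    return (cd / math.sqrt(1 - beta**4)) * epsilon * area * math.sqrt(2 * dp_pa * rho)

D = 0.1023   # 4-in. schedule 40 pipe inner diameter, meters
BETA = 0.6
flow = orifice_mass_flow(dp_pa=25_000, bore_m=BETA * D, beta=BETA, rho=1000.0)
print(f"Mass flow: {flow:.1f} kg/s")   # about 13.7 kg/s
```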
I asked some of the highly experienced members of my group of experts to add their own comments on this question.
BÉLA LIPTÁK liptakbela@aol.com
A2: This is a good question, pointing out a possible weakness in software instructions. When we measure compressible fluids, it's necessary to correct for expansion of the fluid. The DP sensor knows only the difference across the primary element. If the operating pressure is much greater than the measured differential, then selecting the upstream or downstream tap actually makes little difference. In real life, pressure is often not measured, but assumed to be known and constant. It's important to correct the calculation if the operating pressure will change, or where the value of the flowing fluid is high. Some modern DP transmitters also report operating pressure, and some provide temperature measurement for density determination.
CULLEN LANGFORD, PE instrumentation specialist CullenL@aol.com
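As a minimal illustration of the compressibility point above (my sketch, not Cullen's, assuming ideal-gas behavior and made-up design conditions), here's what happens to an uncorrected DP flow reading when operating pressure drifts from the value assumed at design time:

import math

R_UNIV = 8.314  # J/(mol*K)

def gas_density(p_abs_pa, temp_k, molar_mass_kg_mol):
    # Ideal-gas density; real applications add a compressibility factor Z
    return p_abs_pa * molar_mass_kg_mol / (R_UNIV * temp_k)

rho_assumed = gas_density(10e5, 313.15, 0.016)  # design basis: methane at 10 bar(a)
rho_actual = gas_density(9e5, 313.15, 0.016)    # plant has drifted to 9 bar(a)

# DP flow is proportional to sqrt(density), so the uncorrected reading is off by:
error_pct = (math.sqrt(rho_assumed / rho_actual) - 1.0) * 100.0
print(f"{error_pct:.1f}% flow error if density isn't corrected")  # about 5.4%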
A3: Flange taps, located 1 inch upstream and downstream of the faces of the orifice plate, are the most common tap configuration recognized by American Gas Association (AGA) specifications. Corner taps at the faces of the orifice plate are normally used in line sizes smaller than 2 inches. Pipe taps are usually located 2.5 pipe diameters upstream and 8 diameters downstream of the orifice plate. Vena contracta taps are located one pipe diameter upstream of the orifice plate, and at the vena contracta on the downstream side. They're not recommended when a variety of orifice bore sizes are required to meet flow requirements.
A4: The pressure downstream of an orifice plate (or any restrictor) recovers from its minimum (at the vena contracta) to some portion of the upstream value. The pressure upstream of the plate will be relatively constant (minus line loss) right up to the orifice. In other words, the downstream tap location must be chosen to closely match the calibration in use, but the upstream tap location normally doesn't need to be. It just needs to be in an area free of flow-profile disturbance.
AL PAWLOWSKI, PE CEE staff, retired
A5: The ISO 5167-1:2003 standard uses the density at the upstream tapping (ρ1), and for compressible fluids, uses the pressure at the upstream tapping (p1) to calculate density. Some earlier standards used the downstream tapping point for the reference pressure, and some used an average value between p1 and p2. Naturally, the pressure at p2 or pavg gives a value that includes some or all of the Δp from the flow through the orifice as well as p1. Consequently, the orifice coefficients differ, not only due to differences in pressure recovery depending on tap geometry, but also depending on the location of the reference-pressure tap. You don't have identical process parameters if you're measuring reference pressure at different points.
IAN H. GIBSON process, control and safety engineering consultant gibs0108@optusnet.com.au
Figure 1: Location options for orifice pressure taps
Face up to flexible interfaces
Touchscreens, industrial PCs and HMI software gain new sizes and shapes
HEADLESS HMI FOR ANY DISPLAY SIZE
CM5-RHMI headless, HDMI-enabled HMI has the functions of C-more CM5 touch-panel HMIs, but without display size limits. Users can also skip the local display, and use the remote-access feature that supports any Windows PC with a browser, or Apple iOS and Android smartphones and tablets with the C-more Remote HMI mobile app. CM5-RHMI supports many screen resolutions, provides an SD card slot for log files, project memory or graphics, and offers 90 MB of user memory.
AUTOMATIONDIRECT
www.automationdirect.com/headless-hmi
NEMA 4X AND IP68 IN 4-, 7- AND 10-INCHES
Visual Panel 200 HMIs withstand tough conditions, and help the equipment they support operate at peak performance. With 4-, 7- and 10-inch screen sizes, as well as a daylight-readable, 10-inch model, these panels are NEMA 4X rated, IP68 certified, and meet international compliance standards for CE, UL and cUL. They're panel- or VESA-mountable, have a fanless design and no battery, and are maintenance-free for installation in most control panels.
WAGO www.wago.com
HMI FOR HAZARDOUS LOCATIONS
VisuNet FLX HMIs from Pepperl+Fuchs consist of touchscreen displays paired with computing modules for Zone 2/22 (Div. 2) and non-Ex applications. Display sizes include 21.5 in. (1,920 x 1,080), 19 in. (1,280 x 1,024) and 15.6 in. (1,920 x 1,080), featuring chemically resistant LCD screens with up to 178° viewing angles. Units offer IP66/Type 4X-rated stainless-steel bezels, passive cooling, and vibration resistance per EN 60068-2-6. They're also ATEX, IECEx and UL approved.
PEPPERL+FUCHS
www.pepperl-fuchs.com
7-INCH, COLOR TOUCHSCREEN
CR1000 automation HMI from Red Lion provides protocol conversion and connectivity choices, along with a 7-inch, TFT LCD color touchscreen; serial, Ethernet and USB connectivity; and 24 VDC power. It combines an ever-expanding list of more than 300 industrial drivers with the Crimson 3.1 development platform, so it can easily scale and adapt as requirements change. CR1000's applications include factory automation, OEM machines, food and beverage, and water/wastewater.
NEWARK tinyurl.com/4xns8ddu
IPC COMBINES ATX BOARD AND GPU
C6675 control-cabinet, industrial PC (IPC) combines an ATX motherboard and GPU cards in one unit, giving it properties typical of an industrial server. It has many expansion slots, including two PCIe-x1, two PCIe-x4, one PCIe-x16 and two PCI for full-length plug-in cards with a total of up to 300 W power. Two removable SSD or hard-disk frames, in conjunction with the onboard RAID controller, form a RAID-1 system with two mirrored hard disks or SSDs.
BECKHOFF www.beckhoff.com/c6675
HANDHELD HMI HAS 5.7-IN VGA DISPLAY
GT25 graphic operations terminal (GOT) from Mitsubishi Electric Automation is a compact, portable HMI that lets users operate machines while standing next to them. It's designed to enhance user operation, complements Mitsubishi PLCs, and at 0.79 kg is light enough to be held in one hand. GT25 also features a built-in, high-resolution, 5.7-inch VGA display, 32 MB ROM and 80 MB RAM, and an available GT14H-50ATT wall-mount attachment.
MISUMI us.misumi-ec.com/vona2/detail/222300118672
IPC WITH HI-RES WIDESCREEN
NYE industrial PCs are designed on Sysmac platform principles to boost machine performance while empowering users with data. They have a powerful CPU, broad connectivity, and proven housings. NYEs feature widescreens in all models, including 7, 9, 12 and 15 inches; more than 16 million available colors; 1,280 x 800 high-resolution displays on the 12- and 15-inch models; an Intel Atom quad-core E3940 processor; and 512 GB of CFast, 3D triple-level cell (TLC) flash memory.
OMRON
Panel-mount, 23.8-inch monitor offers a rugged display with 1,920 x 1,080 resolution and 350-nit brightness. Designed for harsh environments, it features a slim, 46-mm depth, and is available with either a flush front or a stainless-steel bezel. The monitor is NEMA 4/4X/IP65/IP66 and IP69K rated, making it ideal for washdown areas. Options include resistive or PCAP touchscreens and tempered glass. It's Class I, II, III, Div. 2 certified, and a five-year warranty is included.
HOPE INDUSTRIAL SYSTEMS
678-762-9790; www.hopeindustrial.com
CP600 visualization panels provide multiple options to control and monitor automation applications. They function as either standard HMIs or web-based panels. Pre-developed faceplates are included, allowing CP600 to talk directly to drives for status and configuration. Options range from 4-inch, economy panels to 21-inch, extreme-temperature models. They provide scalable, robust and flexible HMIs for fulfilling multi-system visualization requirements.
ABB
new.abb.com/plc/control-panels/cp600
SOFTWARE LINKS AUTOMATION AND CONTROL
DeltaV automation platform is expanding with Version 15's Feature Pack 3, which improves availability of Ethernet-based field device networks by supporting Profinet S2 devices and system redundancy for increased resilience during controller switchovers. It also provides more operator instruction with the enhanced DeltaV Simulate training and development suite, and enables robust change management for human-machine interfaces (HMI).
EMERSON www.emerson.com
PAPERLESS RECORDER + MULTI-POINT PANEL
Touch Screen GX10/GX20 paperless recorder has a multi-point touch panel. With features like scrolling, panning, zooming and even writing freehand messages, its dustproof and waterproof display ensures durability. GX series also provides custom graphics, and a range of communication protocols for full compatibility with network infrastructures. Operators can easily access and retrieve past data, with automatic email and FTP notifications.
YOKOGAWA tinyurl.com/4t4tm2p2
REDEFINED SCADA SOFTWARE
Genesis, Version 11, SCADA automation and digitalization software has unlimited licensing and scalability to handle any system size, as well as visualization, alarm management and system control. It also features an historian, security, universal connectivity and integration with Mitsubishi devices. Enabling rapid deployment, Genesis, V.11, also provides future-proof support, extensibility, asset modeling with unified namespace (UNS), and supports 3D graphics in the browser.
ICONICS INC., A GROUP COMPANY OF MITSUBISHI ELECTRIC iconics.com/en-us/Products/GENESIS-version-11
GREG MCMILLAN
Gregory K. McMillan captures the wisdom of talented leaders in process control, and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams, and (web-only) Top 10 lists. Find more of Greg's conceptual and principle-based knowledge in his Control Talk blog. Greg welcomes comments and column suggestions at ControlTalk@endeavorb2b.com.
Smart manufacturing for process plant excellence—part 2
Real-time optimization is often misunderstood. So, here are the basics
WE gain insightful practical guidance from Umesh Mathur, P.E., Houston, Texas, on the use of real-time optimization to achieve excellence in process plant automation.
GREG: What is real-time optimization (RTO)?
UMESH: Unfortunately, "optimization" is one of the most misused terms in our industry. Most engineers think improving operations by trial-and-error (i.e., hit-or-miss, heuristic schemes) optimizes their process, but it's hardly ever true. Each individual attempting process improvement will reach a different result.
However, the term optimization properly refers to a system in which a first-principles mathematical model is created to accurately describe process behavior. This model should mirror both the independent (operator-set) and dependent (unit-output) variables. In a distillation column, reflux flow rate and reboiler heating-medium flow are independent variables (IVs), while product purities are dependent variables (DVs).
For large applications, such models are expressed in open-equation form, where all terms are moved to the left side of each model equation, and the right side is identically zero. Large processes can have a million equations. However, most equations contain only a few variables, so the incidence matrix is generally extremely sparse. These modeling systems are also called equation-based models. Open-equation process modeling enables simultaneous solution of all unit operations, including material and energy recycle streams.
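As a toy illustration (not taken from any vendor package), here's a two-equation model written in open-equation form, with every term moved to the left side and the right side identically zero, solved simultaneously. A real plant model would have thousands to millions of such residuals, with a very sparse incidence matrix:

from scipy.optimize import fsolve

def residuals(x):
    f, t = x  # hypothetical flow and temperature variables
    return [
        2.0 * f + 0.1 * t - 25.0,  # e.g., a material-balance residual = 0
        f - 0.5 * t + 4.0,         # e.g., an energy-balance residual = 0
    ]

# Both equations are solved simultaneously, recycle couplings included:
print(fsolve(residuals, x0=[1.0, 1.0]))  # -> [11. 30.]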
Nonlinear mathematical RTO software can be connected to an open-equation model. The software is designed to work with a sparse-matrix representation of the process model. The optimizer iteratively seeks to maximize an objective function, such as overall plant profit, by systematically manipulating the IVs. At each iteration, it checks the model outputs to ensure all DVs (safety, environmental, process, equipment and product-quality constraints) don't violate their respective upper and lower allowable limits. It adjusts the IVs at each iteration to keep the DV constraints satisfied, while also seeking to maximize profit.
This process stops when all variables are within their limits and no further IV changes will improve profit. The optimized values of all IVs and DVs are then defined. This is the optimal, feasible solution because all variables are within their respective upper and lower limits, and profit can't be increased further.
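Here's a minimal sketch of that constrained search, using a generic nonlinear programming solver. The profit function, purity model and IV bounds are hypothetical stand-ins for a rigorous plant model, but the structure is the same: maximize profit over the IVs while the DV constraints stay feasible.

from scipy.optimize import minimize

# IVs: x[0] = reflux flow, x[1] = reboiler duty (hypothetical units)
def neg_profit(x):
    # Solvers minimize, so maximize profit by minimizing its negative
    product_value = 120.0 * (1.0 - 0.5 / (1.0 + x[0]))
    energy_cost = 3.0 * x[1]
    return -(product_value - energy_cost)

def purity_constraint(x):
    # DV: a toy purity model that must stay at or above 0.95
    purity = 1.0 - 0.2 / (x[0] * x[1] + 1.0)
    return purity - 0.95  # >= 0 when feasible

result = minimize(
    neg_profit,
    x0=[2.0, 2.0],
    bounds=[(0.5, 10.0), (0.5, 10.0)],  # IV upper/lower limits
    constraints=[{"type": "ineq", "fun": purity_constraint}],
)
print(result.x, -result.fun)  # optimal IVs and the maximized profit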
Commercially successful closed-loop, real-time optimization (CLRTO) software examples include AspenTech's RT-OPT, Aveva's ROMeo, Honeywell's Nova and Yokogawa's Dynamic Real-Time Optimizer (RT-OP).
CLRTO refers to further refinement where:
• The plant model is first recalibrated against actual plant data whenever steady-state conditions are found to exist;
• The upper and lower limits (set by operations or engineering) for all IVs and DVs are taken from the DCS;
• Economic variables used to define the profit function (unit values of products, feedstocks and utilities) are updated;
• The optimization is run to maximize profit, while observing all constraint limits for the IVs and DVs; and
• These optimized values for the IVs and DVs are downloaded to the lower-level multivariable controllers, such as AspenTech’s DMC, Honeywell’s RMPCT and Emerson’s DeltaV PredictPro. These controllers drive the process, minute-by-minute, to the optimal condition by manipulating underlying DCS setpoints. All real-time safety, environmental, process, equipment and product quality constraints are observed when computing the future trajectory of the controlled variables.
Accordingly, the term CLRTO is restricted to systems where a rigorous optimization of the first-principles mathematical model is first carried out, and the optimized results are downloaded to the underlying, multivariable control layer for real-time execution.
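In code-sketch form, one CLRTO execution follows the steps above. Every name here is a hypothetical placeholder (real systems call the vendor's optimizer and MPC interfaces), with toy stand-ins so the outline runs end-to-end:

def detect_steady_state(trend, tol=0.5):
    # Hypothetical test: recent measurements vary less than a tolerance
    return max(trend) - min(trend) < tol

def run_clrto(trend, limits, prices, optimize, download_to_mpc):
    # One pass: steady-state check -> reconcile/optimize -> download targets
    if not detect_steady_state(trend):
        return  # not steady; try again on the next scheduled cycle
    targets = optimize(limits, prices)  # rigorous model plus NLP solver
    download_to_mpc(targets)            # MPC then drives the plant there

# Toy stand-ins so the sketch executes:
optimize = lambda limits, prices: {"reflux_target": min(8.0, limits["reflux_hi"])}
run_clrto(
    trend=[100.0, 100.1, 99.9],  # a key flow's recent history
    limits={"reflux_hi": 10.0},  # IV/DV limits read from the DCS
    prices={"propylene": 410.0}, # current economics
    optimize=optimize,
    download_to_mpc=print,
)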
GREG: Where have CLRTO applications been deployed successfully?
UMESH: CLRTO applications have been deployed successfully for more than 35 years in petrochemical and refining industries.
GREG: What are the benefits of CLRTO applications in the process industries?
UMESH: Refinery and petrochemical CLRTO applications have shown documented paybacks ranging from less than a month to two years, depending on how well the underlying layer of multivariable controls was implemented. Long-term success requires a high level of commitment by managers and owners to train and retain engineering staff capable of maintaining and enhancing such systems.
Any RTO system that doesn’t connect to an underlying layer of multivariable, model-predictive controllers (MPC) is called an open-loop or advisory system. Open-loop applications hardly ever provide sustained economic benefits. Closing the RTO loop with the MPC layer is, therefore, an essential requirement for CLRTO.
GREG: What ensures long-term success for CLRTO applications?
UMESH: This is a crucial question, often insufficiently emphasized during initial project approval. Open-equation process modeling for large units requires using proprietary software that lets users:
• Generate model equations;
• Connect models to sparse-matrix CLRTO optimization software. All underlying unit operations must first converge tightly, so the overall, open-equation model (including all recycle streams) can converge successfully;
• Define economic objectives; and
• Define upper and lower limits of all IVs and DVs.
Over time, process parameters, such as heat exchanger fouling coefficients, kinetic terms in reactor models, and compressor and pump efficiencies, can change gradually. Before commencing an optimization run, CLRTO software must detect steady state, and adjust process parameters automatically to match current process conditions (data reconciliation and parameter estimation). These tasks are most conveniently performed using one model that employs an open-equation modeling framework for simulation, data reconciliation/parameter estimation, and economic optimization.
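For example, a drifting parameter can be re-estimated by least squares against current plant data before each optimization run. This sketch fits a single hypothetical heat-transfer coefficient to a toy duty model; real data reconciliation and parameter estimation solve the same kind of problem over the full open-equation model:

from scipy.optimize import least_squares

# Hypothetical reconciled plant data: duty (kW) at several flow rates
flows = [10.0, 20.0, 30.0]
measured_duty = [48.0, 93.0, 135.0]

def residuals(params):
    (u_coeff,) = params  # the coefficient that fouls or drifts over time
    return [u_coeff * f - m for f, m in zip(flows, measured_duty)]

fit = least_squares(residuals, x0=[5.0])
print(fit.x)  # updated coefficient, ready for the next optimization run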
Open-equation modeling and optimization requires much more user training than sequential-modular simulation software. It's important for managers and owners to retain the staff who developed the systems for any given CLRTO application because modeling changes may be required to reflect plant configuration or equipment changes that occur over time. Often, the initial project is implemented by the CLRTO vendor's staff. Owners should assign in-house process engineering staff to participate in all aspects of project execution, so they can maintain these applications.
GREG: How easy is it to maintain CLRTO applications?
UMESH: In general, CLRTO applications are connected to a lower-level MPC that itself is connected to the DCS. Over the years, maintaining CLRTO applications requires close collaboration between the process modeling and optimization team and the MPC team.
It's important to maintain staff expertise on both teams. The economic benefits of successful CLRTO/MPC projects are substantial, and they justify investments in training and retaining engineering staff to keep applications running reliably in closed-loop mode, typically online 99% of the time.
JIM MONTAGUE Executive Editor
jmontague@endeavorb2b.com
“As logical as consistent definitions can be, it might be more useful to focus on what works in reality, rather than what label is used.”
IIoT isn’t
A network by any name can fly right with UNS and PA-DIM
MAYBE you’ve heard the famous history-class joke that during much of its existence the Holy Roman Empire wasn’t holy, Roman or an empire. It’s a well-known reminder that many descriptions and labels live on like zombies, long after they’ve lost all applicability and credibility, and become positively unhelpful.
It’s probably apparent where I’m going. As I’ve said before, I really dislike the name Industrial Internet of Things (IIoT) because it’s just more Internet. Its saving grace was that its participants had to use Internet protocol (IP) or something close to it like HTTPS or TCP/IP. This was the only guardrail that gave the IIoT topic a solid definition and outline.
Unfortunately, while interviewing for this issue's "IIoT tries new roles" cover story (p. 22), several sources claimed that IIoT didn't have to use the Internet. This was news to me, but when several people say the same thing independently of each other, I start paying closer attention.
I know I should hold firm in demanding that IIoT actually use the Internet, or else just be called networking or something similar. That would be logical and reasonable. Sadly, many if not most humans are immune to logic, and situations and practices arise that make no sense.
However, the good news is that even chaotic change will eventually happen upon something positive, even if it’s small, travels indirectly, and comes at a high price.
Plus, even though my education in many fields is superficial and limited to the content I generated, I’ve learned that changeable names and descriptions can mean that what I think I’m seeing is actually something else.
For example, a few Saturday nights ago, I was at Gwangalli Beach in Busan, South Korea, for the "Cyberpunk" edition of the weekly, 12-minute Gwangalli M Drone Light Show (www.gwangallimdrone.co.kr/en/home). It was an astounding event with hundreds of small drones flying in huge and incredibly tight formations, and using their brightly colored lights to construct and animate all kinds of figures and characters (www.youtube.com/watch?v=ROA9uc9eNM4). They zipped around above the water in an area that seemed to fill much of the space between the beach and a full-sized suspension bridge. At the end of the show, the drones dutifully lined up in vertical rows, and landed at one end of the beach.
Given the scale of their animations, I figured the drones must use enormous amounts of wireless networking, communications and data processing to coordinate their moves in relation to each other. However, I was quickly disabused of that assumption by two young aerospace engineers in the crowd, who informed me the drones aren’t linked and don’t communicate with each other. They told me every move and flash of light they make during their performance is programmed in ahead of time, and that each drone completes its own flight path unaware of the others. It’s only because their individual and subtly different paths occur in the same place and time that it looks like they’re actively working together.
Similar to most magic tricks, this revelation seemed like a bit of a cheat, but it’s a cheat that gets its job done.
This reminded me of how IIoT developers and users employ the Unified Namespace (UNS) and its common naming strategy to gather data from different devices and networks by presenting it all in one consistent structure, or how the Process Automation Device Information Model (PA-DIM) standard presents device information in a protocol-agnostic way, using OPC UA's information-sharing model, to reach IT-level systems. Uniform documentation isn't truly open networking, but it gets the communication job done.
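For the curious, a UNS amounts to a shared, hierarchical naming convention (often ISA-95-style) that every publisher and consumer agrees on. The enterprise, site and tag names in this sketch are hypothetical, and it only formats a topic and payload rather than connecting to a real broker:

import json

def uns_topic(enterprise, site, area, line, tag):
    # Hypothetical ISA-95-style path: enterprise/site/area/line/tag
    return f"{enterprise}/{site}/{area}/{line}/{tag}"

topic = uns_topic("Acme", "Busan", "Utilities", "Boiler3", "SteamFlow")
payload = json.dumps({"value": 41.7, "units": "t/h", "quality": "good"})
print(topic, "->", payload)
# Any device that can publish into this shared namespace "looks the same"
# to consumers, regardless of its native network or protocol.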
Consequently, as logical as consistent definitions can be, it might be more useful to focus on what works in reality, rather than what label is used.
CONTROL AMPLIFIED
The Process Automation Podcast
Control Amplified offers in-depth interviews and discussions with industry experts about important topics in the process control and automation field, going beyond Control's print and online coverage to explore underlying issues affecting users, system integrators, suppliers and others in the process industries.
Check out some of the latest episodes, including:
Coriolis technology tackling green hydrogen extremes
FEATURING EMERSON'S GENNY FULTZ AND MARC BUTTLER
Ultrasonic technology takes on hydrogen, natural gas blends
FEATURING SICK SENSOR INTELLIGENCE'S DUANE HARRIS
Asset-specific insights to transform service workflows