Control – September 2024



Across the analytics lake

Useful data analytics requires infrastructure, intelligence, digitalization, standardization, flexibility, containerization and content — and artificial intelligence, too

MSG process plant adds dash of optimization

Realism for dynamic simulation of control testing

The perils of bad level transmitter settings


24 COVER STORY Across the analytics lake

Useful data analytics requires infrastructure, intelligence, digitalization, standardization, flexibility, containerization and content — and artificial intelligence, too by Jim Montague

30 LOOP CONTROL MSG process plant adds a dash of optimization

Ajinomoto spiced up production by amplifying its technology investments by Meg Lashier and Ziair DeLeon

33 DEVELOP YOUR POTENTIAL Realism for dynamic simulation of control testing

How to add environmental effects to your simulations by R. Russell Rhinehart

CONTROL (USPS 4853, ISSN 1049-5541) is published 10x annually (monthly, with combined Jan/Feb and Nov/Dec) by Endeavor Business Media, LLC. 201 N. Main Street, Fifth Floor, Fort Atkinson, WI 53538. Periodicals postage paid at Fort Atkinson, WI, and additional mailing offices. POSTMASTER: Send address changes to CONTROL, PO Box 3257, Northbrook, IL 60065-3257. SUBSCRIPTIONS: Publisher reserves the right to reject non-qualified subscriptions. Subscription prices: U.S. ($120 per year); Canada/Mexico ($250 per year); All other countries ($250 per year). All subscriptions are payable in U.S. funds.

Photo: Derek Chamberlain, generated with Shutterstock AI

The APL of your eye?

Ethernet-APL's promise is intriguing. Will reality match?

Ethernet-APL becomes a reality

The future is a single Ethernet-APL cable supporting all kinds of field devices, not just instrumentation

ON THE BUS

A complex DCS quandary

How PID loops were enhanced, and why it's beneficial but confusing

What the future holds for distributed I/O

The architecture makes economic sense, but changes will make it better

Industrial robot fleet management

Best practices for large-scale robot deployment, support and maintenance

16 INDUSTRY PERSPECTIVE

The evolution and impact of today's temperature transmitters

Control talks with Endress+Hauser's Greg Pryor about the reliability, safety and usability of modern units

18 IN PROCESS

MxD shows profit in sustainability; Sick AG, Endress+Hauser merging process automation sales, service; BinMaster unfazed by F3 tornado

22 FLOW POINT

Life in the fast lane: Coriolis and ultrasonic flowmeters

How do they compare to each other in history, industries and applications?

35 RESOURCES

Apparatus for data analytics

Control's monthly resources guide

36 ASK THE EXPERTS

Can bad level transmitter settings cause accidents?

The Fukushima Daiichi nuclear accident is one example, so how can others be avoided?

38 ROUNDUP

Cables, connectors and buddies hit the links

Protective covers, couplers, assemblies, strippers and labels get wires where they need to go

40 CONTROL TALK

Simulation scope and funding, part 1

A multidisciplinary perspective on real-time process simulation

42 CONTROL REPORT

Oh the humanities!

Facetime with partners is crucial for sustainability and other epic transitions

Endeavor Business Media, LLC

30 Burton Hills Blvd, Ste. 185, Nashville, TN 37215

800-547-7377

EXECUTIVE TEAM

CEO Chris Ferrell

President

June Griffin

COO

Patrick Rains

CRO

Paul Andrews

Chief Digital Officer

Jacquie Niemiec

Chief Administrative and Legal Officer

Tracy Kane

EVP/Group Publisher

Tracy Smith

EDITORIAL TEAM

Editor in Chief Len Vermillion, lvermillion@endeavorb2b.com

Executive Editor

Jim Montague, jmontague@endeavorb2b.com

Digital Editor Madison Ratcliff, mratcliff@endeavorb2b.com

Contributing Editor John Rezabek

Columnists

Béla Lipták, Greg McMillan, Ian Verhappen

DESIGN & PRODUCTION TEAM

Art Director

Derek Chamberlain, dchamberlain@endeavorb2b.com

Production Manager

Rita Fitzgerald, rfitzgerald@endeavorb2b.com

Ad Services Manager

Jennifer George, jgeorge@endeavorb2b.com

Operations Manager / Subscription requests Lori Goldberg, lgoldberg@endeavorb2b.com

PUBLISHING TEAM

VP/Market Leader - Engineering Design & Automation Group

Keith Larson

630-625-1129, klarson@endeavorb2b.com

Group Sales Director

Amy Loria

352-873-4288, aloria@endeavorb2b.com

Account Manager

Greg Zamin

704-256-5433, gzamin@endeavorb2b.com

Account Manager

Kurt Belisle

815-549-1034, kbelisle@endeavorb2b.com

Account Manager

Jeff Mylin

847-533-9789, jmylin@endeavorb2b.com

Subscriptions

Local: 847-559-7598

Toll free: 877-382-9187

Control@omeda.com

Jesse H. Neal Award Winner & Three-Time Finalist

Two-Time ASBPE Magazine of the Year Finalist

Dozens of ASBPE Excellence in Graphics and Editorial Excellence Awards

Four-Time Winner, Ozzie Awards for Graphics Excellence

The APL of your eye?

Ethernet-APL's promise is intriguing. Will reality match?

FOR many people, spring is the time of renewal, hope and optimism. Me? Well, I’ve always been one for autumn. Maybe it’s the nostalgia of my school days and the promise of a new year? Who knows, but this time of year always gets me looking toward the immediate future.

While U.S. voters get ready to attach their future hopes to one candidate or the other as the presidential election reaches the height of what I can only describe as the strangest election of my lifetime, I’ve turned my attention to more tangible promises of better days ahead. There's optimism around emerging networking technologies in the process control industry, and companies are preparing to make their choice for the future in that regard.

In August, I moderated a webinar with a pair of experts from Beckhoff USA on "How smarter I/O can help optimize operations," and as you might expect, the subject of remote I/O was prominent in the discussion. Of course, central to future optimization plans is the ability to more efficiently network and manage field devices from nearly anywhere, and one emerging technology that's excited operators is the Ethernet-Advanced Physical Layer (Ethernet-APL) protocol.

As Emerson’s Jonas Berge states in this issue’s Other Voices column (p. 11), many industrial operators can envision a single Ethernet-APL cable infrastructure that supports all kinds of field devices, not just instrumentation. But, as with all good things, they’ll have to wait. The transition to pure APL is still some years away, and Berge reports even greenfield sites are still being built with a mix of APL and 4-20 mA instrumentation.

However, this hasn’t stopped vendors from getting the word out. This month, Endress+Hauser, Pepperl+Fuchs, Phoenix Contact and Vector will host an educational summit in Houston, Texas (bit.ly/EthernetAPLHouston), proclaiming Ethernet-APL “not just another fieldbus.”

Will Ethernet-APL ultimately prove to be the solution that operators and vendors think it will? Only time will tell. Ethernet-APL isn't the only game in town, but its promise is intriguing. While expectations for Ethernet-APL have been tempered since it first came into the spotlight and the realities of full adoption sunk in, it seems primed to see another bump in its approval ratings.

" Will Ethernet-APL be the solution that operators and vendors think it will? Only time will tell."

Ethernet-APL becomes a reality

The future is one Ethernet-APL cable supporting all kinds of field devices, not just instrumentation.

THERE are several bus technologies, such as Profibus, DeviceNet, HART and Modbus/RTU, using different cables, interface hardware and data formats. They can be frustrating for users and automation vendors.

Each bus technology has its strengths and weaknesses. Profibus and DeviceNet are fast, and ideal for motor controls. The original HART coexists with 4-20 mA, and is ideal for smart field instruments, but its medium is slow. Modbus/RTU is easy for device vendors to implement in products, and ideal for miscellaneous devices like vibration monitors and weighing scales. Many bus technologies tried to be the one fieldbus for all vendors to implement and deploy, but since the technologies are good at different things, it didn't happen. It must be stressed that, within their niches, each bus successfully enables interoperability.

Based on product type, device manufacturers implement one of these bus technologies. When two bus technologies have similar capabilities, device manufacturers tend to focus on one of them. Similarly, each user tends to prefer one bus technology over another when their capabilities are comparable.

Multiple bus technologies are frustrating for users, who must deploy multiple, parallel infrastructures across their plants to support a mix of field instruments, motor controls and miscellaneous devices. They must also add coaxial cables for process video. Multiple bus technologies are also frustrating for vendors, who must support multiple product lines because bus technologies use different interface hardware.

Both users and vendors want a solution. Users want one infrastructure that can support multiple protocols for connecting field instruments, motor controls and other devices. Vendors want one instrument model that can support multiple protocols and satisfy all users.

Ethernet's multi-protocol solution

The solution is the same as in the office and at home—Internet protocol (IP) and Ethernet carrying multiple application protocols. IP and Ethernet carry the HTTP protocol for web browsing, SMTP for email, FTP for file transfer, RTSP for IP cameras, and many more at the same time. In plants, control systems have been using IP and Ethernet for almost 30 years, so it’s well established in automation.

However, regular Ethernet isn't suitable for field instruments due to limited cable length, separate power requirements and hazardous-area restrictions. IEEE recently created single-pair Ethernet (SPE), providing power and communication over just two wires and reaching up to 1 km. The four leading industrial-protocol standards-development organizations (FieldComm Group, ODVA, OPC Foundation and PI), which previously had different bus technologies, recently came together to create a common industrial-grade, two-wire, intrinsically safe, advanced physical layer (APL) based on the new IEEE standard. Referred to as Ethernet-APL, it's suitable not only for field instruments such as transmitters and valves, but also for other kinds of field devices. APL simultaneously supports IP application protocols from all four organizations, namely HART-IP, EtherNet/IP, OPC UA and Profinet, plus others like Modbus/TCP. APL also supports information and communication technology (ICT) protocols, such as FTP, HTTP and SMTP. Users' longtime desire for one network infrastructure using a mix of protocols is becoming a reality.

"Because HART-IP is an instrument protocol, not a PLC protocol, it makes APL less complex, so the learning curve to adopt APL is less steep."

Lower-cost infrastructure

The vision for the future is a single Ethernet-APL cable infrastructure that supports all kinds of field devices, not just instrumentation. Ethernet-APL instruments won't be lower-cost than 4-20 mA instruments, but one infrastructure will cost less than multiple infrastructures. APL is fast at 10 Mbit/s, which is sufficient for high-bandwidth applications like streaming process video or bulk-material LiDAR on the same APL network as field instruments like transmitters and valves.

Converged Ethernet everything

One APL infrastructure can support a distributed control system (DCS) communicating with a Coriolis flowmeter using the HART-IP protocol. At the same time, it can support an analyzer management and data acquisition system (AMADAS) communicating with a gas chromatograph (GC) using Modbus/TCP, or a programmable logic controller (PLC) communicating over Profinet with a variable speed drive (VSD) running an electric motor. In addition, it can enable pump condition-monitoring software on a server to communicate with a vibration monitor using OPC UA. Beyond classic instrumentation, a human-machine interface (HMI) panel PC can communicate with a PTZ video camera streaming process video using ONVIF or RTSP—on the same network at the same time.

Converged IP everywhere

A key success factor for IP, Ethernet, Wi-Fi and the Internet is that they're all designed to carry multiple application protocols. No single application protocol attempts to perform all functions; there's a specialized protocol for each function. At home and in the office, we use multiple protocols, and software makes the protocol transparent, so most of us don't notice which one is used. Similarly, in a plant, HART-IP can be used for all instrumentation, Profinet for all motor controls, and Modbus/TCP for miscellaneous devices. Automation systems will have to make the protocol transparent to users, as sketched below. But don't take multiple protocols too far: some flowmeters use HART-IP, others use Profinet, and some use Modbus/TCP, and supporting all of them would be difficult. Try to stick to one protocol for instrumentation, one for motor controls, one for cameras, and so on.
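To make the protocol-transparency idea concrete, here's a minimal Python sketch of one software layer reading every device through a common interface, regardless of the protocol underneath. The class and method names are hypothetical illustrations, not from any real driver library.

```python
# A minimal sketch of protocol transparency: application code reads any field
# device through one interface, and per-protocol handlers hide the details.
# All names here are hypothetical, not from a real driver library.
from abc import ABC, abstractmethod

class ProtocolClient(ABC):
    """The common interface the application sees for every field device."""
    @abstractmethod
    def read_value(self, tag: str) -> float:
        ...

class HartIpClient(ProtocolClient):
    def read_value(self, tag: str) -> float:
        return 42.0  # placeholder for a HART-IP process-variable read

class ModbusTcpClient(ProtocolClient):
    def read_value(self, tag: str) -> float:
        return 7.5   # placeholder for a Modbus/TCP register read

# One APL network, multiple protocols; the application code doesn't care.
devices = {
    "FT-101 flow transmitter": HartIpClient(),
    "GC-201 gas chromatograph": ModbusTcpClient(),
}
for name, client in devices.items():
    print(name, client.read_value("PV"))
```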

Transition to APL

The transition to pure APL will take years. Even new plants will be built with a mix of APL and 4-20 mA instrumentation, because it will be a long time before all devices are available in APL versions, and the additional cost of APL may be hard to justify for simple devices. Wireless sensors will also show up in ever-increasing numbers. Sure, 4-20 mA/HART devices can be fitted with an Ethernet-APL adapter, but some 4-20 mA devices will be hardwired to small field I/O systems.

The good news is that 4-20 mA/HART, WirelessHART and future Ethernet-APL devices with the HART-IP protocol will all have one thing in common—the HART application protocol. They're all part of the same protocol ecosystem. This means instrument technicians can manage all instruments from the same software. They're already familiar with HART terminology, and get a consistent look-and-feel for all instruments regardless of signal type. Because HART-IP is an instrument protocol, not a PLC protocol, it makes APL less complex, so the learning curve to adopt APL is less steep.

HART-IP is not the old 1.2 kbit/s HART used with 4-20 mA devices for commissioning, configuration, calibration and diagnostics. HART-IP can still do all of those tasks, but it runs at 10 Mbit/s over Ethernet-APL, and even faster across regular Ethernet. In fact, it's more than 8,000 times faster. Therefore, HART-IP also supports real-time communication of process variables from transmitters and setpoint outputs to valves in monitoring, control-loop and safety instrumented functions.
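The "8,000 times" figure follows directly from the two data rates:

\[
\frac{10\ \mathrm{Mbit/s}}{1.2\ \mathrm{kbit/s}} = \frac{10{,}000{,}000\ \mathrm{bit/s}}{1{,}200\ \mathrm{bit/s}} \approx 8{,}333
\]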

HART-IP is not an entirely new protocol. Many plants already use HART-IP over regular Ethernet in wireless gateways, HART multiplexers, safety logic solvers, remote I/O, intelligent device management (IDM) software, equipment condition and performance monitoring software, vibration monitoring software and smart instrumentation.

Automation designers will have to think less about cable selection for various buses. Instead, they can check which new field devices are available with APL, and select a control system that supports multiple application protocols to meet the needs of instrumentation, electrical equipment and beyond.

A complex DCS quandary

How PID loops were enhanced, and why it’s beneficial but confusing

RUNNING on refinery fuel gas, the temperature control loop for a fired heater needed to react to changes in BTU value (the heating value of the fuel) as the plant started up. The hydrocarbons in the mix drum varied from near-natural gas to fuel gas that was rich in higher-BTU fuels, including ethane, propane and hydrogen. Such was the case when a particular control loop nearly burnt its fired heater.

When the measured temperature went off-scale high, the controller was supposed to automatically switch to "shed mode," which was the default behavior of the distributed control system's (DCS) proportional-integral-derivative (PID) control block. In the busy startup, no one noticed the controller mode had gone to "manual" and was holding its last output—enough fuel to keep the temperature rising.

The PID loop of the everyday DCS has been enhanced over the years, largely at the behest of process plant end-users, who desire options such as setpoint tracking in manual mode and bumpless transfer for cascade loops. One of these enhancements is mode shedding. Users don't want their loops to react to a detectably bad feedback signal. If the thermocouple fails and the 4-20 mA transmitter pegs upscale, they don't necessarily want the PID to dramatically cut off the fuel. For such analog signals, some percentage above 20 mA or below 4 mA is interpreted as bad, and the PID option can be set to shed mode—degrade to manual—if such a condition is detected. It's a viable strategy, especially if you seek to protect operations from upsets caused by faulty instruments.

It's the law of unintended consequences, revisited. A sagacious process supervisor told a newly minted chemical engineer that whatever improvement he was making will have another probably unforeseen negative consequence. So it is, when we add numerous options and settings to a microprocessor-based algorithm. They all have a purpose, but if you

don’t know why and where, you might not find them until after the fact.

I recall a time where a measurement would increase beyond its configured full scale and continue reading—accurately—until the sensor or the transmitter reached its physical limits. It was a tremendous benefit for startups and scale-ups, where process folks might guess wrong about the likely span of a flow or pressure, for example. Somehow, the suggestion took root that exceeding the configured full scale should flag the reading as uncertain—even though there's nothing uncertain about it. You could still rely on the signal being flagged bad if the transmitter or sensor's full scale was reached.

What didn’t change was an option in the PID block to “use uncertain as good,” which is unchecked by default. So now, PID blocks shed mode to manual whenever full scale (or zero scale) is exceeded. If you exchange a 1999-vintage device for a 2024 model, you should ensure that box is checked.
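A minimal sketch of this mode-shedding logic appears below. The thresholds, function names and the "use uncertain as good" option are patterned on the behavior described in this column, but they're illustrative and don't represent any specific DCS vendor's implementation.

```python
# A minimal sketch of PID mode shedding on signal status. Thresholds and
# option names are illustrative, not any specific DCS vendor's settings.
def signal_status(ma: float, bad_band_pct: float = 5.0) -> str:
    """Classify a 4-20 mA input: BAD when outside a failure band beyond the
    4-20 mA range, UNCERTAIN when merely beyond the configured span."""
    lo_bad = 4.0 * (1 - bad_band_pct / 100)   # e.g., below ~3.8 mA
    hi_bad = 20.0 * (1 + bad_band_pct / 100)  # e.g., above ~21 mA
    if ma < lo_bad or ma > hi_bad:
        return "BAD"
    if ma < 4.0 or ma > 20.0:
        return "UNCERTAIN"
    return "GOOD"

def pid_mode(status: str, use_uncertain_as_good: bool, current_mode: str) -> str:
    """Shed to manual on BAD; shed on UNCERTAIN too, unless the
    'use uncertain as good' option is checked."""
    if status == "BAD":
        return "MANUAL"  # shed mode: hold last output
    if status == "UNCERTAIN" and not use_uncertain_as_good:
        return "MANUAL"
    return current_mode

# With the option unchecked (the default), exceeding full scale sheds the loop:
print(pid_mode(signal_status(20.5), use_uncertain_as_good=False,
               current_mode="AUTO"))  # -> MANUAL
```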

In the early days of this technology, the engineers who wanted such features were engaged and present for factory acceptance tests, commissioning and startups. They knew about the clever tweaks in the blockware, and might be on hand if one setting or another caused some befuddling behavior. Whoever is left after those veterans move on or retire is often stuck with the befuddling behavior and no clear remedy. DCS support roles fall to non-process disciplines like network specialists and Active Directory administrators. These are valuable skills and roles, but how many know the purpose, or even the existence, of a setting such as "use uncertain as good"?

In our digital world, we should employ all the power at our command to ensure safety, stability and reliability of controls. However, everyone clicking a mouse should be trained in all the options, including all the gotcha moments that will someday arise.

"I recall a time where a measurement would increase beyond its configured full scale and continue reading— accurately—until the sensor or the transmitter reached its physical limits."

IAN VERHAPPEN

Solutions Architect

Willowglen Systems

Ian.Verhappen@willowglensystems.com

"Demand for data to better understand our processes and the equipment to run them continues to grow."

What the future holds for distributed I/O

The architecture makes economic sense, but changes will make it better

WIDELY distributed I/O has become the de rigueur, default design for new facilities and facility expansions, with all major control platform (DCS and PLC) suppliers offering distributed/remote I/O as part of their systems. In most cases, they use one of the available industrial Ethernet protocols to connect with the controller and the remainder of the control system.

This type of architecture makes economic sense because it saves costs during construction. It reduces, and potentially eliminates, multiconductor cables—at least the traditionally long and expensive home-run cables from the field junction box to the interface room. The intermediate step to this architecture was using remote interface rooms throughout the facility. In most cases, these accompanied the electrical motor control center, with fiber-optic rings back to the main control center. A second economic advantage of distributed I/O is expansion. It's easy to add more I/O to a smart junction box, but not as easy to multiplex more signals onto a multiconductor cable. Earlier in my career, I had more than one project that had to be canceled because we didn't have a home-run cable back to the control center.

Distributed I/O requires power, which normally means running at least one, and preferably two, power supplies to the field cabinet. Fortunately, AC power is usually nearby. If the two supplies are sourced from different buses backed by different uninterruptible power supplies (UPS), reliability will still be high. The market also offers field cabinet-mountable (area-classified), 24-VDC transformer/UPS systems that can provide required power for the controller and field signal-loop power (analog signals) or wetting voltage (discrete signals).

When fully implemented, Ethernet Advanced Physical Layer (APL) will combine the home run and power in one twisted pair, and take distributed I/O a step further. It will let users mount an I/O cabinet adjacent to each device with multiple signal points. Since these new sensors will be smart, they'll likely transmit additional information about their own health via the network. This will create the future challenge of too much information, and the need to be selective about the data required.

The next progression of this model is the Industrial Internet of Things (IIoT), which is the distributed I/O concept in a single device supporting one or more signals over wireless. IIoT still has several hurdles to overcome, including:

• Security—concerns not only about physical access, but also cybersecurity for individual devices outside the organization's networks and, potentially, its physical locations. Zero-trust is supposed to provide cybersecurity protection at a granularity of one device or service. If properly implemented, it addresses cybersecurity concerns, but not the associated mindsets of those operating a facility.

• Cloud integration—though modern SCADA systems use cloud services, many industries continue to have concerns about how to integrate data from outside the OT domain into the control environment. Evolving cybersecurity standards help, but since most facilities aren't in a rush to be first, new integrations will take time.

• Protocols—though a compelling case can be made for MQTT/Sparkplug B, there isn't an agreed-on, lightweight protocol for IIoT devices (a minimal publishing sketch appears at the end of this column). ISO/IEC JTC 1/SC 41 is developing a suite of IoT standards, including architectures, reference models and APIs.

Demand for data to better understand our processes and the equipment to run them continues to grow. Fortunately, technology continues to respond to the challenge with lower-cost options offering higher data density and increased capabilities, again changing the relationship from one of too little data to one of likely too much data, and of managing all the associated physical and data assets. The I/O evolution saga continues.
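As a concrete illustration of the Protocols bullet, here's a minimal sketch of an IIoT device publishing one measurement over MQTT with a Sparkplug B-style topic, using the open-source paho-mqtt Python client. The broker address, group and node names are placeholders, and the JSON payload is a simplification, since real Sparkplug B payloads are protobuf-encoded.

```python
# A minimal sketch of an IIoT device publishing one metric over MQTT using a
# Sparkplug B-style topic. The broker address and names are placeholders;
# real Sparkplug B payloads are protobuf-encoded, so the JSON body here is
# purely illustrative.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x-style constructor
client.connect("broker.example.com", 1883, keepalive=60)

# Sparkplug B topic namespace: spBv1.0/<group>/<message type>/<edge node>/<device>
topic = "spBv1.0/PlantA/DDATA/EdgeNode1/VibrationMonitor1"
payload = json.dumps({
    "timestamp": int(time.time() * 1000),
    "metrics": [{"name": "vibration_mm_s", "value": 2.7}],
})
client.publish(topic, payload, qos=0)
client.disconnect()
```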

Industrial robot fleet management

Best practices for large-scale robot deployment, support and maintenance

ONE way to break through and speed up large-scale robot deployment, support and maintenance is to treat it as a new technology introduction in the organization. Practically all companies in the process of digital transformation have tackled this issue. Managing a fleet of robots includes monitoring the robots and their execution, managing abnormal situations, making sure robots are ready to do their work with the appropriate sensor payload and manipulation capability, and ensuring all necessary parts are in stock locally or otherwise available in a timely manner to support operation and maintenance.

Robot manufacturers and integrated-solution or service providers offer more specific guidance, including modular solutions and flexibility. For the end-user, the keys to selecting among providers are how well they match business needs end-to-end, the cost of deploying such technologies, and alignment with digital transformation plans.

End-users are actively pushing to improve fleet management. Some examples include robot-as-a-service, data-as-a-service, inventory management, remote operation and maintenance, and storage of the robot/asset. These have become routine matters, which are supported by manufacturers and integrated-solution providers.

Early on, management of the batteries that power robots appeared to be a significant issue. However, battery life, operating practices to maximize battery power efficiency, and charging practices have advanced and are well known.

Key issues continue to be mission management and data compatibility with other systems throughout the enterprise. In this column, we'll dive into mission management.

If they don't seek guidance from robot manufacturers or integrated-solutions providers, end-users can struggle with mission management, which ranges from ensuring robots receive their assignments to confirming their tasks are completed. In some abnormal situations, management must reassign unfinished tasks to a different robot.

Most often, robots perform routine inspections that simulate operator rounds. A simple start is to assign robots the exact rounds that human operators or technicians would otherwise perform. Even so, we've observed many opportunities for improvement. Missions can be streamlined to minimize their duration and battery use. Rounds can also be designed to make the most of each robot's capabilities, since robots' hearing and sight ranges far exceed those of humans.

Proof-of-concept (PoC) tests have shown that doors and stairs could present problems for robotic rounds. Though some models can handle such obstacles, it's still best to minimize them during missions. The more stairs a robot must climb, the greater the risk that it will encounter a problem.

PoCs have also shown how to minimize the involvement of the people who prepare robots for missions, transport them to mission starting points, perform simple local repairs, and return them to storage after missions.

In other cases, operators found they prefer to see updates to some process variables more often than daily or weekly rounds allow. Such process operations are better served by installed sensors that provide more frequent updates, while robots are typically used for asset management and maintenance purposes.

Finally, managing payloads is another aspect of robot fleet management. Typically, there will be a variety of camera/sensor arrays, depending on the purpose of the missions. Users have found it's best to run all missions using one payload before changing it for a different type of mission. In practice, this means multiple robots carrying different payloads run in the facility for different purposes.

"Key issues continue to be mission management and data compatibility with other systems throughout the enterprise."

Endress+Hauser USA

The evolution and impact of today’s temperature transmitters

TEMPERATURE transmitters are vital to many of today's process control systems. However, not all units are made the same, and picking the right device for the operation can be a complex decision. To understand the important aspects of modern temperature transmitters—reliability, safety and usability, in particular—Control talked with Greg Pryor, temperature product marketing manager at Endress+Hauser USA.

Q: How have temperature transmitters evolved, and what is their impact on various industries?

A: At their core, most temperature transmitters convert sensor output to a 4-20 mA analog signal. I'd say the biggest evolution in the last several years is the addition of other communication methods. HART protocol has been around for a long time, and then digital protocols (Profibus, Foundation Fieldbus, etc.) began to come into play. More recently, we see newer technologies like IO-Link and, even now, Ethernet-APL making their way into transmitters. This allows more flexibility for users to gain additional functionality on the back end by seamlessly integrating the incoming transmitter signals into different system architectures. The other big evolution is customers adopting Bluetooth functionality for re-parameterization in the field. Bluetooth was a little slow to be accepted, primarily because of security risk questions, but that technology is safe, reliable and a considerable time-saver now. It makes it so much easier than climbing to the top of a tank, getting on a ladder, or opening an enclosure to hardwire a communicator to a transmitter to change a temperature range, for example.

Q: Why do they require a strong focus on safety?

A: Temperature is the most common process variable measurement in almost every industry, and a considerable part of the reason is safety. So many processes are temperature-range-dependent, and must be controlled to stay within that range. In heavier industries, like chemicals or oil & gas, temperature spikes above the desired range can cause combustion, unwanted reactions or significant safety risks to plant personnel. In the food & beverage industry, almost every food safety process must reach a specific temperature—think pasteurization of milk, CIP or SIP processes, etc. Failure to achieve the proper temperature can cause unsafe food products to be sent out for consumption, and people can get seriously ill. Accurate temperature control is key to keeping everyone safe across industries, so the focus there is a huge priority.

Q: What factors determine a temperature transmitter’s usability?

A: Having the correct configuration for the transmitter is the key to high usability, but specifying the wrong fit can quickly cause usability challenges. For example, maximum and minimum temperature ranges must be programmed correctly, the transmitter needs to be set for the correct input type, and the wiring must be done properly according to the manufacturer's instructions. If you get those details right on the front end, then transmitter performance during use becomes quite simple. Using Bluetooth technology also makes modifying parameters on installed transmitters much simpler in the field.

Q: What makes today’s temperature transmitters more cost-effective than in the past?

A: I think various factors come into play for this question. Most of the major transmitter manufacturers have reached volume efficiency at larger scales at this point, so the actual cost to build simple transmitters (and the respective purchase price) has gone down as transmitter use has increased. Because their use has become so much more commonplace in temperature assemblies, most facilities are set up to take 4-20 mA inputs into their control systems. This allows I/O cards to be universal rather than thermocouple- or RTD-specific, allowing control systems personnel to realize cost savings on I/O cards alone. Being able to use the same type of I/O card for temperature, pressure, level, etc., allows adding inputs regardless of the measured variable—they're all coming in as 4-20 mA. In addition, personnel can better control the process due to the increased amount of information they can pull from the transmitter. This allows them to realize higher process efficiencies, greater yields and better product quality. All those benefits add to bottom-line profitability for any customer.

Q: How accurate are today’s units?

A: Today's technology allows manufacturers to achieve incredible accuracy. Determining accuracy on a transmitter is complex and based on formulas derived from various factors, and most manufacturers post specific tolerances for each transmitter in their portfolio. One thing to always keep in mind, if accuracy is critical in your application, is that the transmitter is only one piece of the temperature path. The sensor, the wiring and several other factors come into play when you read the output from the transmitter. You need to add together the tolerances of all the components along that path to determine true accuracy. That's one reason many users take advantage of sensor-transmitter matching, which allows a transmitter to be offset to a specific sensor that's been calibrated to determine its Callendar-Van Dusen coefficients. This virtually eliminates the error from the sensor itself, increasing the entire assembly's accuracy.
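For reference, the Callendar-Van Dusen equation he mentions has a standard form for platinum RTDs (IEC 60751); sensor-transmitter matching replaces the nominal coefficients below with values from the individual sensor's calibration:

\[
R(T) =
\begin{cases}
R_0\left[1 + AT + BT^2 + C\,(T - 100)\,T^3\right], & T < 0\,^{\circ}\mathrm{C}\\[4pt]
R_0\left[1 + AT + BT^2\right], & T \ge 0\,^{\circ}\mathrm{C}
\end{cases}
\]

where $R_0$ is the resistance at 0 °C (100 Ω for a Pt100), and the nominal IEC 60751 coefficients are approximately $A = 3.9083\times10^{-3}\,^{\circ}\mathrm{C}^{-1}$, $B = -5.775\times10^{-7}\,^{\circ}\mathrm{C}^{-2}$ and $C = -4.183\times10^{-12}\,^{\circ}\mathrm{C}^{-4}$.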

Q: What advice can you offer for companies purchasing temperature transmitters? What should they understand about today’s temperature transmitters before buying?

A: The best advice I can give a customer is: don't pay more for features you won't use. There's a wide range of available transmitters, from very basic functionality to every bell and whistle available. If you're using single-element sensors and going one-to-one to transmitters, you don't need a dual-input transmitter. If you don't use HART, buy a simple 4-20 mA transmitter. If nobody ever looks at the display, and you'll only read the output in the control room, why get a display? Fit for purpose is the key. Whatever your needs are, a transmitter will probably cover them; don't overspend on functionality you don't need. If you aren't sure what your options are or which functions you'd use, partner with a reliable vendor that will walk you through the options to find the right fit for your application.

Today's temperature transmitters, such as this portfolio of units from Endress+Hauser USA, have become cost-efficient while realizing incredible levels of accuracy.

MxD shows profit in sustainability

Experts demonstrate how to simultaneously improve environmental and manufacturing performance

JUST as process safety and cybersecurity used to be viewed as expenses before users understood and added up their benefits, the same goes for sustainability. To change minds and convince potential participants that going green can improve productivity and profitability at the same time, the Digital Manufacturing and Cybersecurity Institute (www.mxdusa.org) staged its third “Win-win of sustainability” workshop on Aug. 13 at its Chicago headquarters.

“It’s natural for sustainability to combine with manufacturing growth and profitability for an economically viable future,” says Billy Bardin, PE, global climate transition director at Dow (www.dow.com) and MxD board chair. “It just takes some new ways of measuring with digital transformation and sustainability analytics that can include carbon-reduction with business practices.”

MxD's 22,000-square-foot facility on Goose Island near downtown Chicago contains more than $300 million worth of manufacturing equipment, which its 310 members and other partners use for collaborative R&D, problem-solving workshops, testbeds, demonstrations and training. It concentrates on predictive analytics and maintenance, resilient supply chains, cybersecurity, augmented reality (AR) and digital twins.

Overcoming myths

In the event's keynote address, Katheryn Scott, general and market analysis engineer in the U.S. Dept. of Energy's (DoE.gov) technology transitions office, presented ample evidence that sustainability can generate billions of dollars in revenue for participants, in addition to reducing carbon footprints and hitting net-zero emissions targets. DoE's five primary decarbonization pillars are energy efficiency, industrial electrification, low-carbon energy, carbon capture, and external activities like recycling, which it tracks in eight industrial sectors: chemicals, refining, iron and steel, food and beverage, cement, pulp and paper, aluminum and glass. However, even if all contemplated sustainability efforts are carried out, Scott reported they'd only address 1% of current emissions. She added full decarbonization will require $700 billion to $1.1 trillion by 2050. Plus, several myths about sustainability must be overcome, such as:

• “Sustainability technologies aren’t ready” isn’t true because many could be deployed immediately in the DoE’s eight industrial sectors;

• “Investment doesn’t cancel carbon” isn’t true because investing in net-positive decarbonization technologies now could achieve 15-20% abatement and prevent 800 billion tons of CO2 emissions for existing assets in sectors like oil and gas and cement; and

• “Users don’t know where to start” isn’t true because each industry has multiple pathways to achieving commercial liftoff of sustainability projects and programs.

“Approximately 27% of CO2 emissions from the chemicals sector, about 14% of emissions from refining, and 32% from cement could be abated with net-positive decarbonization levers,” says Scott. “Europe already gets 50% of its energy for cement production from alternative sources, while the U.S. only gets 15%. There are pathways like this for all eight sectors, but there are adoption challenges, and users have questions about how to make sustainability work for their businesses. However, adopting sustainability isn't a technical problem. It's a willingness problem. The good news is that achieving net-zero targets can be done, and there are ways for companies, people and their governments to pursue a sustainable future.”

Many of these challenges and solutions for all eight industrial sectors are detailed in the DoE’s 117-page Pathways to Commercial Liftoff: Industrial Decarbonization overall report, which was published in September 2023 and is downloadable at https://liftoff.energy.gov/industrial-decarbonization.

By-sector sustainability support

To further jumpstart these efforts, Scott reports the recently enacted U.S. Inflation Reduction Act (IRA) has made $6 billion in funding available for about 30 sustainability projects. She adds that DoE's Office of Clean Energy Demonstrations (www.energy.gov/oced/office-clean-energy-demonstrations) is likewise tracking more than $20 billion in federal and private investments to transform advanced but hard-to-abate sectors by:

Katheryn Scott, general and market analysis engineer at the U.S. Dept. of Energy.
Source: Jim Montague


• Solidifying a first-mover advantage for U.S. industries with low- and net-zero carbon manufacturing processes;

• Substantiating the market for clean products with highimpact, replicable solutions; and

• Building broadly shared prosperity for U.S. workers and their communities.

These investments are expected to prevent 14 million metric tons of CO2 emissions per year, reduce criteria air pollutants by 85% across 28 projects, and create tens of thousands of jobs. Breaking out one sector, iron and steel has six projects and a $1.5 billion federal investment that’s expected to avoid 2.5 million metric tons of CO2 emissions per year. Its individual projects include:

• Cleveland-Cliffs Steel Corp. received $500 million to work on hydrogen-ready electricity for its melting furnace in Middletown, Ohio. The company also received $75 million to use electrified induction to upgrade a steel-slab, reheat furnace in Lyndora, Pa.;

• SSAB received $500 million to develop electrolytic hydrogen for zero-emissions steel production in Mississippi and Iowa;

• Vale USA received $283 million to develop low-emission, cold-agglomerated, iron-ore briquette production; and

• U.S. Pipe and Foundry Co. received $75.5 million to replace a coke-fired furnace in Bessemer, Ala., with electric induction melting furnaces, and reduce the carbon intensity of pipe production by 73%.

“In short, there are great opportunities for decarbonization, and for making money with sustainability,” adds Scott.

Sick, E+H merging process automation sales, service

To better support users and increase their efficiency and sustainability, Endress+Hauser reported Aug. 20 that it's taking over worldwide sales and service of Sick's (www.sick.com) process analyzers and gas flowmeters. The two longtime, family-owned partners are also launching a 50-50 joint venture in 2025 to further develop the two technologies. Sick's core business of factory and logistics automation, which accounts for more than 80% of its sales, won't be affected by the partnership. Pending regulatory approval, the deal is expected to close by the end of this year.

About 800 sales and service staff in 42 countries will transfer from Sick to Endress+Hauser. This will let Endress+Hauser’s sales network access more customers, reach more industries, and develop new applications. The joint venture will employ about 730 people at several locations in Germany. It will work on innovations with Endress+Hauser’s competence centers.

“This strategic partnership opens opportunities for growth and development for Sick and Endress+Hauser,” says Peter Selders, CEO of Endress+Hauser Group. “By collaborating and networking, we can achieve more together in a reasonable amount of time than either could alone. All of this is for the benefit of the customers and employees of both companies.”

Mats Gökstorp, chairman of the executive board at Sick AG, adds, “Our aspiration is to drive the sustainable transformation of the process industry, and support our customers in leveraging the opportunities presented by decarbonization. This is why Sick and Endress+Hauser are combining their technological and market expertise. In the interest of our customers and employees, we look forward to the strategic partnership and shaping the future of process automation together.”

BinMaster unfazed by F3 tornado

It only took 20 seconds on Apr. 26 for an F3 tornado to decimate BinMaster's (www.binmaster.com) 115,000-square-foot facility in Lincoln, Neb., but what's even more remarkable is the company was back to assembling its level sensors and wireless devices at a new location within just three weeks. And, in mid-July, BinMaster moved into its new offices at the University of Nebraska at Lincoln's (UNL) Innovation Campus.

Likewise, the company’s new plant is only a few miles from its prior location in Lincoln. This is where BinMaster set up new production lines, while its supply-chain staff replenished inventory of almost 30,000 parts in a new 40,000-square-foot warehouse and manufacturing facility. The new offices are just minutes away from the new plant on UNL’s 249-acre public/private research campus that houses businesses, a conference center and the Nebraska Innovation Studio.

Mats Gökstorp, chairman of the executive board at Sick AG, and Peter Selders, CEO of Endress+Hauser Group

SIGNALS AND INDICATORS

• ABB (new.abb.com/us/us-motion-business-area/motorsand-generators/abb-nema-motors) reported Aug. 19 that it’s equipping advanced technology manufacturing professionals by partnering with Arkansas Tech University’s Ozark campus (www.atu.edu/ozark) to launch a career-readiness program. This initiative will offer curricula focused on automation technology, air conditioning and refrigeration.

• To help users implement Open Process Automation (OPA) systems, Yokogawa Electric Corp. (www.yokogawa.com) released Aug. 13 its OpreX Open Automation System Integration (SI) Kit and OpreX OPC UA Management Package. OPA lets users select and integrate preferred components by providing configuration and application portability and interoperability for components from different suppliers that comply with the Open Process Automation Standard (O-PAS).

• Sodium-ion battery manufacturer Natron Energy Inc. (natron.energy) announced Aug. 15 that it will create more than 1,000 jobs by investing $1.4 billion to establish a sodium-ion battery factory at the Kingsboro CSX Select Megasite. Located just outside Rocky Mount in Edgecombe County, N.C., the 2,187-acre, shovel-ready site is one of six with 1,000 acres or more of contiguous land in North Carolina.

• Flexware Innovation LLC (www.flexwareinnovation.com), a Hitachi Ltd. company, announced Aug. 19 that it's purchased Castle Hill Technologies LLC (castlehilltech.com), a 25-year-old pharmaceutical-engineering services company. Flexware reports the acquisition will sharpen its life sciences focus, and accelerate its shop-floor-to-top-floor connections.

• Core States Group (www.core-states.com), a 25-year-old architecture, engineering and construction (AEC) firm in West Chester, Pa., reported Aug. 12 that it's purchased Barghausen Consulting Engineers Inc. (www.barghausen.com) in Kent, Wash. Together, the two firms will have 25 offices nationwide and more than 600 employees.

• High-speed, rotating equipment specialist Fox Innovation & Technologies (foxinnovation.com) reported Aug. 5 that it’s acquiring rotating machinery services provider Cotter Holdings Group to enhance its U.S. field operations and footprint.


"Because they were not well understood, ultrasonic flowmeters were often misapplied."

Life in the fast lane: Coriolis and ultrasonic flowmeters

THE Coriolis and ultrasonic markets are the two fastest-growing flowmeter markets. Both flowmeter types are in demand from end-users. Both are versatile and widely used in the process industries. Yet, how do they compare to each other in history, industries and applications?

Ultrasonic flowmeters

Tokyo Keiki first introduced ultrasonic clamp-on flowmeters to commercial markets in Japan in 1963. In 1971, Badger Meter brought clamp-on ultrasonic flowmeters to the U.S. by reselling Tokyo Keiki's meters. In 1972, Controlotron began manufacturing its clamp-on ultrasonic flowmeters on Long Island, N.Y. In the late 1970s and early 1980s, Doppler flowmeters began to be used. Because they weren't well understood, ultrasonic flowmeters were often misapplied. As a result, many users got a bad impression of them. In the 1990s, transit-time emerged as the leading ultrasonic technology, and ultrasonic meters began growing significantly in popularity and capabilities.
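The column doesn't spell out the transit-time principle, so here's the standard textbook relation: for an acoustic path of length $L$ at angle $\theta$ to the pipe axis, with sound speed $c$ and average fluid velocity $v$, the downstream and upstream transit times differ, and the velocity falls out of their difference:

\[
t_{\mathrm{down}} = \frac{L}{c + v\cos\theta}, \qquad
t_{\mathrm{up}} = \frac{L}{c - v\cos\theta}
\quad\Rightarrow\quad
v = \frac{L}{2\cos\theta}\cdot\frac{t_{\mathrm{up}} - t_{\mathrm{down}}}{t_{\mathrm{up}}\, t_{\mathrm{down}}}
\]

Notably, the sound speed $c$ cancels out of the final expression, which is one reason transit-time designs proved so practical.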

In the early 1980s, both Panametrics and Ultraflux experimented with ultrasonic meters for gas flow measurement. In the mid-1990s, Groupe Europeen de Recherches Gazieres (GERG) published a technical monograph on using ultrasonic flowmeters for gas flow measurement, which led to increased European ultrasonic flowmeter use during 1996-99.

The GERG monograph laid the groundwork for the publication of AGA-9 by the American Gas Association. AGA-9 lays out criteria for using ultrasonic flowmeters for custody-transfer applications. Since its publication in June 1998, ultrasonic flowmeters have become widely used for custody transfer of natural gas. They're especially suited for measuring gas flow in large pipelines, easily handling flow in those above 20 inches in diameter, as well as in smaller pipelines. Their main competitors for custody transfer of natural gas are differential-pressure (DP) orifice meters and turbine flowmeters.

The history of ultrasonic flowmeters since 2001 can be viewed from multiple perspectives. One way is to look at the product development that occurred during this time, which includes advances in custody-transfer meters, multipath designs, gas flow measurement, diagnostic capabilities, and calibration of ultrasonic meters.

Coriolis flowmeters

Coriolis flowmeters get their name from the French mathematician Gaspard-Gustave de Coriolis. In 1835, he wrote a paper describing how objects behave in a rotating frame of reference. Coriolis studied the transfer of energy in rotating systems like water wheels. When some people talk about how the Coriolis principle works, they give examples of the Coriolis effect. However, the Coriolis effect isn't the result of a force acting directly on an object, but rather the perceived motion of a body moving in a straight line over a rotating body or frame of reference.

It's not completely clear how we got from Gustave Coriolis’ analysis of the rotation of water wheels to Coriolis flowmeters. There seems to be some confusion between Coriolis force and Coriolis effect. Some people, who attempt to explain how Coriolis flowmeters work, appeal to the Coriolis effect as an analogy. Yet, there seems to be little relation between the Coriolis effect, which is acknowledged to be “fictitious,” and the operating principle of Coriolis flowmeters.

One possible explanation is that early inventors designed instruments that rotated the fluid, and so they called them Coriolis flowmeters. This makes a connection between rotational motion and the workings of Coriolis meters. Later inventors abandoned the idea of rotating the fluid in favor of oscillating tubes. However, because their patents cited the earlier patents that appealed to fluid rotation, they kept the name "Coriolis" to describe their meters.

In August 1972, James E. Smith patented a "Balanced mass-moment balance beam with electrically conductive pivots." In August 1978, James (Jim) Smith began patenting a series of devices that became the basis for the flowmeters produced by Micro Motion; the first of these patents was filed in June 1975 and explicitly evoked the Coriolis force. Smith founded Micro Motion in his garage in 1977, the same year the company introduced its first Coriolis meter, designed for laboratory use. In 1981, Micro Motion introduced its first single bent-tube Coriolis flowmeter, and in 1983, it brought out its first dual-tube Coriolis meter.

In 1984, Emerson changed the course of flowmeter history forever when it acquired Micro Motion. The acquisition allowed Micro Motion to expand globally and continue to innovate. Emerson has never lost the grip on the Coriolis market that it obtained when it acquired Micro Motion.

Endress+Hauser introduced the first straight-tube Coriolis meter in 1987. This meter had dual tubes and later evolved into the ProMass. Up until 1994, nearly all Coriolis meters were bent-tube meters. While bent-tube Coriolis meters have advantages over many conventional meters, they do introduce pressure drop into the system.

Schlumberger was next to introduce a straight-tube flowmeter, in 1993, but withdrew the product after several months. Krohne introduced the first commercially successful single straight-tube Coriolis meter in 1994. Since that time, this type has become increasingly popular with Coriolis users. Straight-tube meters address the problem of pressure drop because the fluid doesn't have to travel around a bend, making them better able to handle high-viscosity fluids.

Straight-tube meters can also be drained more easily, which is important for sanitary applications, since liquids don't have to negotiate a bend or curve where residue can be left. Straight-tube meters are popular in food processing and pharmaceutical applications.

Industries and applications

Both ultrasonic and Coriolis flowmeters are sold to the process industries. However, ultrasonic flowmeters are stronger in the oil and gas industry than Coriolis meters in terms of revenue. The oil and gas industry, including refining, accounts for more than 50% of ultrasonic flowmeter revenues, while oil and gas, including refining, makes up less than 40% of Coriolis sales worldwide, according to data from Flow Research's "Volume X: the world market for flowmeters" (www.flowvolumex.com).

Coriolis meters have a wide variety of sanitary applications, such as in food and beverage.

There are several reasons why ultrasonic meters do better in oil and gas than Coriolis meters. One is line sizes. Coriolis meters are expensive and unwieldy in line sizes above four inches. Even though Coriolis meters are manufactured in line sizes up to 16 inches, those above six inches account for less than 10% of Coriolis revenues. Ultrasonic meters, by contrast, do especially well in line sizes 12 inches and up, because larger pipes give them more distance to send a signal across the pipe. Ultrasonic meters are widely used for custody transfer of natural gas, and many of these pipes have diameters above 12 inches. Ultrasonic meters excel in pipes with 20- to 42-inch diameters.

There are many types of pipelines used in oilfields. Gathering pipelines are typically under 18 inches, though some can be as small as 2-4 inches. Transmission pipelines are used to carry crude oil, natural gas and refined products long distances. They typically vary between 12-42 inches in diameter. Distribution pipelines carry natural gas to homes and businesses. Their diameters range from 2-24 inches.

While Coriolis meters are ruled out of many oil and gas applications, they shine in the sanitary industries that need flow measurement in smaller line sizes. Coriolis meters do extremely well in the chemical, food and beverage, and pharmaceutical industries. They far outpace ultrasonic flowmeters in these industries in terms of revenue. In percentage terms, they also exceed magnetic flowmeters in the chemical and pharmaceutical industries, and come close to magnetic flowmeters in food and beverage.

Source: Shutterstock.com

Across the analytics lake

Useful data analytics requires infrastructure, intelligence, digitalization, standardization, flexibility, containerization and content — and artificial intelligence, too

YOU can't have analytics without data. To deliver its promised optimizations, improved decisions and profitability, data analytics must be built on a solid foundation of networking and efficient information processing—but it must also have signals, readings, measurements and every other kind of input to process into useful results. However, faced with sorting through increasingly huge information sources and repositories, users must contextualize and prioritize to find the few nuggets that can add value. They rely on software and digitalization to remove former hurdles, make analytics less cumbersome, and streamline its capabilities for everyday use. Artificial intelligence (AI) can help, too, but only if it delivers solid, plant-floor benefits.

Unplugging nickel production

Of course, the most direct guide about which data analytics methods and tools to adopt is having a specific, immediate problem to solve. For instance, to maintain its position as the world's largest producer of ferronickel, Groupe Eramet's Société Le Nickel (sln.eramet.co) recently implemented Rockwell Automation's (www.rockwellautomation.com) FactoryTalk Analytics Pavilion8 model-predictive control (MPC) software to boost tonnage and increase uptime (Figure 1).

The 140-year-old company mines nickel at five sites in New Caledonia, about 750 miles east of Australia, but legacy, fuzzy-logic controls in its ore calcination process were too slow. Varying ore content and heating values triggered temperature spikes and frequent electrical trips because the product was too hot, which also compromised product quality. The legacy system also wasn't user-friendly and was difficult to maintain, causing uptime issues.

SLN reports that processing ores requires stable control of the rotary furnaces' temperature profiles, and automating operations across operating ranges. Feed ore undergoes calcination as it's heated along the length of the rotary kiln. Heated air is supplied for combustion, but if there's too much, more fuel must be burned to maintain the same product temperature, which decreases energy efficiency. Excess oxygen must be minimized to a safe level, which also reduces costs and greenhouse gases. However, if the calcined ore isn't hot enough, its quality and the energy efficiency of the processing plant can be compromised.

“The fuzzy logic was unable to reduce fuel fast enough to prevent trips from occurring, and during manual operation, operators sometimes couldn’t react fast enough,” says Leslie Hii, one of the advanced process control (APC) engineers at Rockwell responsible for delivering the SLN project. “Maintaining required furnace temperature can be complex and challenging, given the variables that need to be managed. Fuel types can be oil, coal or a mixture, each with unique thermal characteristics. Also, material feeding rates impact furnace temperatures, so they must be carefully managed.”


Finally, the legacy, expert system could only run when the rotary furnace was operating normally. If any instability occurred, SLN’s operators had to turn it off and take control. This is why SLN upgraded to Pavilion8 to control its rotary kilns in real-time, provide an intelligence layer on top of its automation systems, and continuously assess current and predicted operations. The software compares these results to desired outcomes, and drives new control targets to reduce process variability, improve performance, and autonomously boost efficiency in real-time.

“MPC shows how Rockwell applies AI to achieve better results with available data,” explains Hii. “This project also used machine learning (ML), process knowledge and other data to develop kiln models tailored to SLN’s operations.”

SLN completed the initial phase of its upgrade by implementing Pavilion8 in five rotary furnaces in just 13 months, which was far quicker than the years it took to install the fuzzy-logic controls. The MPC software lets operators opt to minimize consumption of costly fuel oil, while maximizing inexpensive, pulverized coal during mixed-mode operations.

“The MPC can handle variable ore feed and heating values, and prevents trips, which lets the furnace run at a higher rate and operate longer,” says Mickael Montarello, process control manager at SLN. “Calcined product temperature error was reduced by 6%, while furnace temperature profile variability was reduced by 16.1%. The average uptime of Rockwell’s MPC is 83% compared to 70% with the earlier, fuzzy-logic controls. Users appreciate MPC's user-friendliness and flexibility. In the event of a problem with one element of the process, operators can easily intervene with it, while allowing the MPC to continue controlling the other manipulated variables. Thanks to this tool, new opportunities for optimizing control and management at SLN are opening up that weren’t possible with the old fuzzy-logic controller. Our target for 2024 is to achieve a 90% utilization rate.”

Escape from clunky with SaaS

Faced with so many data analytics options—and the high price of failure—some guidance is welcome.

“Where they previously stayed mostly at the manufacturing execution systems (MES) layer, many clients are now going to cloud-computing services for analytics, so we’re working with more software suppliers and IT departments to coordinate specifications, requirements, features and functions,” says Pradeep Paul, manufacturing intelligence director in the IT division at E Tech Group (etechgroup.com), a system integrator in West Chester Township, Ohio, and certified member of the Control System Integrators Association (controlsys.org). “For example, when they employ or plan to offer software as a service (SaaS), it can give us data, too. Previously, we’d install a reporting package, but now these capabilities require much less customization.”

Just as it matches end-users’ operational requirements with suppliers’ suitable hardware and networking, E Tech works with clients to determine what features their SaaS needs to perform. However, because there are so many SaaS providers, it can be hard for any single user to sufficiently vet them all. “We bridge that gap. We have use cases and demos that can put clients ahead in the SaaS evaluation game,” says Paul. “This lets them review different suppliers and their capabilities to acquire suitable services.”

Even though different mission-critical applications employ various equipment and software in different arrangements, Nick Hasselbeck, VP of IT in E Tech’s IT group, reports its projects at multi-megawatt data centers or building-management systems still share some common threads and strategies as they seek to optimize or upgrade. “Many applications begin with a SQL database, and apply classic, web-based Ignition SCADA/HMI software to monitor energy consumption,” says Hasselbeck. “Our group’s engineers will then assemble a Microsoft Azure application to process time-series information about local energy use, and locate it in a central area that authorized users can access, including exposing to the company’s management and business levels.”
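As a rough sketch of the kind of time-series aggregation such an Azure application performs (assuming pandas; the column name and readings below are invented), 15-minute power samples can be rolled up into hourly energy figures for central viewing:

import pandas as pd

# Invented 15-minute power samples, as a historian might log them
samples = pd.DataFrame({
    "timestamp": pd.date_range("2024-06-01", periods=8, freq="15min"),
    "kw": [120.4, 118.9, 121.7, 119.2, 140.8, 139.5, 141.1, 138.6],
}).set_index("timestamp")

# Average the 15-minute power (kW) readings into hourly energy (kWh)
hourly_kwh = samples["kw"].resample("1h").mean()
print(hourly_kwh)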

Similarly, E Tech constructs applications for its consumer-packaged goods (CPG) clients consisting of a customized middleware layer that links enterprise resource planning (ERP) software to data from downstream production lines. “This connection lets users assign or adjust recipes from the enterprise level, manage changeover processes and recipes, and send related documents to PLCs or other devices affected by the changeover,” explains Cole Switzer, advanced software engineering manager in E Tech’s IT group. “In the past, these tasks were mostly manual or just had some preprogrammed setpoints.”

Paul reports that Ignition’s digitalized, web-based SCADA functions make it easier to deploy, configure and interact with data from batch software and DCSs, such as Emerson’s DeltaV and Rockwell’s PlantPAx. “Because Ignition is web browser-based, it lets the Internet of Things (IoT) participate in a system-agnostic SaaS that can use HTML 5 to run on tablet PCs, smart phones and other mobile devices,” explains Paul. “This means users are no longer stuck at desks or other fixed locations. More specifically, it used to take six weeks to changeover a typical CPG production line to accommodate different infeed materials or produce different products, but now that time is down to one week.”

Intelligence delivers know-how

Once users realize the amount and variety of data today’s applications can provide, it only fuels their appetite for more—even if they need to organize and prioritize it later.

“The biggest shift in optimization and reliability in the process industries is that instruments and other devices are pulling and pushing far more data than ever before. More is expected by users and clients from their suppliers and system integrators because devices from flowmeters to weigh scales are now Ethernet-ready with built-in microprocessors,” says Heath Stephens, PE, digitalization leader at Hargrove Controls & Automation (www.hargrove-ca.com) in Mobile, Ala., a division of Hargrove Engineers & Constructors, and also a certified CSIA member. “Regardless of the variety of instruments and non-traditional sensors—a vibration or power meter, a stick-on temperature sensor, or some other clamp-on instrument—control engineers are expected to know how to pull information from them. We need to know more details about different types of devices, including how they’re networked and what communication protocols they use. For instance, one client uses electronic data recorders to monitor a handful of tags in a single-loop control setup. These recorders were previously panel-mounted with a couple of wires, but now they’ve got Ethernet ports and register/parameter tables, which let users pull data and amalgamate it with the rest of their information.”
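As a rough sketch of pulling values from such a recorder's Modbus/TCP register table (assuming the open-source pymodbus library, whose call signatures vary slightly between versions; the address, register offset and scaling below are hypothetical and would come from the device's register/parameter documentation):

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.1.50")      # hypothetical recorder address
client.connect()
result = client.read_holding_registers(address=100, count=2)  # hypothetical offset
if not result.isError():
    temperature = result.registers[0] / 10.0  # hypothetical 0.1-degree scaling
    print("Channel 1 temperature:", temperature)
client.close()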

Stephens reports that digitalization’s shift from hardware to software is reflected in the designs, specifications and estimates for Hargrove’s many projects and how they unfold. “The I/O counts change as we move from classic, wired I/O via 4-20 mA or 120 V to software-centric I/O via Modbus TCP/IP, Profinet, OPC UA and other Ethernet-based protocols,” says Stephens. “In the past, we’d see 80% hard I/O and 20% soft I/O, and now we’re often seeing about 50% of each. Years ago, we’d bring soft I/O points to distributed control systems (DCS) via different serial or bus cards that could talk with Modbus, Profibus or Foundation Fieldbus. In the 1990s, these tasks were done by add-on or even third-party modules, which later started delivering data via regular Ethernet TCP/IP and versions like EtherNet/IP and Profinet.”

To acquire data in a more native way, Stephens adds that Emerson developed a built-in Ethernet I/O card (EIOC), which it launched in the mid-2010s as part of its S-Series platform. This was followed a few years later by the release of Emerson’s PK controller with onboard Ethernet connectivity for third-party data, such as skid units, IIoT devices or HVAC systems. Other system vendors are increasingly supporting native Ethernet communications for third-party systems. However, even though collecting information has become easier, Stephens adds its usefulness still has to be ensured by using accurate sensors and instruments that reflect actual process conditions and environments, and by accounting for how sensor accuracy decays over time due to clogs, corrosion and other physical factors.

“Data scientists sometimes forget that information comes from operations and equipment that are subject to change,” says Stephens. “They tend to think that all data types and sources stay the same, so they don’t suspect when conditions may have changed. Just because some results are repeatable doesn’t necessarily make them right in the first place. If your information says the sky is green, it’s time to question that data and the devices that produced it.”

Filling the experience gap

To help users find insights faster with available software tools, Kelly Kolotka, analytics engineer at Seeq (www.seeq.com), reports it recently developed several generative artificial intelligence (gen AI) assistants for its industrial analytics software by using large-language model (LLM) software in the Chat Generative Pre-Trained Transformer (GPT) chatbot. These tools include:

• Data Lab Assistant to generate code, debug programs, and review and assist users with Python code development;

• Formula Assistant that lets users employ plain language to develop formulas;

• General Assistant that shows users just starting with AI tools how to clean signals, find periods of interest, and complete other basic tasks; and

• Actions Assistant that accepts written prompts for different tasks, and performs them to completion.

“This isn’t about replacing engineers. It’s about helping them do their jobs better,” says Kolotka. “However, some process facilities, such as refineries, may have a siloed culture for engaging in new analytics or bringing in new controls. Consequently, Seeq focuses on keeping the subject matter experts (SME) at the center, so their cross-functional collaboration is streamlined when trying to perform root-cause analyses, determine why anomalies are happening, or develop a simple interface that’s easy for their team to access, adopt and share.”

For instance, Andres Barbaro, engineering VP at Seeq, reports its AI Assistant is infused with process and analytics capabilities; informed by time-series and SME context in Seeq’s collaborative platform; drives traceable and profitable results two to eight times faster than before; and delivers scalable solutions. For example, Seeq Workbench can be directed by AI Assistant to search for and add data items, such as a signal named “Temperature-Area C,” from a specific data source, identify outlier sensor readings, and execute an analysis that removes spikes to create a new, clean temperature signal.

Figure 1: To increase tonnage and uptime, and improve nickel ore calcination in its kilns, Groupe Eramet’s Société Le Nickel (SLN) in New Caledonia adopted Rockwell Automation’s FactoryTalk Analytics Pavilion8 model-predictive control (MPC) software with an intelligence layer that sits on top of automation systems, continuously assesses present and predicted operations, and resets control targets to reduce variability and boost efficiency. Using MPC in its kilns, SLN reduced product temperature errors by 6%, cut temperature profile variability by 16.1%, and improved uptime from 70% to 83%.
Source: Société Le Nickel (SLN), part of Groupe Eramet, and Rockwell

“We’re excited to put our analytics platform together with AI Assistant because a regular generative AI model couldn’t clean the signal in this way,” says Barbaro, who spoke at Seeq’s Conneqt 2024 conference in May in Miami. “It’s really Seeq Formula doing the hard work. We can also use AI Assistant to close gaps in the graph where the outliers were removed, and create a condition when the absolute difference between the cleansed temperature and the original one is greater than two to show when spikes are happening.”
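Seeq Formula performs this cleansing inside the product. As a rough, tool-agnostic illustration of the same idea (assuming pandas, not Seeq's implementation; the signal and thresholds are invented), a rolling median flags the spikes, interpolation closes the gaps, and a comparison builds the greater-than-two condition Barbaro describes:

import pandas as pd

# Invented temperature signal with two spikes
t = pd.Series([70.1, 70.3, 95.0, 70.2, 70.4, 42.0, 70.5, 70.3])

rolling_median = t.rolling(window=3, center=True, min_periods=1).median()
is_spike = (t - rolling_median).abs() > 2.0       # assumed spike threshold

cleansed = t.mask(is_spike).interpolate()         # remove spikes, close the gaps
condition = (t - cleansed).abs() > 2.0            # True while a spike is happening
print(pd.DataFrame({"original": t, "cleansed": cleansed, "spike": condition}))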

Sweet AI gets in on the act

James Caws, senior automation and analytics engineer at British Sugar (www.britishsugar.co.uk), reports it’s the sole processor of the 8 million tonnes of sugar beets grown annually in the U.K. This 110-year-old industry presently turns the beets into 1.2 million tonnes of sugar and coproducts each year. Its plant in Wissington, Norfolk, also produces bioethanol, and it takes waste heat and CO2 from its combined-cycle turbines to operate onsite greenhouses that grow cannabis for a U.S.-produced epilepsy drug. Its front-end processes are agricultural, while its core applications are mainly chemical, relying on diffusers, absorption columns, reactors, multi-effect evaporation, and big heat exchanger networks for heat recovery. Back-end operations include crystallization, drying and packaging.

“Our main challenge now is we’ve got a lot of people in the industry with more than 30 or 40 years of experience, so there’s a significant risk our technical performance will decline if we can’t capture that expertise within the next few years,” says Caws, who also spoke at Conneqt 2024. “We estimate it takes 10 years to become an SME, including going through all the training for five or so operations roles. When we look at what it takes to train an expert to make technical decisions in our factory, first there will be formal and informal training on the process operations. Next, they’ll look for data in all kinds of places, such as trends data in Seeq or our previous trending packages, laboratory information, financial input, shift logs, and talking to the other SMEs to get their opinions. Even so, we may need to try five different solutions to find the one that works. Rather than this shotgun approach, we want to move towards a rifle approach that will be right the first time.”

To secure its legacy know-how and make precise decisions, Caws reports that British Sugar drafted a three-part Advanced Insight Center program that includes:

• Capturing more than 100 years of knowledge, time-series analyses and time-consuming trends, and embedding them into Seeq’s automated analytics,

• Using AI to implement 24/7 digital assistance, and

• Achieving real-time, data-driven, visualization-enabled decision making.

“We believe it’s really important to visualize to drive decision making, which is why we’ve been working with Iota Software (www.iotasoftware.com) for the last six months,” explains Caws. “However, even with the best analytics and visualizations, we still need to change people’s workflows. How can we have just a few SMEs examine all these dashboards and abnormal situations across the whole factory? We also don’t want process operators to spend so much time on alarm rationalization or designing HMIs. We want to reduce their workloads, instead of giving them another set of dashboards to evaluate.”

To help its clients sort through all their data, Hargrove employs a unified namespace (UNS) that grants access to network participants, and lets it present content to operators, engineers, maintenance and managers on dashboards tailored for each group. Stephens adds that Hargrove is also using AI tools to help clients improve quality and reliability by employing offline analytics and real-time monitoring and alerts.

“We work with several types of AI, but it all boils down to information on a platform that gets analyzed by an algorithm. AI tends to take everything as gospel, since it often doesn’t have a valid way to tell if that data is legitimate, or if it should alert staff that it isn’t, so cleaning and pre-processing data is important,” adds Stephens. “AI is still evolving, but it’s beginning to allow us to look at much greater volumes of data, and find the relationships, efficiencies and value in it. For example, AI can help with multi-variable analyses, and generate a better fingerprint of what’s going on in a process. In addition, as microprocessor costs decrease, AI capabilities will increase, so more intelligence will be coming, including devices with self-analytics. It will make data collection and analysis easier and more powerful.”

Quiz the assistant

Consequently, British Sugar worked with Seeq over the past few months to develop a proof-of-concept (PoC) with Seeq’s AI Assistant. Its basic workflow lets users ask the digital assistant a question about British Sugar’s data, and the AI searches for abnormal conditions in Seeq’s monitoring software that have been defined by the SMEs. It also searches corporate documents, textbooks and causal maps for related information about the situation it’s researching, as well as impacts, potential causes and suggested actions. This allows the AI Assistant to add context and assemble a response that’s specific to a particular persona, such as a professional role or job description. It uses a technique called retrieval-augmented generation (RAG), which improves regular, language-model responses by adding real-time, external data retrieval.
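As a toy sketch of the retrieval half of RAG (assuming scikit-learn; the documents are invented stand-ins for corporate documentation, and TF-IDF stands in for the embedding stores real deployments use), the question is ranked against a small corpus, and the best matches are prepended to the prompt the language model receives:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Pan 4 high saturation at seed can impact crystallization quality.",
    "High brix at seed is often caused by evaporator carryover.",
    "The microwave brix instrument drifts and needs periodic calibration.",
]
question = "Has anything gone wrong with crystallization in the last 24 hours?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors).ravel()

# Prepend the two best-matching documents to the prompt the LLM receives
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]
prompt = "Context:\n" + "\n".join(top_docs) + "\nQuestion: " + question
print(prompt)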

“This doesn’t use information on the Internet. It just references the data we’re pointing at in Seeq and our corporate documentation,” adds Caws. “This makes the results more accurate and reduces the possibility of a hallucination.”

For example, one of the PoC’s sample questions was, “Has anything gone wrong with crystallization in the last 24 hours?” The AI Assistant responded, “Within the last 24 hours, there were two instances of Pan 4 high saturation at seed that could impact product quality. When checking for root causes, there have also been high brix at seed events, and the microwave brix instrument reading is questionable. It would be worth performing a calibration.” Likewise, the AI Assistant could be asked other what-if questions, such as:

• At any time, an operator trainee could ask the equivalent of the most experienced technician, “A laboratory result for SO2 in thin juice came back as high. Explain the impact of this and search for root causes.”

• SMEs responsible for optimizing a unit across multiple factory locations could quickly focus on the biggest opportunities by saying, “Give me a prioritized list of current opportunities within my areas across all factories.”

• A plant manager traveling to work in the morning could say, “Give me a production summary report for the last 24 hours.”

As part of the PoC, Caws reports the Wissington plant developed and adopted several dashboards using Iota Vue software, and added the Seeq AI Assistant to them. For example, a detailed view of the crystallization process includes batch cycle times, spotting abnormal situations and system checks. However, it also integrates the AI Assistant into the upper-right corner of the display, so users can ask questions, such as, “What was the production for the first week of the month?,” “What process issues occurred on Mar. 1?,” or “What are white pans used for?” (Figure 2)

Figure 2: British Sugar’s plant in Wissington, U.K., developed several dashboards using Iota Vue software, and added Seeq’s AI Assistant to them. A detailed view of the crystallization process includes batch cycle times, spotting abnormal situations and system checks, but it also integrates AI Assistant into the upper-right corner of the display, so users can ask it questions.
Source: British Sugar, Iota Software and Seeq

“AI Assistant is really good at interpreting what you mean, so it can break down what production was, what the production targets were, and estimate the value of lost opportunities,” explains Caws. “We also asked what went wrong during the same period, and it gives us a table of the abnormal situations, which we can drill into to ask further questions about root causes and other issues. Finally, we can ask what to do in response to these situations, and it gives us a list of potential remedies and actions that our best SME would suggest we try, but now that expertise is available 24/7 to everyone.”

Slice, dice, containerize and hybridize

While web-based software can help analytics, E Tech’s Switzer adds it’s also aided by how each organization’s data is partitioned. Most users previously had one huge database in a horizontal architecture, but the recommendation now is to break it into hundreds of parts based on software containers. “For example, our CPG client wanted to transfer their files from one monolithic database with a single application program interface (API) to a RESTful API and one database at each of 12 buildings or 1,000 sites,” explains Switzer. “We put an Azure gateway in front, so the client could view our panel access, and monitor all of the small applications in those 1,000 databases in their horizontal, distributed architecture. Because one database on one server is limited, it’s more flexible to spin up as many databases and distributed applications as needed.”


Switzer reports containerized applications are also more efficient because they’re built on common software types, which are familiar to many users, such as Representational State Transfer (RESTful), Simple Object Access Protocol (SOAP), JavaScript Object Notation (JSON), Message Queuing Telemetry Transport (MQTT) publish-subscribe protocol, and Node-RED flow-based, low-code development software. “Containers are to virtual machines (VM) what VMs are to physical servers,” adds Switzer. “This is why it helps to use common development tools and protocols, which can talk to each other, and perform tasks that can be scaled up.”
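As a minimal sketch of one such containerized micro-application (assuming Flask; the route and payload are invented), each site's service exposes its own database through a small RESTful endpoint that returns JSON:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/sites/<site_id>/energy", methods=["GET"])
def site_energy(site_id):
    # In production, this would query the site's own database
    return jsonify({"site": site_id, "kwh_today": 1234.5})

if __name__ == "__main__":
    app.run(port=8080)   # one such service per site, behind the gateway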

Hasselbeck explains that monitoring power consumption or other parameters occurs on a hybrid platform, such as a virtual server or cloud-computing service. However, sensors, cables and physical servers are usually on-premises, which is why they require a gateway like the Azure version E Tech uses to reach a SCADA/HMI stack and RESTful API in the cloud, allowing users to benefit from the best aspects of each side. E Tech refers to its bridging capability as a “product as a service” (PaaS) or customer-specific SaaS.

“This can alleviate bottlenecks in processes by taking thousands of formerly separate, individual reports, and turning them into one centrally managed reporting application,” says Hasselbeck. “It allows users to have better interactions with their data, which can also be finer-grained at the millisecond-level, rather than the tenths of seconds or full seconds they experienced before.”

For example, E Tech’s prior work with its CPG and data center clients earned it the trust to team up on new opportunities, such as a middleware solution and custom-cloud application connecting their enterprise resource planning (ERP) to various downstream processes via Kepware drivers distributed to mechanical components. In fact, the CPG client has deployed this solution at about 100 plants.

Ajinomoto spiced up production by amplifying its technology investments

by Meg Lashier and Ziair DeLeon

DURING day-to-day operations in many processing plants, the first order of business is to achieve and maintain stable operations. While acceptable stability doesn’t necessarily equal maximum efficiency, it's usually difficult to achieve the latter without the former. Process optimization and fine-tuning aren't only important to increase production value, but also to keep the underlying process running as an essential baseline before undertaking more sophisticated enhancements.

Proper tuning of proportional-integral-derivative (PID) control loops is necessary first to achieve overall stability, and later for more optimized performance. Consequently, a typical process plant employs trained staff to tune loops with or without the help of software tools. Unfortunately, this activity is often intermittent and, therefore, difficult to support solely with in-house personnel, who are already tasked with a full load of other pressing responsibilities.

Ajinomoto’s process plant in Eddyville, Iowa, which produces monosodium glutamate (MSG), embodies a typical scenario (Figure 1). The facility has about 200 PID control loops, and the support team initially procured a PID loop-tuning software package followed by a plantwide control loop performance monitoring (CLPM) solution to look after these loops. Despite these investments, the team only had bandwidth for the most basic applications. The situation changed when an enhanced support offering introduced by the software vendor empowered the MSG producer to kick their process performance into high gear.

Optimization ranges from basic to advanced

Plant staff are familiar with the occasional need to use a voltmeter to verify motor or signal voltages, or a torque wrench to properly tighten bolts. In the same way, they use loop-tuning software from time to time to improve the performance of individual control loops. Though common, this approach is basic and can be counterproductive because it overlooks the much more complicated and subtle interactions often at play between loops and the varying conditions they experience.

Loop-tuning software is used to model the dynamic behavior of individual PID control loops and recommend tuning parameters suitable for a loop’s application. The goal is to optimize a loop’s responsiveness within a range of operation and under typical conditions. While many poorly performing loops are easy to spot due to equipment or process oscillations, this isn’t always the case. Most process plants only pursue their worst performing loops, and tend to do it in a reactive manner.
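As a rough illustration of what such a package computes (using the widely published lambda/IMC tuning rules for a first-order-plus-dead-time model, not any vendor's specific algorithm; the model values below are assumed):

# Lambda/IMC PI tuning from a first-order-plus-dead-time (FOPDT) model
K = 2.0       # assumed process gain, %/%
tau = 5.0     # assumed time constant, minutes
theta = 1.0   # assumed dead time, minutes

lam = max(3.0 * theta, tau)       # a common, conservative closed-loop lambda
Kc = tau / (K * (lam + theta))    # PI controller gain
Ti = tau                          # PI integral time, minutes
print(f"Kc = {Kc:.2f}, Ti = {Ti:.1f} min")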

Figure 1: Ajinomoto’s plant in Eddyville, Iowa, is one of the manufacturer’s global production portfolio of 117 facilities. The plant produces monosodium glutamate (MSG) that's used as a food seasoning and flavor enhancer.
Source: Ajinomoto and Control Station

Figure 2: CLPM software simplified the isolation of excessive variability in steam pressure. This chart of the before-and-after trend of the process average absolute error shows a notable decrease after shutdown and replacement of the on/off valve.
Source: Ajinomoto and Control Station

Unlike tuning software, CLPM solutions monitor loop behavior on a plant- or enterprise-wide basis to proactively identify issues that put process performance at risk. CLPM solutions go beyond the basics of loop tuning. They equip users with insights into mechanical issues and challenges stemming from loop interaction.

Ajinomoto saw the potential behind CLPM and deployed Control Station’s PlantESP loop performance monitoring software, which leverages data from a production facility’s existing historian. It equips users with an intuitive set of key performance indices for identifying trends, along with advanced forensic tools to isolate root causes. Like most software, all CLPM solutions require active users to realize measurable benefits.

Better tasting CLPM technology

Ajinomoto recognized the value to be gained from improving control in terms of increased quality and production output, and reduced energy and materials use. Like most processing plants, this facility generally ran well, but it persistently struggled with a few particular control-loop issues that sometimes weighed heavily on performance.

The enterprise committed to digital transformation and made great strides implementing advanced, model-predictive control (MPC), reducing manual interventions, providing comprehensive visualization and reporting, and pursuing data-quality initiatives. Applying CLPM was a logical next step in the plant’s transformation, though assigning the necessary resources remained a sticking point.

In 2023, the CLPM software supplier expanded its services portfolio by introducing Digital Lifecycle Solutions (DLS). It provides a means for the supplier’s technical experts to partner with end users, take a lead role in using the CLPM software, and facilitate achievement of plantwide process optimization. The availability of DLS, which engages process control specialists collaboratively with onsite personnel, was a favorable concept and a tipping point for Ajinomoto’s business.

Ajinomoto established a remote services contract with the CLPM supplier. To clearly define the scope, the team created a formal project charter that assigned resources, clarified performance targets and established joint accountabilities. Through this engagement, the CLPM supplier conducts weekly analyses, and submits reports detailing a nominal list of worst-performing control loops. The list includes supporting documentation and recommendations for corrective action. Monthly meetings ensure the team is aligned, and that impediments to improvement efforts are addressed systematically.

Extracting efficiency from unruly processes

Posting a first win is essential to the long-term success of any team. Early in the partnership, the team discovered a steam header pressure loop that was exhibiting oscillatory behavior. The loop was the source of variability affecting more than 10 downstream units before recirculating back upstream. Persistent oscillations negatively impacted the gas-fired units responsible for producing steam as they ramped up and down in response to pressure changes. More importantly, variations in steam pressure created inconsistently sized MSG crystals.

At first, the team attempted to eliminate the variability by tuning just the steam pressure loop, but this was only partially successful. With so much to be gained by eliminating the remaining variability, they considered other loops that might interact with the steam pressure controller to negatively influence its performance. However, the plant runs hundreds of control loops, and HMI trends made it difficult for the team to pinpoint the variability’s source.

Using the CLPM software’s forensic capabilities, they found the answer. Employing the spectral analysis and cross-correlation tools available in the software, the team first identified other PID loops that shared the same repetitive cycling, and then pinpointed an on/off valve in a downstream unit that proved to be the root cause. The on/off valve was installed years earlier, instead of a modulating valve that would have been appropriate for the application.

Figure 3: Producing consistent crystals is critical to Ajinomoto’s success when manufacturing MSG. Through engineering collaboration and using CLPM technology, it improved product quality.
Source: Ajinomoto and Control Station
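The CLPM software's forensic tools are proprietary, but the two techniques named above can be sketched generically (assuming scipy; the signals below are synthetic stand-ins for historian data):

import numpy as np
from scipy import signal

fs = 1.0                                  # one sample per second
t = np.arange(0.0, 600.0, 1.0 / fs)
steam = np.sin(2 * np.pi * 0.01 * t) + 0.1 * np.random.randn(t.size)
valve = np.sin(2 * np.pi * 0.01 * (t - 30)) + 0.1 * np.random.randn(t.size)

# Spectral analysis: dominant cycling period of the steam pressure loop
freqs, psd = signal.welch(steam, fs=fs, nperseg=256)
print("dominant period, s:", 1.0 / freqs[psd.argmax()])

# Cross-correlation: lag at which the two loops' cycles best line up
xcorr = signal.correlate(steam - steam.mean(), valve - valve.mean())
lags = signal.correlation_lags(steam.size, valve.size)
print("best-aligned lag, s:", lags[xcorr.argmax()] / fs)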

This type of issue—where a mechanical solution negatively impacts process performance—isn't unusual. In this case, the on/off valve was a technically sound option for the basic heating tank it supplied. However, the valve’s limitations resulted in years of unacceptable surging upstream. Having isolated the valve as the root cause with the help of the CLPM software, the team recommended its replacement with a new modulating valve that allowed proper tuning of the steam header pressure loop and other loops that ultimately enhanced operational reliability (Figure 2).

As more of the bad-actor loops were tuned—individually and in groups—the team found it easier to spot other less impactful, but still important, issues. The CLPM software provided the big picture of plantwide control loop performance. It also let users drill down and assess performance of a unit, a loop, and even a specific metric.

Savoring success

Controlling the performance of complex production processes is a difficult task. Although basic CLPM functionality can reveal problematic conditions, many end-users aren't fully equipped to perform deeper analyses. By employing CLPM technology and partnering with the supplier’s expert staff, it becomes easier to apply advanced analytics and implement effective, corrective actions. As an extension of a plant’s staff, third-party CLPM experts let resident engineers focus on maintaining production, while benefiting from expert and timely analysis.

After six months of teaming with the CLPM supplier, approximately half of the facility’s loops were optimized. The site documented notable improvements in overall process stability, energy use and throughput. The size of crystals produced—a proxy for quality—has been maintained at the highest levels (Figure 3). Comprehensive use of the CLPM software’s capabilities and reporting provided better visibility into loops that are trending in problematic directions, so the team can proactively address underlying issues well before they lead to costly unplanned downtime.

The partnership improved the time efficiency and overall effectiveness of internal company engineering resources, and it helped the team gain greater insights into plant performance than were previously possible. From Ajinomoto’s perspective, the CLPM software and expert DLS services arrangement more than paid for themselves. Encouraged by these successes, the company is investigating expanding these services to include additional facilities around the globe.

Meg Lashier is senior production coordinator at Ajinomoto Health & Nutrition America Inc. She has a lead role in the automation and digital transformation group at the company’s Eddyville, Iowa, facility. Ziair DeLeon is a field application engineer at Control Station. He is responsible for deploying, using and supporting Control Station’s portfolio of process diagnostic and optimization solutions.


UNDERSTANDING controllers, such as the benefits of cascade and reset feedback, came from my observations with simulations. It didn’t happen with abstract mathematical derivations, memorizing what's written or trial-and-error. Simulations are useful for evaluating and demonstrating the benefits of next-level, advanced regulatory control or model predictive control (MPC).

I encourage you to use dynamic simulators to legitimately represent your process, and add environmental effects to the simulator (noise, drift, stiction, resolution) to make the simulation representative of what nature provides.

A simulation including natural vagaries is termed a stochastic simulation. By contrast, most simulations are deterministic. By definition, a deterministic calculation returns the same value whenever or wherever you do it. For example, 3 x 4 = 12, whether you did the multiplication in fourth grade, or an astronaut does it today at the space station. A stochastic process, however, doesn't reproduce exact values. It produces a realization (a possible value) from a distribution of values. For example, roll a six-sided die and the values could be 1, 2, 3, … 6, each with a 1/6 probability. When your process is operating, nature doesn't keep the inlet humidity, fuel BTU content, ambient losses or catalyst reactivity constant. Also, nature contrives mechanisms that add noise to measurements.

Run the tests you want to use on a stochastic simulation to evaluate controllers. Use long periods at steady setpoints to determine process variability, plus setpoint changes that mimic changes in production rates and product specs. From extended-time simulation, measure the frequency and magnitude of specification violations, waste generation and on-constraint events. Then, shift setpoints away from constraints and specifications, and tune controllers to make violations acceptable. Evaluate the process economics (throughput, costs of materials and energy, cost of upsets, etc.).

Part one of this two-part article explains how to add environmental effects to your simulations

by R. Russell Rhinehart

The deterministic simulator represents what we model as the truth about nature. In general, the exact rules and coefficient values that nature uses to generate data are unknown. To create a digital twin with appropriate fidelity to your process, you may need to calibrate your simulator—adjust coefficient values to better match the process data. Once you've calibrated a dynamic simulator, add the following features to better represent the control challenges.
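As a minimal sketch of that calibration step (assuming scipy, an invented first-order model, and invented process data), a least-squares fit adjusts the coefficient values:

import numpy as np
from scipy.optimize import curve_fit

def step_response(t, gain, tau):
    # invented first-order response to a unit step input
    return gain * (1.0 - np.exp(-t / tau))

t_data = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])   # invented test times
y_data = np.array([0.0, 0.9, 1.6, 2.5, 3.1, 3.3])    # invented process data

(gain, tau), _ = curve_fit(step_response, t_data, y_data, p0=(3.0, 2.0))
print(f"calibrated gain = {gain:.2f}, time constant = {tau:.2f}")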

Simulating noise

Measurements are subject to noise: random perturbations of the true process output. Noise on measurements can come from several sources. One is thermal static from device electronics. Another is caused by electromagnetic influences from nearby machines on measurement transmission wires. A third is mechanical vibration of a device. These are external to the process. However, the process might also generate noise-like deviations.

Flow turbulence causes pressure perturbations on a differential pressure measurement device. Boiling causes pressure perturbations and thermal variations on sensors in the two-phase mixture. Incomplete mixing downstream of an injection point causes composition fluctuations. Noise perturbations are independent and normally distributed.

Commonly, the Gaussian (also called normal or bell-shaped) distribution is accepted as a representative model of noise. It represents the confluence of many random influences of equivalent impact:

ni = σ √[−2 ln(r1,i)] ∙ sin(2π r2,i)

Here, the variables r1,i and r2,i represent uniformly distributed, independent, random numbers on the 0 to 1 range (excluding 0), typically generated with a RAN or RAND function, and ni is normally and independently distributed with a mean of zero and a standard deviation of σ. The subscript i represents the sampling counter, which is shown to explicitly indicate that the perturbation changes with each sampling. There's no need to include it in the simulator.

There are several approaches to including noise in a measurement. The most common is to consider the noise as additive to the true measurement. To simulate this, add the noise perturbation to the deterministic model output. Here, σ in the previous equation has the same units as the measured variable:

xmeasured,i = xdeterministic model,i +ni
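As a minimal sketch, the Box-Muller formula and this additive-noise equation can be coded directly; the σ and model values below are assumed:

import math
import random

def gaussian_noise(sigma):
    r1 = random.uniform(1e-12, 1.0)    # uniform on (0, 1], excluding 0 for the log
    r2 = random.random()
    return sigma * math.sqrt(-2.0 * math.log(r1)) * math.sin(2.0 * math.pi * r2)

sigma = 0.5          # assumed noise standard deviation, in measurement units
x_model = 72.0       # assumed deterministic model output
x_measured = x_model + gaussian_noise(sigma)   # the additive-noise equation
print(x_measured)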

Other noise models could consider that the noise magnitude scales with the variable. This may be relevant in flow rate measurements, when the intensity of process turbulence is proportional to flow rate, or when process gains could characterize the noise. For example, in pH processes, measurement noise might be due to imperfect mixing, with base-rich and acid-rich fluid packets passing the sensor. In a strong-acid/strong-base mixture, pH sensitivity to composition is high in the near-neutral (pH = 7) region, but low in the far-from-neutral ranges (pH < 5, or pH > 9). Finally, in some processes, noise isn't symmetric. The “plus” deviations may be larger or smaller than the “minus” deviations, and may need an alternate function to generate skewed noise. You must understand the sources of random noise in your process measurement to be able to choose a noise model. You also need to choose a value for σ in the first equation. The 5σ range includes about 99% of all values. So, if you use your experience for the range of normally noisy values when the process is at steady state, set σ to be one fifth of that range:

σ = range/5

In my experience, there is no justification to determine the exact value for σ. It takes time, and the actual noise your process generates will likely not be exactly Gaussian. In the third equation, don't use the range of possible process values. Use the range of the perturbations at steady state. For example, the speedometer on an automobile might have a noisy display with a σ of about 0.1 mph, so at 55 mph, the display fluctuates between 54.75 and 55.25 mph. The same vehicle might have a speed range of 0 to 80 mph. Incorrectly using the speed range implies the measurement noise has a standard deviation of 16 mph, indicating that at 55 mph the display would fluctuate between 15 and 95 mph.

Simulating resolution

Resolution is the smallest reportable change in a signal. Your digital watch might reveal the time in seconds, and report that time persists at 11:15:26, until it jumps to 11:15:27. Time did not stop for the duration; the display just held the last reportable value until it could report a new value. The resolution is 1 second, or 1/60th of a minute, or 1.67% of a minute. A digital thermometer may have a resolution of 0.5 ºC, and as the temperature progressively changes, the display holds the prior value until the change is enough to display the new value. If the thermometer range is 25-45 ºC, then it can only report 41 distinct values, and has a resolution of 2.5% of full range. If a 10-bit processor is in the signal transmission path, it can only hold 2^10 = 1,024 values. If the calibration range is in the middle 64% of its signal range (as 4-20 mA is a portion of the 0-25 mA range), then only 655 values are used. This means the signal resolution is the CV range divided by 655. The resolution, the ability to detect change, is 0.153% of the full range.

Resolution is visually detectable when the signal jumps between identical discrete values. In nonlinear processes or sensors, discretization can be invisible at one operating range and clearly visible at another. When detectable, poor resolution can cause a controller to cycle in a pattern like valve stiction. This effect could be termed discretization or truncation. Choose either the smallest interval for change, ∆, or the number of values, N, within a range of R = yhigh – ylow. The relation is ∆ = R/N. Then, if y is the process value, and ydisplay is what will be transmitted or reported, calculate ydisplay from y:

ydisplay = ylow + ∆ ∙ INT ((y – ylow) / ∆)

This equation truncates the y-value. You could round it by using INT((y – ylow)/∆ + 0.5).

The controller should act on the ydisplay value, not on the simulated y value.
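A minimal sketch of this discretization, assuming the 0.5-degree thermometer example above:

def discretize(y, y_low, delta, rounding=False):
    # INT() truncation per the equation above; add 0.5 first to round instead
    n = (y - y_low) / delta
    if rounding:
        n += 0.5
    return y_low + delta * int(n)

y_low, delta = 25.0, 0.5     # assumed thermometer range start and resolution
print(discretize(31.37, y_low, delta))   # reports 31.0; the controller acts on this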

Simulating bias

Noise is often termed random error. However, there may be a measurement calibration bias, a permanent offset, also called a systematic error. To simulate a systematic error, simply add the error to the measurement:


xmeasured,i = xdeterministic model,i + bias

Certainly, you can simulate both random and systematic error combined. As with noise, the systematic error might be scaled by (not added to) the process variable value.
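A minimal sketch of combined systematic and random error, with assumed bias and σ values (random.gauss produces the same normally distributed perturbation as the Box-Muller formula):

import random

bias = -0.8      # assumed permanent calibration offset (systematic error)
sigma = 0.5      # assumed noise standard deviation (random error)
x_model = 72.0   # assumed deterministic model output

x_measured = x_model + bias + random.gauss(0.0, sigma)
print(x_measured)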

Simulating faults

Faults could be related to many types of instrument failures, process issues or data transmission errors. These could be spurious signals, a progressive degradation, or a step event. You might be demonstrating an automation technique for robustness to faults. Simulate the fault effect in a reasonable and defensible manner.

Apparatus for data analytics

Control's monthly resources guide

COUNTERING ABNORMALITIES

This online article, “High-quality production with the fusion of process knowledge and data analysis technology—process data analysis using machine learning,” shows how Sumitomo Seika Chemicals Co.’s plant in Himeji, Japan, improved its production of super-absorbent polymers by drafting an observation item list, developing a workflow for process analytics, identifying disturbances, and reducing fluctuations. It’s at www.yokogawa.com/us/library/resources/references/successstory-sumitomoseika-chemicals-en

YOKOGAWA www.yokogawa.com

BASIC STAGES AND VIDEO

This online article, “A step-by-step guide to the data analysis process” by Will Hillier, covers the stages of data analytics, including defining questions, data collection, cleaning with exploratory analyses, and descriptive, diagnostic, predictive and prescriptive analyses. It's at www.careerfoundry.com/en/blog/data-analytics/the-data-analysis-process-step-by-step.

CAREER FOUNDRY www.careerfoundry.com

MONITOR, MODEL BATCHES

This online article, “Understanding the basics of batch process analytics,” covers process monitoring and diagnostics, modeling outputs to monitor quality, batch modeling of completed batches, introducing statistical process control (SPC), and batch evolution and level models. It's at www.sartorius.com/en/knowledge/science-snippets/understanding-the-basics-of-batch-process-data-analytics-507206

SARTORIUS www.sartorius.com

PREDICTIVE GAS OPTIMIZATION

This two-minute video, “Nippon Gases prevents unplanned downtime, extends APM strategy with Predictive Asset Intelligence,” shows how the company used AI-based condition monitoring software to improve the performance of critical equipment, such as compressors, turbines and purifiers. It deployed SymphonyAI’s Predictive Asset Intelligence software, built on the Iris Foundry dataops platform, to calculate asset health, maximize uptime with predictive warnings, and optimize operations using deep-learning AI models. It’s at www.youtube.com/watch?v=g_ecfwmUA4c

SYMPHONY AI www.symphonyai.com

CLEANING EXCEL DATA VIDEO

This 10-minute video, “Master data cleaning essentials on Excel in just 10 minutes,” shows how to transition from a raw dataset with several errors to a clean dataset that can be presented to managers by using techniques, including text-to-columns and removing duplicates, and formulas such as trim, proper, lower, iferror and others. It’s at www.youtube.com/watch?v=jxq4-KSB_OA

KENJI EXPLAINS

www.youtube.com/@KenjiExplains

WHEN TO AUTOMATE

This online article, “Automated data analysis: everything you need to know” by Rand Owens, is the ninth chapter of a larger data analytics publication. It covers fundamentals, benefits, when to automate analysis, best practices, setting goals, evaluating present infrastructure, establishing governance policy and mapping workflows. It’s at www.polymersearch.com/data-analysis-guide/automated-data-analysis-everything-you-need-to-know

POLYMER SEARCH www.polymersearch.com

STATS PARALLEL ANALYTICS

This online article, “What is statistical process control?,” provides a refresher on SPC and statistical quality control (SQC) and how they relate to more recent data collection and analytics efforts. It covers cause-and-effect diagrams, check sheets, control charts, histograms, Pareto charts, scatter diagrams and data stratification issues. It’s at asq.org/quality-resources/statistical-process-control

AMERICAN SOCIETY FOR QUALITY www.asq.org

This column is moderated by Béla Lipták, who also edits the Instrument and Automation Engineers’ Handbook, 5th edition, and authored the recently published textbook, Controlling the Future, which focuses on the control of AI and climate processes. If you have a question about measurement, control, optimization or automation, please send it to liptakbela@aol.com

When you send a question, please include your full name, job title and company or organization affiliation.

Can bad level transmitter settings cause accidents?

The Fukushima Daiichi nuclear accident is one example, so how can others be avoided?

Q: I want to know more about differential pressure (DP)-type level transmitter calculations and calibrations, so I can better understand how to correctly make low-range value (LRV), upper-range value (URV) and zero-shift settings.

J. AASHOK

instrumentation engineer jayaashok276@gmail.com

A1: The nuclear accident at Fukushima Daiichi is a good example of such an event, but before describing it, I’ll make some general points about level transmitters (LT) and the terminology used when providing the process data describing the liquids they measure and their installation.

For dry-leg applications, you should provide the specific gravity and expected pressure/temperature ranges of the process fluid. For wet-leg applications, you should provide the specific gravity of the process fluid (SG1) and of the wet-leg fluid (SG2). Then, you must select an LT that has a range larger than the span required for the application. The span is defined as the difference between the pressures the LT detects on its high- and low-pressure sides. The output signal from the LT corresponds to the difference between the upper- and lower-range values (S = URV - LRV). LT accuracy is a percentage of its range, which shouldn’t be much larger than the application’s required span.

Regarding the process fluid, you should provide safety, toxicity and plugging data, and report if bubbler components, chemical seals, pressure repeaters, or regular or extended diaphragm seals are required. You should also specify the elevations of the lower- and the upper-pressure taps, and the height of the wet leg, if any.

Below are some considerations you should remember:

1. It must be understood that all LT installations have two setpoints and two zeros. One zero is the level in the tank (Lmin) and the other is the pressure difference detected by the LT. If the elevation of the LT and that of the bottom pressure tap on the tank are the same, the two zeros will be the same (Figure 1).

2. If the elevation of the transmitter is above or below the pressure tap on the tank, or if the specific gravity of the process fluid differs from that of the wet-leg fluid, suppression or elevation of the LT span is required. As shown in Figure 2, suppression (S) is needed if the LT elevation is below the lower pressure tap on the tank. In that case, the zero of the LT must be suppressed by the difference between the maximum span (Sm) and the calibrated span (Sc), which is the distance S shown in Figure 2.

Table 1: Adjustment ranges (maximum and minimum) of standard DP cells.

3. The LT action must also match the application. It’s usually direct if a dry-leg installation is used, and reversed for a wet-leg installation. If the LT elevation is below the lower pressure tap, the zero of the LT is suppressed. The amount of the suppression equals maximum span minus calibrated span (Sm - Sc).

4. For dry-leg applications, make sure the process vapors don’t condense. One method is to use pressure repeaters to isolate the vapor space in the tank and the vapor above the wet leg. Also, for wet-leg applications, make sure the leg is always full, because the fluid in it is heavier than the process fluid, and that it doesn’t evaporate or boil due to solar exposure or for other reasons.

5. In case of an unusual wet-leg installation, where the LT is installed above the process tank, the LT action is a function of the net combined effect of the elevations of the transmitter and the lower pressure tap.

6. Make sure the supplier correctly calibrates the transmitter, so the transmitter's span and suppression correspond to the difference between the upper- and lower-range values (URV - LRV).

7. To protect against corrosion, toxicity or plugging, consider pressure repeaters or chemical seals.

8. The zero, span, elevation or depression can be adjusted on mechanical or electronic LTs (Table 1).

Accidents are often caused by errors in LT measurements, and some of the worst consequences can be losing process cooling. This occurred at Fukushima, where reactor overheating caused boiling in the wet leg. The resulting water loss in the wet leg caused the LT to report a rise in the cooling water level in the reactor, when, in fact, it was dropping (Figure 3).

BÉLA LIPTÁK

liptakbela@aol.com

A2: It would be very difficult to keep the high-pressure sensing line filled only with gas. I prefer to use remote chemical seals in this service.

CULLEN LANGFORD control specialist CullenL@aol.com

A3: If your high-pressure leg is diaphragm-sealed and filled with an inert transfer fluid, the pressure on the high-pressure side of the LT will be the head (pressure) of the transfer fluid plus the head of the liquid in the tank plus the vapor space pressure. The LP side will see only the pressure of the vapor space. Since the head pressure in the HP leg is constant, it can be removed by setting the zero point of the LT.

RICHARD CARO

ISA life fellow DickCaro@CMC.us

URV = Lmax(SG1) – W(SG2); LRV = Lmin(SG1) – W(SG2); SPAN = URV – LRV

Figure 1: Wet-leg transmitter at an elevation that is the same as the bottom tap elevation of a closed tank.

URV = (Lmax + S)(SG1) - (W + S)(SG2); LRV = (Lmin + S)(SG1) - W(SG2); SPAN = URV - LRV

Figure 2: Wet-leg transmitter elevation is below the bottom tap elevation of a closed tank.
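As a quick numerical check, the Figure 2 equations can be evaluated exactly as printed; all values below are assumed for illustration:

# Implements the Figure 2 (suppressed-zero, wet-leg) equations as printed
L_max, L_min = 100.0, 0.0   # assumed level range above the lower tap, inches
W = 100.0                   # assumed wet-leg height, inches
S = 24.0                    # assumed transmitter elevation below the lower tap, inches
SG1, SG2 = 0.9, 1.1         # assumed process and wet-leg specific gravities

URV = (L_max + S) * SG1 - (W + S) * SG2
LRV = (L_min + S) * SG1 - W * SG2
SPAN = URV - LRV
print(f"URV = {URV:.1f}, LRV = {LRV:.1f}, span = {SPAN:.1f} inches of water")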

Figure 3: Loss of water in the wet leg contributed to the Fukushima accident.

Cables, connectors and buddies hit the links

Protective covers, couplers, assemblies, strippers and labels get wires where they need to go

PANEL-INTERFACE CONNECTORS

Modlink panel-interface connectors from Murrelektronik include interface inserts, frames and optional protection covers that can be easily customized. Modlink interface inserts are available in nine connection combinations that may include RJ45 Cat5e, female/male 15-pin D-sub HD15, female/female USB-A, female/female nine-pin D-sub, RJ12, and a single 120 VAC outlet. Modlink’s IP65-rated single or double frames mount directly on an enclosure opening.

AUTOMATIONDIRECT

www.automationdirect.com/panel-interface-connectors

LEVER-ACTUATED SPLICING CONNECTORS

Wago 221-Series Lever-Nuts are available with two-, three- and five-wire connectors, inline versions and carriers for panel or DIN-rail mounting. They support 24-10 AWG wire sizes for solid, stranded and flexible conductors. UL-rated for 600 V and up to 30 A, the 221-Series has a transparent housing, allowing visual inspection of strip length and wire insertion, while its levers ensure comfortable operation with minimal force. Maximum operating temperature is 105 °C.

GALCO www.galco.com

EASY CRIMP FOR RAIL AND OUTDOORS

Lumberg Automation RM12C crimp connectors let train builders and operators cut and terminate custom cable lengths for onboard, trackside and other outdoor applications. They feature easy-to-use crimp terminations for onsite assembly, as well as secure, reliable connections that support continuous, error-free data flows. RM12C also provides vibration resistance to withstand the harsh demands and constant motion of rail and rolling stock environments.

BELDEN www.belden.com/new-products

M8/M12 SENSOR-ACTUATOR CABLES

Available with different threads, wire types and other variations, M8/M12 sensor-actuator cables are deployed with M8 and M12 connectors, but they're also available with common threading types, such as ½ inch, 7/8 inch, M9 or M23. Gasproof and gold-plated crimp connections increase vibration resistance and durability, while vibration-resistant knurled nuts enable quick, reliable connections. The sensor-actuator portfolio’s wire types include PVC, PUR and POC sheathings.

PEPPERL+FUCHS

www.pepperl-fuchs.com

BUS COUPLER BRIDGES I/O MODULES

SX8R bus coupler supports up to seven I/O on the base unit, and up to eight more I/O with a click-in expansion power supply, for a total of up to 15 I/O. It supports 255 Modbus TCP nodes, 32 EtherNet/IP nodes with Idec’s FC6A Plus PLC, and eight or 16 CC-Link IE Field Basic nodes when using specific Mitsubishi PLCs. SX8R's 24 VDC power connector block is detachable, so users can choose push-in or screw terminals. It operates at -25 °C to 65 °C.

IDEC CORP.

800-262-IDEC (4332); lp.idec.com/SX8R

DUAL-TERMINAL BINDING POSTS

Multicomp Pro MP770674 dual-terminal binding posts are 60 A rated, and feature a 3 mm slot with a Ø6 mm cross hole. Their terminal knobs are reversible, and they come in clear or black and red colors. Metal parts are available in nickel or gold plating for durability and conductivity. MP770674 binding posts are used in audio devices, test equipment and power supplies. They're also RoHS compliant.

NEWARK www.newark.com/multicomp-pro/mp770674/binding-post-60ablack-red-nickel/dp/22AJ1009

WIRE-SPLICE CONNECTOR WITH WINDOW

Compact 2773 Series wire-splicing connectors employ Pushwire technology to provide a fast and reliable connection for applications. Thanks to their safety inspection window, users can also ensure they’re achieving a proper termination every time. Low insertion forces and high retention forces, as well as a large connection area for different conductor types, are reported to make 2773 suitable for any connection.

WAGO www.wago.com

TOOL-FREE, SNAP-IN TERMINAL BLOCKS

Omnimate 4.0 PCB plug-in terminal blocks in the MTS 5 series feature tool-free, Snap-In technology. Other Omnimate 4.0 devices with toolfree, Snap-In connections include power connectors for electronics up to 1,000 V; signal connectors with custom configurations available via a web-based configurator; and hybrid connectors that combine power, signal and data transmission, and use single-pair Ethernet (SPE) protocol.

WEIDMULLER USA

804-794-2877; www.weidmuller.com/en/products/connectivity/pcb_terminals_and_connectors/omnimate_4.0.jsp

PRINTABLE, SELF-LAMINATING MARKERS

DuraWrap and DuraFlag self-laminating, print-on-demand labels provide marking for wires, cables and harnesses. Made of supple, conformable vinyl and acrylic adhesive, they're available in more than 40 sizes and can be used post-termination. DuraWrap and DuraFlag can cover AWG sizes 10 through 4/0 and IEC 60228 (mm2) sizes 6 through 625, from four to 40 conductors wide. The redesigned series offers an opaque, white print-on area that ensures the legend is clearly legible.

IDENTCO www.identco.com

AUTOMATIC STRIPPER FOR CONDUCTORS

E.Fox S 10 is an automatic stripper for processing conductors. Its intuitive touchscreen lets users adjust to achieve desired cross-sections and stripping lengths. E.Fox S 10 can store up to 100 jobs, and barcodes can be created to access stored favorites for quick recall with a barcode scanner. It can also be integrated into worker assistance systems, such as clipx WIRE assist, and controlled via software.

PHOENIX CONTACT

www.phoenixcontact.com/en-us/e-fox-s-10-automatic-stripping-device

ADAPTER GROMMET WITH PROTECTIVE COVER

AT-KS-AK adapter grommets allow direct connector integration into the cable entry, and enable easy snapping in of almost all 70 Keystone-system modules. Their lockable front cover protects up to IP54. AT-KS-AK's polyamide body provides a stable, secure base for modules. If necessary, modules can be unlocked with a screwdriver, so they can route pre-assembled cables and serve as a connector interface. AT-KS-AK doesn’t require added cutouts in the housing wall.

ICOTEK CORP. www.icotek.com/en-us/products/imas-connect/at-ks-ak

COMPACT METER TESTS FIBER-OPTIC CABLES

CableMaster FO is a dedicated, pocket-sized, Power over Ethernet (PoE) power meter and cable tester, which can measure seven different wavelengths in decibels (dB) using an attenuation meter. Power can be measured in decibel-milliwatts (dBm), dB or milliwatts (mW) to determine signal strength. CableMaster FO also has a flashlight to illuminate dark wiring cabinets and a USB-rechargeable, 80-hour battery, and includes a copper RJ45 patch-cord tester for checking wire maps.

SOFTING IT NETWORKS itnetworks.softing.com/us

GREG MCMILLAN

Gregory K. McMillan captures the wisdom of talented leaders in process control, and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams, and (web-only) Top 10 lists. Find more of Greg's conceptual and principle-based knowledge in his Control Talk blog. Greg welcomes comments and column suggestions at ControlTalk@endeavorb2b.com.

Simulation scope and funding, Part 1

A multidisciplinary perspective on real-time process simulation

GREG: Real-time process simulation is a powerful tool. From engineering testing to operator training, having a virtual plant to test and train reduces risk in manufacturing facilities. We’re fortunate this month to have Marsha Wisely’s insightful and diverse experience to provide a multidisciplinary perspective on this technical topic.

She started as a process simulation engineer before joining Emerson to configure control systems, working on fleet-wide modernization projects, commissioning greenfield polymer facilities, and serving as lead automation engineer in the Emerson Interactive Plant Environment at the Rosemount office in Shakopee, Minn. Her experience spans multiple industries, primarily chemical, pulp and paper, and life sciences, and includes roles as a business development manager and product marketing specialist for Emerson's Mimic product line. Today, she is president and lead process simulation consultant for PlantWise Industrial Consulting LLC.

She joins us in this two-part series to discuss making a simulation project a reality. In Part 1, we discuss how to right size the scope of a dynamic process simulation and get a project funded. Next month, Part 2 looks at the lifecycle of a dynamic process simulation and how to stay current with production.

Marsha, what are some barriers to entry that people face when trying to get a dynamic process simulation of their facility?

MARSHA: There are three things that come to mind: scope, funding and lifecycle maintenance.

GREG: Tell me more about the challenges when it comes to scope.

MARSHA: Process simulation excites many engineers and executives. As a virtual version of your plant, there's so much you can do with it. People see the possibilities and think they need the highest-tech tools, but their actual needs may be simpler.

Imagine shopping for a car—you can get swept away by the features and glamor of a Ferrari, even though all you need is a basic car (or maybe a bike).

In dynamic process simulation, you can model all the reactions and kinetics for a high-fidelity model, or create a model with simple tiebacks, so valves open and pumps start, but that's about it.

I’ve seen projects end up in limbo for this reason. You don’t have to skip the car because you can’t afford or don’t need the Ferrari. The car will still get you to where you need to go.

GREG: How is scope set correctly?

MARSHA: I recommend identifying the problem you’re trying to solve and your timeline. For example, say you want to start a training program and you know that, in the first year, you’ll focus on standard operating procedures, and by the third year, you’ll train operators on malfunctions and more complicated scenarios. You can start small and build each year, helping to lower the initial investment and reducing maintenance, while you build your system. It allows your requirements to change (if needed) as you learn how you’re going to use your training system.

Scoping simulation projects is an area where consulting can add a lot of value. Every situation, every manufacturing plant has unique processes, equipment and needs. Consultants help you work through all the pieces, and ensure you maximize the return on your process simulation investment.

GREG: Sometimes, you still want or need the Ferrari. I've extensively used high-fidelity, first-principle, dynamic simulations to find the best boiler, compressor surge control, pH control, exothermic reactor control, furnace pressure control and biochemical reactor control. Eliminating shutdowns of boilers, compressors, exothermic reactors and furnaces saved several plants millions of dollars per year. Reducing exothermic batch reactor cycle time increased capacity and profitability by millions of dollars per year. For pH systems, the savings in pH reagent is less than $1 million a year, but eliminating pH violations covered by the Resource Conservation and Recovery Act (RCRA) was critical for continued operation under existing environmental agreements. For these simulations, or any others, how do you solve the funding challenge?

MARSHA: Think about who holds the purse strings in your organization. There may be technical decision-makers with a lot of influence, but more often, the funding for simulation projects comes from a plant manager or someone who cares about profitability.

Every day and every hour that the plant isn't producing good product, there's an opportunity cost equal to the price of the material that could have been made. The reasons you use to justify a simulator are the same reasons your executives will fund it. You just need to translate them into the language they understand, which is dollars.

Your experience with high-fidelity simulation illustrates this. Let's consider your example of reducing downtime. Once you identify the downtime reduced at a facility, you can calculate the impact in saved days or hours and product prices to determine the opportunity cost of lost production. There are additional expenses to consider as well, from contractors to customer-commitment risks to possible safety/environmental remediation costs, so the dollars add up. Consultants can ensure you capture all the costs and present numbers that will be well understood by your executives (or whoever writes the checks).
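
As a minimal sketch of that arithmetic, the back-of-the-envelope estimate might look like the snippet below. Every number in it is a hypothetical assumption for illustration, not a figure from any project discussed here:

```python
# Hypothetical downtime-justification estimate. All figures are
# illustrative assumptions, not data from a real facility.

avoided_downtime_hours = 48.0      # assumed hours of trips avoided per year
production_rate_tph = 12.5         # assumed production rate, tons per hour
margin_per_ton = 400.0             # assumed margin (price minus variable cost), USD

# Opportunity cost of lost production that the simulator helps recover
recovered_tons = avoided_downtime_hours * production_rate_tph
opportunity_savings = recovered_tons * margin_per_ton

# Secondary items Marsha mentions: contractors, customer-commitment
# risks, safety/environmental remediation (assumed lump sum)
secondary_savings = 150_000.0

total_annual_benefit = opportunity_savings + secondary_savings
print(f"Estimated annual benefit: ${total_annual_benefit:,.0f}")  # $390,000
```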

GREG: What if management doesn’t agree with the justification or your site is on a tight budget?

MARSHA: In cases where funding is more difficult or the business justification is a challenge, I suggest you start with a small simulation and grow it. A good example is a greenfield project. Budgets get tight, and you may not have the operational data to build your business justification.

Start with a simpler process simulation—medium fidelity, or material balance plus some dynamics where you really need them—so you can begin training operations and test your code prior to startup. Because your process model is less rigorous, it's less expensive, but now you have a simulation license. When you have operational data and can identify areas where you'll get the most return from a high-fidelity simulation, you already have licensing (reduced costs) and a tight scope, which will also keep costs lower.

Another lever to pull on price is licensing. Simulation software companies now offer more flexible pricing through subscription models, which helps reduce upfront capital investment.

GREG: But subscriptions mean a recurring cost.

MARSHA: True. There's an annual need to rejustify the ongoing expense, but if you have a maintenance plan that keeps the model in lockstep with production, and you actually use your simulation, it's just a matter of articulating the ROI from using the simulator.

Subscription is also an opportunity for cost-restricted teams. We mentioned earlier that the upfront capital cost is lower, helping you land the project. The annual subscription justification also keeps the simulator's value in front of management, and opens the door to more simulation projects, if the business justification is there, of course.
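
To make that annual rejustification concrete, a similarly hedged sketch (again with purely hypothetical numbers) could compare the subscription fee against the benefit the simulator delivered:

```python
# Hypothetical annual ROI check for a simulation subscription.
# Both figures are assumptions for illustration only.

annual_subscription_cost = 60_000.0   # assumed yearly software subscription, USD
annual_benefit = 390_000.0            # e.g., the downtime estimate sketched above

roi = (annual_benefit - annual_subscription_cost) / annual_subscription_cost
print(f"Annual ROI: {roi:.1f}x the subscription cost")  # 5.5x
```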

JIM MONTAGUE Executive Editor

“I don’t usually collaborate with my finance colleagues, but now that we need numbers about impacts and data to support sustainability efforts, they’re my best friends.”

Oh the humanities!

Facetime with partners is crucial for sustainability and other epic transitions

SORRY, but talking to other people appears to be unavoidable. The days of mechanical, electrical and controls engineering tossing jobs over cubicle walls to each other—and everyone avoiding IT—are long gone, especially because huge challenges like sustainability demand far more speed and flexibility.

For instance, several experts at the Digital Manufacturing and Cybersecurity Institute's (www.mxdusa.org) "Win-win of sustainability" workshop on Aug. 13 in Chicago (p.18) showed how their companies are reorganizing for profitable sustainability, but added that soft skills, cooperation and patience are essential because sustainability covers so many areas that even the largest companies can't go it alone.

“We’re seeking to reduce the impact of our operations, and scaling up solutions across industries, but we need partners to share synergies and benefits,” said Hansjörg Reick, senior global business developer at Procter & Gamble. “For example, developing a circular economy for plastics includes redesigning recycling processes, accessing collection, enabling consumers, sorting materials, and innovating with circular recycling—and each step involves multiple companies.”

Reick reported that P&G’s Sustainability Solutions approach includes developing foundational sustainability insights, R&D and intellectual property; deciding which goals need partners and recruiting them; and building techno-economic and business models. Once it finds suitable partners, P&G opts mostly for joint-development licensing partnerships and some co-investment arrangements. This lets it scale up, commercialize globally, jointly create value, and share profits fairly.

“We’re commercializing a 100% polymeric, recyclable aerosol bottle with Plastipak (www.plastipak.com) that has a reduced environmental footprint,” explained Reick. “It’s made using label-less laser marking that can process more than 300 containers per minute, and Imflux’s (www.imflux.com) constant-pressure, sensor-controlled injection molding, which produces lighter parts, uses post-consumer resins (PCR), and saves up to 15% on energy.”

David Koenigs, senior R&D leader for packaging and specialty plastics at Dow, added at the MxD event that his company plans to transform 300 million metric tons of plastic waste per year by 2030, so it can join the circular plastics economy, and use renewable energy for its mechanical, chemical and biobased recycling processes.

“Getting back to virgin polymer performance is the goal, but traditional gasification and pyrolysis need lots of energy, so we’ve been working with P&G on a proprietary process that combines depolymerization, post-consumer resin (PCR) dissolution and mechanical means to turn dirty, stinky plastic waste into white, virgin-like polyethylene,” said Koenigs. “We’re working on this now, but we can’t scale up yet, and we’re still dealing with feedstock challenges.

“This is why partnerships are so important for big tasks like sustainability. To succeed, they must have trust, transparency, and early alignment of expectations, which can also be adjusted later. This puts everyone in one boat with complementary expertise and beneficial overlaps, and the agility to learn and adapt.”

Darrell Boverhof, environment, health, safety and sustainability (EHS&S) director at Dow, added that once individuals and organizations learn their sustainability impacts, they must prioritize their carbon-reduction strategies and keep them aligned. “Sustainability is like herding cats, which is another reason why it requires the right governance and cooperation by all players,” said Boverhof. “For example, I don’t usually collaborate with my finance colleagues, but now that we need numbers about impacts and data to support sustainability efforts, they’re my best friends.”

CONTROL AMPLIFIED The Process Automation Podcast

Control Amplified offers in-depth interviews and discussions with industry experts about important topics in the process control and automation field, going beyond Control's print and online coverage to explore underlying issues affecting users, system integrators, suppliers and others in the process industries.

Check out some of the latest episodes, including:

Coriolis technology tackling green hydrogen extremes

FEATURING EMERSON'S GENNY FULTZ AND MARC BUTTLER

Ultrasonic technology takes on hydrogen, natural gas blends

FEATURING SICK SENSOR INTELLIGENCE'S DUANE HARRIS

Asset-specific insights to transform service workflows

FEATURING EMERSON'S BRIAN FRETSCHEL

Analytics enabling next-generation OEE

FEATURING SEEQ'S JOE RECKAMP

Go Beyond.

Emerson’s DeltaV™ Automation Platform provides contextualized data and unique, actionable insights so you can improve production and embrace the future of innovation—with certainty. Venture beyond. Visit Emerson.com/DeltaV
