Control – November/December 2024



DIGITAL TRANSFORMATION

Ignition’s industry-leading technology, unlimited licensing model, and army of certified integration partners have ignited a SCADA revolution that has many of the world’s biggest industrial companies transforming their enterprises from the plant floor up.

With plant-floor-proven operational technology, the ability to build a unified namespace, and the power to run on-prem, in the cloud, or both, Ignition is the platform for unlimited digital transformation.

Visit inductiveautomation.com/ignition to learn more.

When we named our industrial application software “Ignition” fifteen years ago, we had no idea just how fitting the name would become...


One Platform, Unlimited Possibilities


ALL SIDES OF CYBERSECURITY

Five system integrators show how they provide clients in multiple process industries with suitable and successful cybersecurity protections

Plant of the Year honorees push process control into the future

Systems integrator reaps rewards of derisking

Process improvement is like sailing.

With an experienced partner, you can achieve more.

Optimizing processes and maximizing efficiency is important to remain competitive. We are the partner that helps you master yield, quality, and compliance. With real-time inline insights and close monitoring of crucial parameters, we support manufacturers to optimize processes, reduce waste, and increase yield.

All sides of cybersecurity

Five system integrators show how they provide clients in multiple process industries with suitable and successful cybersecurity protections by Jim Montague

PLANT OF THE YEAR

FieldComm Group's Plant of the Year honorees illustrate the progress made in productivity, efficiency and sustainability by Len Vermillion

DESIGN & DEVELOPMENT

Rewards of derisking

System integrator Arthur G. Russell proves new product concepts, designs and development by testing, standardizing, modularizing and partnering by Jim Montague

CONTROL (USPS 4853, ISSN 1049-5541) is published 10x annually (monthly, with combined Jan/Feb and Nov/Dec) by Endeavor Business Media, LLC. 201 N. Main Street, Fifth Floor, Fort Atkinson, WI 53538. Periodicals postage paid at Fort Atkinson, WI, and additional mailing offices. POSTMASTER: Send address changes to CONTROL, PO Box 3257, Northbrook, IL 60065-3257. SUBSCRIPTIONS: Publisher reserves the right to reject non-qualified subscriptions. Subscription prices: U.S. ($120 per year); Canada/Mexico ($250 per year); All other countries ($250 per year). All subscriptions are payable in U.S. funds.

Printed in the USA. Copyright 2024 Endeavor Business Media, LLC. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopies, recordings, or any information storage or retrieval system without permission from the publisher. Endeavor Business Media, LLC does not assume and hereby disclaims any liability to any person or company for any loss or damage caused by errors or omissions in the material herein, regardless of whether such errors result from negligence, accident, or any other cause whatsoever. The views and opinions in the articles herein are not to be taken as official expressions of the publishers, unless so stated. The publishers do not warrant either expressly or by implication, the factual accuracy of the articles herein, nor do they so warrant any views or opinions by the authors of said articles.

Photo: Derek Chamberlain / Shutterstock AI

Choosing the right sensor doesn’t have to be like solving an advanced algebra equation. VEGAPULS 6X is one radar level sensor for any application, liquids or bulk solids, delivering reliable and precise measurements. Simply provide VEGA with your application details, and they’ll create a VEGAPULS 6X that’s tailored to your needs. The culmination of over 30 years of radar level measurement expertise, THE 6X® is the variable that makes level measurements simple.

Exploring the convergence of industrial process control and space technology

Train for digital transformation

As manufacturers pursue digital transformation, leaders must upskill their workforces

A bridge to the future–if you want it

Justifying and testing new fieldbus concepts

INDUSTRY PERSPECTIVE

'Boundless automation' two years on

Process automation architecture now resembles the digital world

WITHOUT WIRES

Envision procedural automation in your future

Why it's a natural fit for normal but infrequent operations

IN PROCESS

Mary Kay O'Connor Center safe at home

Also, Aveva World tackles big digitalization

Mobilizing thin-client management

Pepperl+Fuchs and Rockwell Automation debut ThinManager-ready tablets

RESOURCES

Calibration instills confidence

Control's monthly resources guide

ASK THE EXPERTS

Economics of pumping stations

How does changing speeds affect pump efficiency?

ROUNDUP

Building blocks (and I/O) multiply links

I/O modules and terminal blocks provide new sizes, signals, networking and flexibility

CONTROL TALK

Keys to successful migration projects

How to get the most out of control system migration projects

CONTROL REPORT

Embrace boring

Election judging parallels process control and automation


Len Vermillion, lvermillion@endeavorb2b.com

Jim Montague, jmontague@endeavorb2b.com

Madison Ratcliff, mratcliff@endeavorb2b.com

Béla Lipták, Greg McMillan, Ian Verhappen

A space odyssey

Exploring the convergence of industrial process control and space technology

HOUSTON, Texas, seemed like the perfect place to talk about outer space. Space City, after all, is home to mission control at NASA's Johnson Space Center, and has been the central point for decades of space exploration. But this was an industrial process control event—a largely terrestrial endeavor—so what was there to discuss about “out there”? Turns out, plenty.

Space Day at Yokogawa’s YNOW 2024 users’ conference piqued the interest of process control engineers and executives eager for collaborative opportunities. There’s opportunity and incentive for the process and space industries to learn from each other. Remember, “the needs of the many outweigh the needs of the few.” Spock would be proud. Process control conferences these days tend to focus on autonomous operations. Space is the ultimate use case for autonomous technologies. “Space is the pinnacle of remote operations. Everything we talk about in terms of making things secure, keeping workers safe, autonomous operations is applicable to space,” said Eugene Spiropoulos, senior technology strategist at Yokogawa, during a lively panel on IT/OT technology in space.

Rita Fitzgerald, rfitzgerald@endeavorb2b.com

Jennifer George, jgeorge@endeavorb2b.com

Operations Manager / Subscription requests Lori Goldberg, lgoldberg@endeavorb2b.com

VP/Market Leader - Engineering Design & Automation Group

Keith Larson

630-625-1129, klarson@endeavorb2b.com

Group Sales Director

Amy Loria

352-873-4288, aloria@endeavorb2b.com

Account Manager

Greg Zamin

704-256-5433, gzamin@endeavorb2b.com

Account Manager

Kurt Belisle

815-549-1034, kbelisle@endeavorb2b.com

Account Manager

Jeff Mylin

847-533-9789, jmylin@endeavorb2b.com

Subscriptions

Local: 847-559-7598

Toll free: 877-382-9187

Control@omeda.com

Jesse H. Neal Award Winner & Three Time Finalist

Two Time ASBPE Magazine of the Year Finalist

Dozens of ASBPE Excellence in Graphics and Editorial Excellence Awards

Four Time Winner Ozzie Awards for Graphics Excellence

However, he and fellow panelists pointed out the convergence of space tech and industrial controls is bidirectional. Andrea Course, digital innovation program manager at Shell, flipped the discussion when she asked, “What can we learn in space that we can use here on Earth?”

While the resulting autonomy and efficiency should be enough to make business leaders salivate, engineers are aware that mirroring the two types of technologies will take some work. For it to happen, there must be more investment in intelligent devices and automation in the process industries. The biggest challenge of adapting space tech for use on Earth is that products are built specifically for the demands of space. “So you have to get creative,” said Wogbe Ofari, founder and chief strategist at WRX Companies. Engineers are problem solvers, and just as they once cracked the codes to get men to the moon 55 years ago, adapting technology built specifically for the demands of space and converting it for a refinery, deepwater drilling or a chemical processing plant is well within our reach.

So maybe it’s time to reach from the stars.

"There’s opportunity and incentive for the process and space industries to learn from each other."
DUSTIN JOHNSON

Technology Officer, Seeq
“Companies that invest in modern technologies and upskilling can expect measurable improvements in productivity and efficiency, while also attracting and retaining a more motivated workforce.”

Train for digital transformation

As manufacturers worldwide pursue digital transformation, leaders must upskill their workforces

AFTER many digital transformation projects consisting of both successful adoptions and efforts stuck in pilot purgatory, manufacturers in many industries are recognizing that success requires more than adopting new technologies. It should be no surprise that the technical parts of digital transformation are far less important than its business and human aspects. In fact, the most important technological consideration is how well it supports people to adopt new practices and behaviors in pursuit of efficient workflows and increased business value.

Despite this understanding, many business leaders still initiate digital transformation programs without clearly outlining and communicating the rationale, business impacts or nature of changes and the steps for achieving them. They also commonly overlook the upskilling required to empower their workforce to use these new technologies, risking not only a subpar return on investment, but also contributing to today’s widening skills gaps.

The most forward-thinking organizations invest in innovative technologies, such as advanced analytics and generative artificial intelligence (genAI), which improve workflow efficiency, use of operations data and manufacturing insights, while also providing industry-relevant, just-in-time learning on demand. By pairing digital transformation with user empowerment, these companies create insights, and decrease the learning curve to understand them, yielding better business results, while easing the tasks of personnel who perform them.

Today’s workforce challenges

The shortage of skilled labor across the industrial sector presents a fundamental challenge for organizations implementing long-term digital transformation. While the most knowledgeable employees are retiring, industry is simultaneously losing talent to more popular fields that address evolving employee expectations more effectively. According to the World Economic Forum's "Future of Jobs Report 2023," 60% of workers will require training before 2027, but only half have access to adequate training opportunities. The same report found that, in all industries collectively, training workers to use AI and big data ranks third among company upskilling priorities over the next five years.

Cross-industry surveys find large salaries are no longer enough to retain employees. Workers recognize that longevity in a sector and their future employability depend on access to and successful adoption of new technologies. According to Deloitte, manufacturing employees are nearly three times less likely to leave an organization in the next year if they believe they can acquire skills that are important for the future. Similarly, Deloitte predicts that nearly 2 million manufacturing jobs could remain unfilled if organizations don’t address these skills gaps.

Providing the right technology

Successful digital transformation requires personnel to use new tools effectively to improve workflows. Companies that invest in modern technologies and upskilling can expect measurable improvements in productivity and efficiency, while also attracting and retaining a more motivated workforce.

For industrial organizations, modern, advanced analytics platforms are optimized to connect disparate data sources, so users can seamlessly combine and interrogate information regardless of its origin. These platforms provide a combination of intuitive, self-service tools for data cleansing, time-stamp alignment and contextualization, empowering subject matter experts (SME) to quickly derive reliable insights referencing all available data. With live connectivity built into the software, SMEs can apply their analyses to near-real-time data, whether it's stored in the cloud or onsite.

Access to these platforms lets users make tangible impacts in their organizations. For example, they can optimize operational efficiency and increase uptime, conduct root-cause analyses and mitigate issues, and/or monitor sustainability key performance indicators (KPI) in real time to decrease waste and emissions. Despite the relative ease of using these tools, companies must ensure their users are provided with sufficient training in the software's functions and features, and in general data analytics principles.
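As a concrete illustration of the time-stamp alignment these platforms automate, here is a minimal, hypothetical sketch using the open-source pandas library; the tag names and values are invented and are not tied to any particular vendor's platform.

import pandas as pd

# Historian flow readings and slower lab purity results, sampled at different times
flow = pd.DataFrame({
    "time": pd.to_datetime(["2024-10-01 08:00", "2024-10-01 08:05", "2024-10-01 08:10"]),
    "flow_m3h": [41.8, 42.3, 42.1]})
lab = pd.DataFrame({
    "time": pd.to_datetime(["2024-10-01 08:04", "2024-10-01 08:11"]),
    "purity_pct": [99.2, 99.4]})

# Align each lab sample with the most recent flow reading (backward as-of join)
aligned = pd.merge_asof(lab.sort_values("time"), flow.sort_values("time"), on="time")
print(aligned)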

Traditionally, a primary hurdle for effective technical training was a lack of time, especially in sustained blocks, to engage in formal training courses, which often required multiple days offsite. Today’s technologies, however, pioneer new ways for industrial organizations to upskill their workforces, improving adoption with more time focused on value-added activities.

Generative AI and technology’s new frontier

Recent investments in genAI demonstrate this new approach. GenAI lets workers obtain faster results, while reducing the need for highly technical education and training. For example, engineers without formal training in programming languages can benefit from embedded and standalone genAI solutions, which lower the barrier to setting up complex analyses with advanced algorithms. This helps establish better project understanding, and facilitates improved collaboration with data-scientist colleagues.

Organizations must also ensure that users understand fundamental principles of interpretation, such as how to validate results. When enabling more workers to deploy complex machinelearning (ML) algorithms using simple prompts, it’s critical to precede that with the knowledge, understanding and ability to evaluate model validity and trustworthiness. It’s essential to pair every new technological deployment with a comprehensive and welldefined training path to ensure efficient and accurate outcomes.
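To ground the point about validating results before trusting them, a minimal sketch of held-out validation follows, using scikit-learn on synthetic data; the model choice and data are illustrative assumptions, not a recommendation from the article.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for historian features and a quality target
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

# k-fold cross-validation scores the model on data it was not trained on,
# exposing overfitting before anyone acts on the predictions
model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("R^2 per fold:", scores.round(3), "mean:", round(scores.mean(), 3))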

Achieving success with advanced ML techniques and AI implementations requires giving workers sufficient training in these essential areas. Research shows that e-learning paths often yield better results than traditional classroom instruction, emphasizing the importance of access to resources while the work is performed. When delivered as part of an intentional and sustained upskilling program, this capability can also enhance the likelihood of successful adoption.

Putting it in practice

Lowering the barrier for employees to engage in tasks and approaches that support agile and digitalized workforces often incites organic transformation initiatives at the grassroots, including scaling pilots and even starting new ones. Seeing these direct impacts on the business, in addition to pathways for career growth, drives and sustains personnel’s motivation in their roles, which simultaneously improves worker satisfaction and retention.

One U.S. multinational oil-and-gas company created a structured adoption plan to roll out AI-informed, advanced analytics technology across more than 50 sites globally. As adoption increased and experts in the technology emerged, the company followed a strategy known as “train the trainer,” which prioritizes first empowering internal champions, who understand where the technology should be implemented. These champions then facilitate training, and share tangible successes with new learners in the organization. Thanks to this community-led approach, the company has more than 4,000 unique users per month in the software platform, who can identify process anomalies and monitor equipment.

As with all new practices and procedures, fostering successful adoption requires providing staff with the right skills and training. The most successful digital transformation adopters are prioritizing just-in-time learning, empowering their workforces to use innovative technologies, which, in turn, motivates employees to pursue career growth, and push the boundaries of what’s possible for the business.

Photo: Gorodenkoff / Shutterstock.com
JOHN REZABEK Contributing Editor
JRezabek@ashland.com
"Remote I/O is expensive, and requires complicated and worrisome power strategies to match Foundation Fieldbus’ two-wire power redundancy."

A bridge to the future–if you want it

Justifying and testing new fieldbus concepts

FRANK was among a minority of plant maintenance supervisors who had a large installed base of fieldbus devices, predominantly Foundation Fieldbus. Proficiency in fieldbus was vanishing from his company and suppliers due to retirements and reassignment to current marketing hype. There was also increasing attrition among his decades-old instruments, many of which were in service and under power for that entire period.

Frank had a few technicians, who were trained and knew the procedure for replacing one device with another that was one or more generations beyond. Some nuances needed engineering—such as when control function blocks from early generations needed to be reconfigured with more recent options. However, fieldbus looked like it was going the way of pneumatics because fewer and fewer individuals knew how to help with troubleshooting and repairing it.

All migrations are costly and typically have zero payout—obsolescence avoidance could arguably make the plant more reliable, but the chances of citing likely economic impacts wouldn’t finance the infrastructure change. One of Frank’s manufacturing reps thought he could revert to 4-20 mA/HART devices. That was challenging to accommodate on a one-by-one basis since there were not nearly enough twisted-pair cables in the field to bring in more than a few dozen devices.

Remote I/O is expensive, and requires complicated and worrisome power strategies to match Foundation Fieldbus' two-wire power redundancy. What would he need at the system side to bring in remote I/O, and did he have room for additional cards? It would have to be good for hazardous areas, and he would be gambling on whether his old cable would be adequate for the remote I/O backhaul.

Frank might be intrigued now if he became aware of an effort by the FieldComm Group to describe a concept for migrating Foundation Fieldbus H1 (bit.ly/migratingH1) via the emerging (but far from mainstream) Ethernet Advanced Physical Layer (APL, www.ethernetapl.org). APL is two-wire Ethernet with power that can be deployed in hazardous areas.

Roughly a decade ago, the Fieldbus Foundation merged with the HART Communications Foundation to form the FieldComm Group, and it's been a leading organization developing standards for new field communications. The founding members are the same that were devoted to developing and marketing fieldbus, though the players have changed. If you’ve been a subscriber to the fieldbus vision—digital integration of field devices over a twisted-pair bus—you might feel like these technology providers had an obligation to define a way forward.

Foundation Fieldbus H1 was originally conceived to have H2, and later morphed into high-speed Ethernet (HSE). H2/HSE bridged the H1 networks together, and transparently supports the fieldbus protocol. But incarnations of this architecture were scarce, and it didn’t win supporting DCSs many projects. Instead, proprietary remote I/O supporting legacy, point-to-point wiring became the solution-du-jour thanks largely to support from “big oil” and speedy adoption by EPC firms. At the end of the day, suppliers must sell their wares—it’s how they achieve positive cash flow. This cruel but inexorable equation is why support for Foundation Fieldbus isn’t abundantly promoted in any supplier’s portfolio; it was outshone by simpler technology in proportion to the cost of manufacturing, marketing and support.

It's encouraging that FieldComm Group has invested the time and resources to publish its TR10365 concept for migrating the installed base of fieldbus devices. So I’ll continue next month imagining what sort of choices we’ll be making, and how to justify testing and investing in new concepts.


'Boundless Automation' two years on

As the operational technology (OT) world continues its co-option of IT best practices, the architecture of process automation systems is starting to look less like the multi-layer Purdue model of yesteryear and more like that of the rest of the digital world: purpose-built smart devices connected to virtualized networks of local computing power, complemented by essentially unlimited cloud-based support. Emerson announced its own “Boundless Automation” vision of industrial automation—field, edge and cloud—at its Emerson Exchange user group event two years ago, and to check in on progress to date, Control sat down with Emerson CTO Peter Zornio for a discussion that also ranged from generative AI to next year's Emerson Exchange in San Antonio.

Q: Hard to believe, but it's already been two years since you and the rest of the Emerson team unveiled the vision of Boundless Automation at Emerson Exchange 2022 in Grapevine, Texas. To start things off, can you summarize the essential concepts of Boundless Automation and its implications for the industrial enterprise?

A: Boundless Automation is fundamentally Emerson's vision of how we see the next generation of automation technology. It's also our vision for overcoming the issues introduced by how industry has implemented automation solutions these past 25 years. In that paradigm, there's typically a big data source on the production side, most often the automation system, but then there are software applications for reliability, for quality, and now for sustainability being put into place. What typically happens is that each one of those functional departments ends up creating its own silo of data that is tied to the databases and the architecture structures in their own area.

What we believe needs to happen is implementation of a common data fabric that allows you to get at all that data—not just from the automation world, but from those other silos—in a manner that is preintegrated with a common context. So, the majority of your data is there in one fabric, easily accessible by whatever kind of application needs it. This allows you to simultaneously optimize operations across various domains instead of being focused on just one of them. And that's increasingly important to customers as they fight to operate efficiently and cost competitively while increasing production, even as they add sustainability and other metrics to their goals.

Q: The year 2022 also marked Emerson taking a big equity stake in AspenTech, the acquisition of Inmation and an announcement by your CEO Lal Karsanbhai of Emerson’s intent to divest all those commercial and residential units and focus solely on industrial automation. What milestones have been achieved since?

A: Just a year ago, we released what we call the DeltaV Edge, which is a new product in the DeltaV portfolio. It's a node that sits on the DeltaV system and allows DeltaV data to pass through a one-way communications protocol—or even a physical data diode, if you like—to populate another set of processors running edge-based technology. It’s a node that uses what's called HCI, or hyperconverged infrastructure, which is a next-generation version of virtualization. What this node lets us do is create an entire digital twin of the DeltaV system, not just the process data, but alarm data, event data and configurations—essentially the complete context of the DeltaV data on the open edge side of that node.

PETER ZORNIO
CTO Emerson

Once you’re there you have access to all the more modern edge analysis tools. We added new capabilities for exporting the data via standards like JSON and MQTT, in addition to the OPC standards that we often use in the automation world. Another part of the Boundless Automation vision is allowing control to actually run on edge platforms, instead of only on purpose-built hardware. And we’ll have our first releases of that by the end of the next calendar year.
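For readers unfamiliar with how such exports typically look, here is a generic, hypothetical sketch of publishing a process value as JSON over MQTT with the open-source Eclipse paho-mqtt client; the broker address, topic and tag are invented and do not represent Emerson's actual interface.

import json, time
import paho.mqtt.client as mqtt

# Connect to a local broker (paho-mqtt 1.x constructor; 2.x also needs a CallbackAPIVersion argument)
client = mqtt.Client()
client.connect("broker.example.local", 1883, keepalive=60)

# Publish one process value with units and a timestamp as a JSON payload
payload = json.dumps({"tag": "FIC-101.PV", "value": 42.7, "units": "m3/h", "timestamp": time.time()})
client.publish("site/area1/unit2/FIC-101/PV", payload, qos=1)
client.disconnect()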

Q: It’s certainly no secret that the industries Emerson serves are facing a bit of an expertise crisis, with the waves of experienced operators and engineers retiring. What role does artificial intelligence, especially in the form of generative AI, promise to play in complementing the more traditional, analytics-based advisory tools like Plantweb Insight?

A: First, I’d remind your readers that industrial automation was AI long before AI was cool. Within AspenTech especially, the vast majority of their portfolio is around optimization, advanced control, planning and scheduling. That's all built on data models, but also first-principles models used in the design and simulation of process plants. What’s different from the AI we were using 25 years ago is that instead of using a narrowly defined algorithm trained on a specifically relevant set of data, generative AI uses a general-purpose model trained on broad data sets to answer a whole variety of questions.

In terms of the products Emerson develops and the value we can deliver to our customers, I think in terms of three buckets. The first is in the area of customer support: anyone who supplies an industrial or consumer product is working on developing an interactive copilot to guide its users. The second area is in the configuration of products. Industrial products, especially software and systems, require considerable site or application-specific configuration. We actually have a product today that can take configurations of older, competitive control systems and, using AI, quickly translate them into modern configurations for DeltaV. But I think the AI application that stirs everyone's imagination is the idea of an actual operations copilot that you can talk to—an interactive operator or operations advisor that you can ask for input on anything.

Boundless Automation consists of three complementary domains–field, edge and cloud–that work closely together to accomplish control and optimization tasks.

Q: I’m looking forward to connecting in person again next spring when the Exchange community will be gathering in San Antonio from May 19 to 22. Other than moving from what has usually been a fall cadence to the spring, any particular innovations and new things we should look out for?

A: Exchange attendees get to see in action all the things we just talked about—and more. But most of our users who participate say the biggest takeaway is the interaction with the other users they meet there, many of whom are facing the same challenges they are.

Solutions Architect

Willowglen Systems

Ian.Verhappen@willowglensystems.com

"Looking at procedural automation as a logical sequence of steps makes it a natural fit for normal but infrequent operations."

Envision procedural automation in your future

Why it’s a natural fit for normal but infrequent operations

THE definition of procedural automation is the “implementation of a specification of a sequence of tasks with a defined beginning and end that’s intended to accomplish a specific objective on a programmable mechanical, electric or electronic system.” By itself, this doesn’t say much, and you could argue it’s what any control program or application is meant to accomplish. However, the key words are “beginning and end.”

From an operational perspective, a procedure is one or more “implementation modules,” each consisting of a set of ordered tasks to provide plant operations with step-by-step instructions for accomplishing (implementing) and verifying the actions to be performed.

Looking at procedural automation as a logical sequence of steps makes it a natural fit for normal but infrequent operations. This includes startup, shutdown, product change, alarm response, abnormal situation response, complex or repetitive operations, equipment isolation and return to service, and regulatory requirement support.

Two automation styles are used to determine transition for procedural automation:

• State-based control: a state of a module, such as block valve open/close, PID manual/auto/cascade, and pump run/stop.

• Sequence-based control: a set of actions in which the behavior of a procedure implementation module follows a set of rules with respect to its inputs and outputs.

There are also three degrees of automation and means of performing each task:

1. Manual—the operator is responsible for command, perform and verify work items.

2. Semi-automated—implementation modules are considered semi-automated, with operators and computers sharing coordinated responsibility for command, perform and verify work items.

3. Fully automated—implementation modules are considered fully automated when the computer is responsible for the bulk of the command (see the sketch after this list).
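As a rough illustration only, the following Python sketch shows how a sequence-based procedure can be expressed as ordered command/verify steps; the pump-startup steps, tags and thresholds are hypothetical and far simpler than a real procedural-automation implementation.

from dataclasses import dataclass
from typing import Callable, List

# Simulated plant state so the example runs standalone
plant = {"suction_valve": "closed", "pump": "stopped", "flow": 0.0}

@dataclass
class Step:
    name: str
    command: Callable[[], None]   # perform the action
    verify: Callable[[], bool]    # confirm the expected end state

def run_procedure(steps: List[Step]) -> None:
    # Execute steps in order; hold and hand back to the operator if any verification fails
    for step in steps:
        step.command()
        if not step.verify():
            print(f"Step '{step.name}' failed verification; holding for operator.")
            return
        print(f"Step '{step.name}' complete.")
    print("Procedure finished.")

startup = [
    Step("Open suction valve",
         lambda: plant.update(suction_valve="open"),
         lambda: plant["suction_valve"] == "open"),
    Step("Start pump",
         lambda: plant.update(pump="running", flow=42.0),
         lambda: plant["pump"] == "running"),
    Step("Confirm minimum flow", lambda: None, lambda: plant["flow"] >= 30.0),
]

run_procedure(startup)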

Implementing and maintaining procedural automation also comes with a cost, so there must be a corresponding benefit. A representative list of benefits arising from procedural automation includes:

• Improved safety performance—automating procedures and utilizing state awareness for alarm management reduces the workload on the operations staff during abnormal conditions, which reduces the probability of human error.

• Improved reliability—automated procedures can aid in maintaining maximum production rates, minimizing recovery time and avoiding shutdowns.

• Reduced losses from operator errors—automating procedures enables operations staff to standardize operating procedures. A standardized approach reduces the likelihood of human error contributing to abnormal conditions, and reduces the time required to recover from abnormal conditions.

• Increased production by improving startups and shutdowns—operations benefit by achieving faster, safer and more consistent startup and shutdown of processes.

• Increased production and quality via efficient transitions—process transitions from one condition to another during normal operations are accomplished with reduced variability in less time.

• Improved operator effectiveness—reduces the time an operator spends carrying out repetitive tasks.

• Higher retention and improved dissemination of knowledge—automated procedures can be used to retain the knowledge of the process.

• Improved training—as knowledge and best practices are captured into automated procedures, the resulting documentation and code can be used for training new operators about processes.

Mary Kay O’Connor Center safe at home

Safety and risk conference draws close to 400 visitors to numerous presentations

BUILDING on the momentum of its recent in-person and virtual events, the Mary Kay O’Connor Process Safety Center (MKOPSC) at Texas A&M University’s (TAMU) Engineering Experiment Station (TEES) attracted close to 400 attendees and 30 exhibitors at its Safety and Risk Conference 2024 (mkosymposium.tamu.edu) on Oct. 21-24 in College Station, Tex. The event was co-located with the 79th annual Instrumentation and Automation Symposium and the first Ocean Energy Safety Day.

Plucked from among the event’s many presentations and exhibits, MKOPSC’s steering committee also bestowed several awards at the conference, including:

• Harry West service award to Mark Slezak, process safety and risk engineering manager at Oxy;

• Lamiya Zahin memorial safety award to Austin Johnes, graduate student at TAMU;

• Best paper award to Ganesh Mohan, process safety and risk manager at Chevron;

• Most innovative booth award to Oxford Flow;

• Best overall booth to ProLytx Engineering; and

• Best booth honorable mention to Softek Engineering.

Getting more granular on safety

Steve Horsch, technical leader at Dow’s 58-year-old Reactive Chemicals Group, presented “Using calorimetry to understand reactive chemical hazards,” and reported most users are aware they need to know reactions and rates to avoid containment losses when manufacturing chemicals, but few know about more subtle measurement nuances that can alter those reactions and rates.

For example, a simple equation can express measuring heat from accelerating rate calorimetry (ARC), but more complex calculations are needed to demonstrate constant-volume heat capacity. “Likewise, if a user needs to know how much energy is coming from the heat of a reaction, the real-world process may be producing 315 joules per gram (J/g), while the ARC only shows 240-260 J/g, which means the resulting time requirements will be off,” explained Horsch. “This is why it's crucial to examine what's providing data, where it's coming from, and whether it matches your actual scenario. Reaction kinetics from an ARC can be corrected using Fisher's equation, but there may also be more complex reaction kinetics due to external heat pooling, which is a big-deal nuance. It's not easy to collect the right data, but if you don't have it, then you won't get the right result. ARC can provide a lot of information to determine safe operating limits, but its limits must also be understood.”
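For readers unfamiliar with such corrections, one widely cited adjustment for calorimeter thermal inertia is the phi factor associated with Fisher's work; whether this is exactly the correction Horsch meant is an assumption, and the numbers below are illustrative only:

$$
\phi = \frac{m_s c_s + m_b c_b}{m_s c_s}, \qquad q_{\text{corrected}} \approx \phi \, q_{\text{measured}}
$$

where $m_s, c_s$ are the sample's mass and heat capacity and $m_b, c_b$ those of the test cell. With a measured exotherm of roughly 250 J/g and $\phi \approx 1.25$, the corrected value is about 313 J/g, consistent with the gap between the 240-260 J/g an ARC reports and the 315 J/g the real process can produce.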

Faisal Khan, director of the Mary Kay O'Connor Process Safety Center (MKOPSC) at Texas A&M University's (TAMU) Engineering Experiment Station (TEES), introduces the speakers at the Safety and Risk Conference 2024 on Oct. 21-24 in College Station, Tex. Source: Jim Montague

Data-aided safety culture

In her presentation, “Learnings and improvements in process safety,” Bridget Todd, VP of enterprise health, safety and environment (HSE) at Baker Hughes, reported that its Enterprise Risk Avoidance program includes KPIs that can be tracked quarterly, reported back to managers, and contribute to high-level HSE audits. To reach managers and get them to follow up on these practices, the risk avoidance program also requires competence evaluations to make sure participants are doing their jobs capably and safely.

For instance, Todd explained that one Baker Hughes client is an artificial lift team that was experiencing some communications and management problems, and found it had gaps in documenting its pump performance and pressure parameters. It subsequently engaged more closely with its front-line staff on risks and mitigations, and reminded all players why checking and documentation are important.

“Open, common metrics showed leading and lagging indicators, but it also demonstrated training compliance and engagement levels,” concluded Todd. “This enabled tighter governance of all their process safety tasks.”

For expanded coverage, visit www.controlglobal.com/MKOPSC2024

Aveva World tackles big digitalization

With more than 3,800 visitors onsite and 1,000 more live-streaming online, Aveva World 2024 in Paris on Oct. 14-16 was reported to be the largest event in its history—all focused on overcoming barriers to using industrial software and digitalization to bring real results and value to the physical world.

“We can't share data among applications and users by continuing to work in silos. That era has to be over, especially in the world of industrial software,” said Caspar Herzberg, CEO at Aveva (www.aveva.com), which is now wholly owned by Schneider Electric (www.se.com). “The two big changes lately are the scale and nature of changes occurring, such as climate change. Most people don't understand how bad it's gotten, but the solution is humanity collaborating across borders and industries.”

For instance, Norway-based Elkem (www.elkem.com) operates 15 plants and 19 control centers to produce its silicon-based products. Over the years it's adopted several Aveva software packages, such as Unified Operations Center (UOC) for aggregating and displaying processes on the plant-floor at Elkem's Bremanger facility.

“We can analyze complex data from Aveva’s applications on one platform, and integrate results from competitors’ software, too," added Herzberg. "This drives efficiency more quickly, so users can cut emissions sooner, and develop the faith and confidence they need to collaborate.”

Jean-Pascal Tricoire, chairman of Schneider Electric; Matei Zaharia, professor at the University of California, Berkeley, and CTO of Databricks; and Caspar Herzberg, CEO at Aveva, compare notes on large-scale data analytics at Aveva World 2024 on Oct. 14-16 in Paris. Source: Jim Montague

Rob McGreevy, chief product officer at Aveva, reported that multiple early adopters are already using the Connect platform and AI to optimize their processes. These users include:

• Talison Lithium produces 40% of the world's hard-rock lithium, and it's employing Connect to reduce energy costs, improve efficiency and yield, increase uptime and reliability, and empower its workforce.

• etap and Red Sea Resorts are using Connect on renewable energy projects to maximize utilization, reduce energy losses, and improve reliability.

• BAE Systems and Visony Production use Connect to enable real-time operations on floating production, storage and offloading (FPSO) vessels, such as optimizing fuel consumption to save £35-140 million.

This unified platform is likely to combine technologies from Aveva, Schneider Electric and Databricks (www.databricks.com), which are planning to release a joint solution in January 2025. In fact, Herzberg's opening-day keynote was punctuated by an onstage discussion with Jean-Pascal Tricoire, chairman of Schneider Electric, and Matei Zaharia, professor at the University of California, Berkeley, and CTO of Databricks, which supplies an AI-enabled, data-intelligence platform.

“The two big items that come up with customers lately are digitalization and decarbonization, but these efforts require partnerships to get their data right,” said Tricoire. “Digitalization is constantly shifting the human role, not to mention the way we live, so how can the C-level convince users that adopting it will let them do what they couldn't do before?”

Zaharia added, “AI makes massive-scale deep learning possible, which can generate better-quality data, results and decisions. We're excited to see how the new Aveva Connect unified, cloud-based industrial intelligence platform will take on challenges like sustainability.”

“Applying AI to the SCADA data from an FPSO's pumps can show vibrations and current draws more easily,” added McGreevy. “Connect can also provide digital twins, including 3D models of pumps on the ship, and what devices need to be fixed.”

SIGNALS AND INDICATORS

• Emerson (www.emerson.com) announced Oct. 16 that it will provide pressure and temperature transmitters, ultrasonic gas leak detectors, pressure regulators, pressure safety valves and hydrogen expertise to HyIS-one’s new hydrogen refueling and storage facility in Busan, South Korea, which is reportedly the nation’s largest hydrogen refueling station for commercial vehicles, such as trucks and buses. The completed station will be able to store up to 1.5 tons of pressurized hydrogen, with a throughput of 350 kilograms per hour, or the capacity to charge more than 200 vehicles per day.

• Seeq (www.seeq.com) reported Oct. 16 that it's natively integrating its data analytics software with Aveva's (www.aveva.com) Connect industrial intelligence software to simplify access to operational data in context, and enable users to improve their operations and production.

• Ametek Inc. (www.ametek.com) reported Oct. 31 that it's acquired Virtek Vision International (virtekvision.com), which provides laser-based projection and inspection systems. Virtek will join Ametek's Electronic Instruments Group (EIG), and enable a wider range of automated, 3D scanning and inspection products by its Creaform (creaform3d.com) division.

• Following its recent acquisition of Senix Corp. (www.senix.com), BinMaster (www.binmaster.com) reported Oct. 24 that it will manufacture and market its own line of ultrasonic distance and liquid-level sensors. The former Senix Ultrasonics line of ToughSonic TS-100 and TS-200 noncontact ultrasonic sensors will be offered in general-purpose and chemical-resistant models suitable for measuring distances up to 50 feet.

• Obrist Group (www.obrist.at) announced Oct. 30 that it’s licensing its pool of more than 252 filed and 128 granted worldwide patents to developers of sustainable energy solutions. Its intellectual property portfolio spans a range of technologies from electric vehicles with ranges exceeding 1,000 kilometers to synthetic fuels to large-scale facilities for converting solar energy into hydrogen and methanol.

• To expand further into Canada, Motion Industries Inc. (www.motion.com) agreed Oct. 29 to acquire Stoney Creek Hydraulics (www.schydraulics.ca), which specializes in precision hydraulic and pneumatic cylinder manufacturing and repairs.

ALL SIDES OF CYBERSECURITY

Five system integrators show how they provide clients in multiple process industries with suitable and successful cybersecurity protections

BECAUSE the process industries and their control applications are famously diverse, it might be foolish to think they can learn much from each other. They feature starkly different processes and priorities, so they also face different cyber-threats and responses based on those priorities. However, as their users concentrate more intently on cybersecurity—and as they increasingly digitalize—common themes emerge that let them sharpen their focus on best practices they all can use.

“We still see many clients who aren’t adopting network segmentation to the level we'd like, so we’re still blocking and tackling,” says Scott Christensen, cybersecurity practice director at GrayMatter (graymattersystems.com), an industrial technology solutions consultant in Pittsburgh, Pa., and a certified member of the Control System Integrators Association (CSIA, www.controlsys.org). “This is changing priorities because users are putting visibility into their systems first, asking what assets they own, what PLCs do they have where, and how can they add value by operating more securely?”

Over the years, GrayMatter has helped about 1,500 water utilities implement their PLCs, SCADA/HMI systems, control panels and predictive analytics. However, many remain woefully underfunded and understaffed when it comes to cybersecurity. Some help may come to plant floors from cool, new cybersecurity tools like zero-trust strategies that are emerging on the information technology (IT) side. However, some system integrators report these promised blessings have yet to show up on the operations technology (OT) side. The two main reasons are that many process-industry manufacturers are adopting more basic cybersecurity measures first, and they remain reluctant to cede control of their networks to software-based functions.

“Zero-trust is still new, but it's getting popular among IT users because it lets them more easily identify and decide who can join their networks, allow functions they do need, and block what they don't need,” says John Peck, OT security manager at Gray Solutions LLC (graysolutions.com), a CSIA-certified system integrator in Lexington, Ky. “Zero-trust isn't on the OT side yet because many users are concerned it and other recent cybersecurity tools could cause outages and safety issues, including unplanned shutdowns of their processes and equipment.”

First task: don’t call Russia

Even though cybersecurity begins with asset awareness, Christensen reports many asset inventories it depends on are old, incomplete, and don't have enough data about device behaviors. For example, GrayMatter recently worked with a large liquid natural gas (LNG) facility that had plenty of cybersecurity capabilities, including costly firewalls, intrusion detection, and OT visibility tools. However, within the first few minutes of GrayMatter's visit, it found five PLCs that had been trying to dial Russian IP addresses for about six months. Christensen reports that GrayMatter remediated this situation by increasing visibility into the LNG company's assets and their behavior, and tuned their baseline intrusion-detection for greater accuracy.

“We evaluated what worked best in this situation, and for this LNG application, what proved to be the most useful was contextual filtering right below its perimeter firewall and in front of its assets, groups and facilities,” says Christensen. “This type of filtering detects behaviors outside of its baseline such as geofencing, and only allows traffic that originates from or is destined for the U.S. or other predefined locations. It establishes rules that authorize participants to join my network. All other traffic is blocked. In fact, we’ve learned that 60% of network traffic is usually white noise, and contextual filtering drops this out, too, which also improves latency, bandwidth and flexibility. I think the biggest lesson for everyone is that no one is running an industrial network security program that checks all the boxes. There’s always room to improve.”
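To illustrate the kind of allow-by-origin rule Christensen describes, here is a minimal Python sketch; the country lookup, addresses and rule set are invented placeholders rather than GrayMatter's actual tooling, and production systems enforce this in firewalls or network monitors, not scripts.

# Contextual filtering sketch: permit traffic only between approved locations
ALLOWED_COUNTRIES = {"US"}

# Stand-in for a GeoIP lookup; a real deployment queries a GeoIP database or firewall feed
GEO_TABLE = {"198.51.100.23": "US", "203.0.113.7": "RU"}

def country_of(ip: str) -> str:
    return GEO_TABLE.get(ip, "UNKNOWN")

def permit(src: str, dst: str) -> bool:
    # Drop anything that does not originate from or terminate in an approved location
    return country_of(src) in ALLOWED_COUNTRIES and country_of(dst) in ALLOWED_COUNTRIES

flows = [("198.51.100.23", "198.51.100.23"),
         ("198.51.100.23", "203.0.113.7")]   # a PLC dialing an offshore address
for src, dst in flows:
    print(src, "->", dst, "ALLOW" if permit(src, dst) else "BLOCK")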

History, context and shifting perimeters

One of the most useful ways users can determine the right cybersecurity approach is learning about its increasingly lengthy history and how more recent events continue to affect it. For example, early breaches due to malware like Stuxnet, Triton, NotPetya and many others continued to evolve in the wake of seemingly unrelated world events like the COVID-19 pandemic and Russia's invasion and war in Ukraine.

“The pandemic drastically increased remote connections into operating environments, and initially many weren't done using the most secure practices,” says Larry Grate, business development director for OT infrastructure and security at Eosys Group (www.eosysgroup.com), a CSIA-certified system integrator in Nashville, Tenn. “Most were driven by the need to support operations without the risk of having employees physically come into manufacturing environments. However, during the past few years, we've seen a lot of work on remediation and securing remote access.”

Eosys works mostly in the food and beverage, chemicals, pulp and paper, and steel industries, but it's also a member of the Manufacturing Information Sharing and Analysis Center (mfgISAC.org), which is a cybersecurity threat-awareness and mitigation community for small, medium and enterprise-level manufacturers.

WHAT’S NEW WITH CYBERSECURITY STANDARDS?

Now that they've been around for more than a few years, many of the best-known cybersecurity standards and guidelines have been added to, updated and refreshed a few times. Here's their present status:

ISA/IEC 62443—Several sections of the standard have been revamped or introduced recently. They include:

• 62443-2-1 that was published in August, and covers cybersecurity program policies, requirements and procedures for industrial automation and control system (IACS) asset owners;

• 62443-3-3 that revised its guidance on system security requirements and capabilities needed to construct an IACS that meets security level targets, and shows users how to gauge their progress; and

• 62443-4-2 that further refined its cybersecurity requirements for components in control systems described in ISA-62443-3-3, as well as its emphasis on secure development lifecycles.

NIST Cybersecurity Framework (CSF) 2.0 and its supplementary resources were launched in February, following a multiyear update. It explicitly aims to help all organizations manage and reduce risk, not just those in its original target audience of critical infrastructure. CSF 2.0 also updated its core guidance, and created a suite of resources to help organizations achieve their cybersecurity goals, with added emphasis on governance and supply chains. In addition, NIST published its Special Publication (SP) 800-50r1 (Revision 1), “Building a cybersecurity and privacy learning program,” which provides updated guidance for developing and managing a robust cybersecurity and privacy learning program in the federal government.

EU NIS2 Directive—The European Union (EU) adopted Oct. 16 its first rules on implementing cybersecurity for critical entities and networks as part of its directive on measures for high, common cybersecurity levels across the EU. The NIS2 Directive also details cybersecurity risk-management measures, and when an incident should be considered significant enough to be reported to national authorities.

Grate recommends joining MfgISAC, as well as Dragos' Operations Technology-Cyber Emergency Readiness Team (dragos.com/community/ot-cert), which provides free policy and procedure templates, intelligence sharing and cybersecurity best practices for perimeter defenses and other protections.

Separate, disconnect and respond

Once physical and network perimeters are established and evaluated, and assets and configurations within them are inventoried, Grate reports users must develop an incident-response and disaster-recovery plan for every cyber-threat they expect to face. “There are two main playbooks for today's cyber-attacks. The first is how to respond to ransomware by separating networks and recovering equipment,” explains Grate. “The second is being able to disconnect islanded operations, similar to how IT does it, and creating temporary air gaps when necessary.”

AZBIL SECURES JAPANESE REFINERY’S NETWORK WITH OPC UA

Japanese oil refiner Idemitsu Kosan Co., Ltd., recently updated its plant information management systems to protect against cyber-attacks. It had been using legacy OPC networking to share data between onsite control systems, its process information management system (PIMS) and its business intelligence (BI) software that analyzes operations data. However, this communication strategy required firewall ports on the periphery of the system to be left open, which constituted a serious vulnerability. Consequently, Idemitsu Kosan replaced the PIMS with help from Azbil Corp., and implemented networking with the OPC Unified Architecture (UA) protocol, which restricted ports and improved the company's cybersecurity. Azbil provided its total plant information management system, which features long-term accumulation and sharing of operations data. In fact, the new network lets the BI software extract and display six months of data in 10 seconds, rather than the 90 seconds it used to require.

Source: Azbil
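As a rough sense of what an encrypted OPC UA connection involves, here is a minimal sketch using the open-source asyncua Python library; the endpoint, certificates and node ID are placeholders and do not reflect Azbil's or Idemitsu Kosan's actual configuration.

import asyncio
from asyncua import Client

async def main():
    # Single well-known endpoint, so only one firewall port needs to be open
    client = Client(url="opc.tcp://pims.example.local:4840/")
    # Sign and encrypt traffic with certificates instead of trusting open classic-OPC ports
    await client.set_security_string(
        "Basic256Sha256,SignAndEncrypt,client_cert.pem,client_key.pem")
    async with client:
        value = await client.get_node("ns=2;i=1001").read_value()
        print("Tag value:", value)

asyncio.run(main())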

For instance, Eosys recently worked with a Tier 1 automotive parts manufacturer that previously had isolated islands of automation, but subsequently allowed a direct connection between its plant and enterprise networks. This caused it to suffer a compromise incident during which its IT staff could see callouts from ransomware on the OT side. “This automotive client tried to keep running in an islanded state until they could address their outage, remove the ransomware, reload their HMIs and other equipment, and start back up,” adds Grate. “Their system had to run for six months with this malware, which had used an encrypted key to gain entry. During that time, they couldn’t use any of their crippled devices.”

Once the ransomware was removed and the automotive client was fully operational again, Grate reports it and Eosys drafted an incident-response plan, which included creating a defensible architecture based on network segmentation with zones and conduits advocated by the ISA/IEC 62443 cybersecurity standard. They also started authenticating users and passwords with an active-directory server (ADS), which identifies individuals based on their job roles. Finally, they built an asset inventory with vulnerability data, initiated network traffic evaluation, and adopted a zero-trust strategy, which is similar to whitelisting, but creates micro-perimeters that allow more granular segmentation than individual work cells, though not as granular as individual devices.

“This allows more precise segmentation by risk, and lets users define their risk appetite,” explains Grate. “Users are typically OK with accepting some risk. If a process is low-risk, they can segment it by each process work cell or larger. If a process is high-risk, they can segment closer down to the devices. Once these risk levels are established, users can begin to implement secure, remote access by using multi-factor authentication.”

Risk assess what’s old—including test cells

Years and decades of reliable service make legacy equipment and systems familiar, but familiarity doesn't make them cybersecure. In fact, older devices are typically more vulnerable than newer ones, which makes it especially important for them to be part of any thorough cybersecurity risk assessment (cyber-RA).

Andrew Harris, business development director for controls and instrumentation at ACS (www.acscm.com), a CSIA-member system integrator in Verona, Wis., near Madison, reports it's vital for users to conduct a cyber-RA and get a third-party intrusion test, which identifies their vulnerabilities, and shows where they need to upgrade and protect their devices and networks. After that, they need to do a new cyber-RA every five years during the life of their machines to address new devices and connections that have likely been added in the interim.

For instance, ACS recently helped a large automotive manufacturer upgrade 28 internal combustion engine (ICE) test cells as part of a larger facility modernization (www.acscm.com/projects/facility-upgrade-internal-combustion-engine-test-cells). These cells use dynamometers, outdated PLCs and a data acquisition (DAQ) system to run R&D cycle calibrations, validate engine performance prior to production, and generate critical, proprietary data. However, they ran on aging technology, while replacement parts were increasingly hard to find, and only one software engineer with the expertise required by its custom, legacy DAQ system remained on staff. The cells were also limited in the testing they could support, requiring units under test to be moved to different locations onsite to finish testing.

To upgrade the cells in place without impeding ongoing testing operations, ACS and its client scanned the available 116,000-square-ft space, allocated 30,000 square feet for the new testing space, and implemented phased replacements of PLCs and a DAQ system for the 28 ICE test cells in ACS’ custom-built cabinets, which interface with all of the automaker’s existing facility and test systems. ACS integrated the entire system into its customized design package, and subsequently improved the reliability and accuracy of test results.

Switches secure data gathering

Likewise, upgraded software enables the automaker’s PLCs and DAQ system to pull information from multiple sources in the cells and unit under test, and synchronize it with one timestamp. To better network, access and monitor the cells, Harris reports that ACS and its client added Cisco’s managed Ethernet switches using regular TCP/IP protocol between the machines and their plant network, which secures its multiple potential failure points by disallowing unauthorized access or communications. It also installs the switches in all devices for its projects before shipping to their sites. These switches provide cybersecurity by operating three layers on each switch, including:

• Layer 1 is a programmable patch panel that establishes a connection between ports, which is just the basis of an unmanaged switch.

• Layer 2 is where data packets are pushed to their destinations, using media access control (MAC) addresses.

• Layer 3 is the network layer with firewall functions, which performs network monitoring between the switch and others in the IT area, and monitors network traffic for anomalous activity or sources.

The switches can be configured for users to set up communications with known devices. This tells users which test cells and other equipment are supposed to be on the network, distinguishes them from laptops and other devices that may be seen as anomalous, and adds MAC addresses for other authorized devices.
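The allow-list behavior Harris describes can be pictured with a short, hypothetical sketch: addresses observed on the network are compared against the list of authorized test-cell equipment, and anything else is flagged as anomalous. The MAC addresses and device names below are placeholders.

```python
# Hypothetical sketch of a MAC-address allow-list check; addresses and device
# names are placeholders, not the automaker's real equipment.
AUTHORIZED = {
    "00:1a:2b:3c:4d:01": "test_cell_01_plc",
    "00:1a:2b:3c:4d:02": "test_cell_01_daq",
    "00:1a:2b:3c:4d:03": "facility_historian",
}

def classify(observed_macs):
    """Split observed MAC addresses into known equipment and anomalies."""
    known, anomalous = {}, []
    for mac in observed_macs:
        if mac in AUTHORIZED:
            known[mac] = AUTHORIZED[mac]
        else:
            anomalous.append(mac)   # e.g., a laptop plugged into a spare port
    return known, anomalous

known, anomalous = classify(["00:1a:2b:3c:4d:01", "a4:83:e7:11:22:33"])
print(known)      # {'00:1a:2b:3c:4d:01': 'test_cell_01_plc'}
print(anomalous)  # ['a4:83:e7:11:22:33'] -> investigate before allowing
```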

PHOENIX CONTACT STREAMLINES SCADA FOR OIL AND GAS PRODUCER

An upstream oil and gas producer operating wells in the multi-state, Denver-Julesburg shale formation employed SCADA systems for its remote well pads that relied on 3G and 4G wireless links without a virtual private network (VPN). Connections in the SCADA system were made via publicly hosted, Internet protocol (IP) addresses, which were masked by voluminous port-forwarding rules stored in Excel files, making technical servicing complex. These publicly hosted IP addresses created major vulnerabilities for the oil and gas company and its IT department, which couldn’t turn off access to this network. To add firewall security to the SCADA links, but still maintain relatively easy access to the local area network (LAN) via remote connections, the company worked with automation supplier Black Label Services (BLS) and Phoenix Contact’s Industrial Security and Networking Services team. Together, they implemented Phoenix Contact’s TC mGuard RS4000 for the 4G Verizon network, mGuard Device Management (MDM) software, and mGuard Secure Cloud (mSC) service to boost the oil and gas firm’s cybersecurity with functions such as an intelligent firewall and up to 250 IPsec-encrypted VPN tunnels. The new network eliminated the need for publicly hosted IP addresses and port-forwarding rules, which improved OT security, while connecting back to the SCADA system via a cellular network. Now, the oil and gas company controls who can access its sites remotely, and the IT team can quickly add or remove users as profiles or other needs change.

“Test cells produce lots of critical and proprietary data, so our automotive client wants to be confident none of it’s at risk of being lost or stolen,” says Harris. “This is just protecting information on physical equipment, so we’re not moving any of this data to the cloud or allowing any remote control. However, we still need to monitor this network’s connections and traffic to make sure that only communications with the automotive client’s locations are occurring. So far, no equipment has been compromised because we’re only monitoring switches and network traffic, and unauthorized third parties can’t get past the switches. Our client is already in the process of adding this solution to more locations.”

Listen in, deploy proxies

Because it’s an engineering company that also provides cybersecurity capabilities, Cybertrol Engineering (cybertrol.com) in Minneapolis, Minn., typically starts its projects by asking each customer what they know about their own systems. To sort out these issues, Cybertrol conducts

cyber-RAs along with its discovery process, but it also employs Cisco’s Cyber Vision software, which listens passively to networks where it’s deployed, and builds a functional map of which devices are talking to each other and what they’re talking about.

“Cyber Vision is especially helpful for OT because it doesn’t ruffle or interfere with plant-floor devices that often can’t handle typical network-sniffing software,” says Alexander Canfield, industrial information technology (IIT) manager at Cybertrol, which is also CSIA-certified, and works mainly in the food and beverage, life sciences, chemical, and medical device industries. “It also analyzes firmware, other components and software for vulnerabilities, and gives grades and advice about how to harden them.”

To further limit questionable network traffic, Cybertrol uses newer zero-trust strategies, which are similar to older whitelisting procedures that only allow communication between predefined devices. However, zero-trust is different and more advanced because it directs network traffic to permitted destinations by learning and baselining what’s allowed to talk to what, notifying users that acceptable communications are occurring, and allowing that traffic to move back and forth.
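A minimal sketch of that baselining idea, assuming traffic can be summarized as source/destination pairs: conversations seen during a known-good observation window become the baseline, and anything outside it is surfaced for review. The device names below are invented.

```python
# Minimal baselining sketch: learn which device pairs normally talk, then flag
# any conversation that was never seen during the baseline window.
from collections import Counter

def learn_baseline(flows):
    """flows: iterable of (source, destination) pairs captured while baselining."""
    return Counter(flows)

def review(flows, baseline):
    """Return conversations that never appeared in the baseline."""
    return [pair for pair in flows if pair not in baseline]

baseline = learn_baseline([("hmi_1", "plc_1"), ("scada", "plc_1"), ("scada", "plc_2")])
current = [("hmi_1", "plc_1"), ("laptop_77", "plc_2")]
print(review(current, baseline))  # [('laptop_77', 'plc_2')] -> prompt the user
```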

Similarly, Cybertrol employs software proxies, which are look-up tables that direct traffic from internal devices to reach specific services based on lists of actual destinations and directions. Authorized communications that ask for an authorized destination are sent to it. However, if a device doesn’t ask for the right direction, then proxies provide defense by preventing its communications from going anywhere.
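The look-up-table behavior can be sketched the same way: requests that name an authorized destination are forwarded, and everything else simply goes nowhere. The service names and addresses below are hypothetical, not Cybertrol's actual proxy configuration.

```python
# Hypothetical proxy look-up table: forward only requests for known services.
PROXY_TABLE = {
    "historian":    "10.10.20.5:443",    # placeholder internal addresses
    "patch_mirror": "10.10.20.8:8530",
}

def route(requested_service: str):
    """Return the real destination for an authorized request, or None to drop it."""
    destination = PROXY_TABLE.get(requested_service)
    if destination is None:
        return None   # unknown request: the proxy refuses to forward it anywhere
    return destination

print(route("historian"))     # 10.10.20.5:443
print(route("file_share_x"))  # None -> the communication goes nowhere
```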

Canfield adds that proxies can be deployed by installing a local software service on a virtual machine (VM) running Linux or another operating system, which handles communication handshakes, and funnels authorized traffic through the demilitarized zone

(DMZ) between the firewalls that separate network segments. “The choice of a local service and VM usually depends on which SCADA/HMI platform the user is running,” explains Canfield. “Operating systems like Linux can do this, but others such as Windows can do it just as well. These services can also run in software containers like Docker, but that technology is relatively new, so it’s not as widely used yet. We can also put them in an onsite VM by using the functional design map and running services in the DMZ.”

Solving switch issues

Beyond directing communications and resolving patch issues, Cybertrol also helps users solve individual cybersecurity problems. For example, one of its food and beverage clients recently experienced a network outage just after adding managed switches to Layer 3 of the existing network at its brownfield facility. These stacked, out-of-the-box switches were deployed as a main distribution frame (MDF) because the customer’s IT staff was working with their OT colleagues, and they wanted to remotely access the OT network.

Unfortunately, the new switches were connected without being configured, or even being assigned Internet protocol (IP) addresses, and this caused the whole network to crash when the users tried to add firewalls to the MDF. This occurred because these networks typically use spanning-tree protocols to prevent loop problems, but this requires that all devices use the same types of address assignments and configurations for the network to function, otherwise they’ll conflict with one another. In addition, the food and beverage manufacturer also had mismatches in its virtual local area network (VLAN), and this created conflicts between the new switches and the IT network.

“When we set up spanning-tree protocols within networks, we define one switch that’s in charge, otherwise the

network tries to figure it out by going through its MAC addresses,” explains Canfield. “In this case, we also went to the network and added basic Layer 3 programming. This allows the devices to keep functioning until an overall network analysis could be conducted, where IP addresses, names, spanning-tree topology and VLANs could be mapped out for further remediation.”
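The default election Canfield alludes to can be shown in a few lines: if no switch is deliberately given a low bridge priority, the protocol falls back to comparing MAC addresses. The priorities and addresses below are invented for illustration.

```python
# Sketch of spanning-tree root-bridge election: lowest priority wins, with the
# MAC address as the tiebreaker. Values are invented for illustration.
switches = [
    {"name": "mdf_core",  "priority": 4096,  "mac": "00:1a:2b:00:00:10"},
    {"name": "idf_line1", "priority": 32768, "mac": "00:1a:2b:00:00:02"},
    {"name": "idf_line2", "priority": 32768, "mac": "00:1a:2b:00:00:01"},
]

root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print(root["name"])   # mdf_core, because it was deliberately given a low priority

# If every switch were left at the default priority (32768), the lowest MAC
# address would win, which is rarely the switch you'd want in charge.
```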

Software-based security, maturity

GrayMatter’s Christensen reports that software can enforce deny-all philosophies, such as zero-trust, which only allow network access and communications that are prescribed ahead of time. This is similar to the older strategy of whitelisting, which only allows communications at certain sizes, speeds and times. Christensen explains that whitelisting is a subset of zero-trust and deny-all strategies because it prevents all communications unless they come from known, well-behaved sources, which define good behavior and baseline it, so they know what to deny.

Christensen reports that the best of today’s cybersecurity programs are extensions of process safety efforts and standards established 20 years ago, which have been carried through in guidelines for basic cybersecurity hygiene, such as NIST’s Cybersecurity Framework (CSF) 2.0, and standards like ISA/IEC 62443. Over the years, GrayMatter even developed its own OT Cybersecurity Maturity Model based on the collective experiences of its many clients. The model details the present security environments and capabilities that GrayMatter found among its clients, and defines their relative risks and the solutions they need to improve their cybersecurity protections and postures (Figure 1).

“The model’s range is the increasing risk vectors, which require users to address and solve the cybersecurity tasks at Level 1 before they can work on the subsequent levels,” says Christensen. “With clients, we talk about

where they want to be. Is segmenting their networks, backing up their data, and having a disaster recovery plan on Levels 1 and 2 enough? They need to complete these tasks before they can do anomaly and breach detection on Level 3, which is where we say the average client should be. This is because detection requires comparing network traffic to baseline measurements that are made possible by completing Levels 1 and 2. Users must know what real assets they have and how they’re performing and communicating before they can identify anomalies or develop fake assets to serve as honeypot traps for intruders.”

Figure 1: System integrator GrayMatter’s OT Cybersecurity Maturity Model details the present security environments and capabilities it found among its clients, and defines their relative risks and the solutions they need to improve their cybersecurity protections and postures. Source: GrayMatter

In fact, GrayMatter launched its GrayMatter-Guard deceptive software three years ago, which creates decoys by using contextualized rules and filtering to talk to other networks. It learns what communications are allowed, and presents deceptive assets, such as PLCs, HMIs and VFDs that represent the devices they’re sitting in front of. They don’t allow intruders to go further into networks than the fake devices they’re communicating with.

“These honeypots divert and react to cyber-probes and likely intrusions, but they also show how intruders are going after our networks,” explains Christensen. “This turns intruders into our penetration testers. The beauty part is we can remediate cyber-intrusion at the same time that they’re attacking fake attack surfaces.”
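GrayMatter-Guard itself is proprietary, so the sketch below only illustrates the generic honeypot idea: a fake asset listens on a port that should never see legitimate traffic and records whoever probes it. The port number and banner are invented.

```python
# Generic honeypot sketch (not GrayMatter-Guard): a fake "device" accepts
# connections it should never receive and logs who probed it.
import datetime
import socket

def run_decoy(host="0.0.0.0", port=10502):   # invented port for the decoy asset
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        print(f"Decoy listening on {host}:{port}")
        while True:
            conn, addr = server.accept()
            with conn:
                # Any connection here is suspicious by definition, so log it.
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                print(f"{stamp} probe from {addr[0]}:{addr[1]}")
                conn.sendall(b"device ready\r\n")   # fake banner; nothing behind it

if __name__ == "__main__":
    run_decoy()
```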

OT tools secure multiple plants

To achieve cybersecurity at a fleet of facilities, system integrators apply many of the same principles they use at individual sites. For example, a global, canned beverage producer with eight plants in the U.S. and Canada recently acquired several new facilities that are all different with no standard networking. “Some of the new sites have some network segmentation separating IT and OT, and some don’t. Some have old networking hardware,

30-40 physical servers, and 10-year-old data storage devices,” says Paige Minier, senior digital transformation manager at Gray Solutions. “They wanted to upgrade all these systems at once, so we helped them standardize on new switches and servers.”

Minier reports the system integrator performed an in-depth discovery at each of the eight facilities, examined their network architectures and topologies, and built a frame and rack elevation diagram. It shows spaces for cables, what devices are on which rack, and what open space is available. This let Gray Solutions and the beverage company revamp and rebuild the plants’ networks, and document the makes and models of PLCs, HMIs and other devices they deployed.

“Four of the plants were fairly standardized on Hirschmann managed switches, which provided segmentation, but much of the hardware was also obsolete and didn’t define standard clients or active models,” explains Minier. “The other four plants had different models of Cisco’s managed switches, but many of them were obsolete, too, while lots of components at the control-panel level were

unmanaged. Our discovery reports showed the plants’ present states and cybersecurity hygiene scoring, and detailed all of their end-of-life networks, servers and firewalls. Some sites had no segmentation, and others had never had any software patching, so there were tons of vulnerabilities.”

Even though these findings were grim, Minier adds they gave the beverage company a roadmap to a more secure state, which included a standardized and segmented network in accordance with CPwE and ISA/IEC 62443. It eventually implemented Palo Alto 3200 firewalls, Cisco Catalyst 9500 switches, and Cisco 9330 access switches with intermediate distribution frames (IDF).

“This was a very collaborative project, and had to be as our client learned to manage their cybersecurity,” adds Minier. “They also didn’t want static, one-time cybersecurity scans, but instead wanted to apply patches when necessary, and detect ongoing threats to the OT side. This was their next project, and they went with Nozomi’s network monitoring software after other capital projects were done.”

Pioneering plants push automation into the future

MUCH has changed since DuPont’s DeLisle chemical processing plant in Pass Christian, Miss., was named the first Plant of the Year honoree by FieldComm Group in 2002. Market dynamics for chemical processors, refineries, utilities and other processing industries have changed with the times, ushering in an era of innovative technology for field communications. It led to more productivity, better cost efficiency and progress toward sustainable operations.

Plant of the Year winners have spanned six continents, representing the chemical processing, upstream and downstream oil and gas, steel, utilities and wastewater sectors. Each winner is an upper quartile performer—a company in the top 25% or at the 75th percentile of an industry. They’ve realized more than $125 million in total operating expense savings and an estimated $800 of operations expense savings per device.

A common denominator is the use of HART, HART-IP, WirelessHART or Foundation Fieldbus technologies in their

FieldComm Group’s Plant of the Year honorees illustrate the progress made in productivity, efficiency and sustainability

plants. What’s unique about each plant is the way in which they integrated field communications with emerging technologies and procedures, such as completing entire digital transformations of plants, increasing reliability of communications and remote operations, adding predictive maintenance capabilities, creating sustainable operations, and, lately, the use of artificial intelligence in conjunction with fieldbus communications.

The feted plants showed innovative uses of real-time device diagnostics and process information integrated with control, information, asset management, safety systems and many other automation systems to lower operating costs, reduce unplanned downtime and improve operations. Meanwhile, winners consistently have above average scores for intelligent device adoption best practices.

In this article, we look back at a few of the success stories and how they mirrored many of the notable advancements in the process control sector overall.

An early digital transition adopter

The only two-time honoree, Danube Refinery of MOL Plc, located in Szazhalombatta, Hungary, was first recognized for its early endeavor into the then-emerging trend of digital transformation. In the early 2010s, as wireless communication began to overtake tethered infrastructure in many sectors, industrial operations lagged. However, Gábor Bereznai, then head of maintenance engineering at MOL Danube, realized the benefits of integrating process instrument diagnostics and device utilization with a computerized maintenance management system (CMMS) and asset management system (AMS) tied to SAP process control, bridging formerly separate islands of systems and creating maintenance triggers for transmitters, control valves and positioners. “This was done by having the diagnostic system inform the CMMS about the valves. This data could then be used in morning meetings with our maintenance team and other staff to help us do risk assessments and identify other problems,” he said when the company won its second award in 2015.

MOL Danube’s digital transformation, which started in 2010, saw further developments five years later. After winning the 2010 Plant of the Year award, Bereznai and his colleagues launched expansions and multiple diagnostic and maintenance projects to bring similar benefits to other facilities at MOL Danube. At the time, they’d expanded the use of FieldComm Group technologies to more than 4,700 connected devices on 15 operating units, including 42 WirelessHART devices and six gateways connected to SAP-PM CMMS.

Additional smart devices and enhancement of valve diagnostics and predictive maintenance resulted in saving $350,000 per year on potential shutdowns with smart device monitoring. MOL Danube also set up a cross-functional, risk-assessment team that evaluated 20,000 device notifications per year.

The work that MOL Danube did inspired FieldComm Group to develop JSON-based descriptors for key HART commands. Known as DeviceInfo Files, these will soon be available for download from FieldComm Group’s web page.

Doing things better

Mangalore Refinery and Petrochemicals Ltd. (MRPL) was at the forefront of India’s burgeoning oil, gas and petrochemical sector in 2018. Its “do things better” culture inspired it to be an innovator in hydrocarbon processing, adopting digital communications for process control and striving for more effective utilization of its resources and facilities.

Its use of FieldComm Group technologies was no different, as it utilized installation savings and advanced diagnostics from more than 9,000 Foundation Fieldbus and 5,000 HART devices. At the time it was awarded the Plant of the Year, MRPL reported it saved more than $6 million in project costs using those devices.

Beyond determining the technical advantages of Foundation Fieldbus and other FieldComm Group technologies, Basavarajappa Sudarshan, chief general manager for electrical and instrumentation at MRPL and project team leader, reported that he and his team had to convince colleagues, including operators and managers at MRPL, that migrating to digital communications would be worthwhile and wouldn’t hinder operations. They were successful.

MRPL started its journey into FieldComm Group technologies when it installed all-digital communications on its isomerization unit, including implementing all process control loops with Foundation Fieldbus with control in the field (CIF) functions, as well as HART transmitters used in its safety instrumented systems (SIS). On the strength of that success, MRPL also commissioned more than 10 process units, a cogeneration plant and utilities at its refinery with Foundation Fieldbus, HART and WirelessHART.

Overcoming a hurricane

Designing, building, integrating, commissioning and starting up a new process plant is difficult enough, but dealing with a hurricane and flooding at the same time is just plain unreasonable. Nevertheless, that’s what Chevron Phillips Chemical Company LP achieved when it undertook its U.S. Gulf Coast (USGC) petrochemicals project and built a new unit at the plant for ethylene production. Located at its Cedar Bayou facility in Baytown, Texas, the plant has a design capacity of 1.725 million metric tons/year (3.8 billion pounds/year).

The USGC ethylene project started in 2012. Mechanical completion was done at the end of 2017, and commissioning was finished and startup began in March 2018. Near the end of construction, the

Cedar Bayou facility also weathered Hurricane Harvey in August 2017, and used its smart HART and Foundation Fieldbus devices to help hasten the plant’s recovery.

The plant’s automation architecture consists of a distributed control system (DCS) with field control station (FCS) controllers and safety instrumented systems (SIS). “When Foundation Fieldbus and HART technology were chosen for this project, the DCS was selected because it offered an integrated asset management software platform to use with the digital information from the field instrumentation,” said Amit Ajmeri, DCS specialist for the USGC project at Chevron Phillips Chemical Company, when the project won the Plant of the Year designation in 2019.

Near the end of the construction phase, Hurricane Harvey and its record-breaking rain halted the entire project. The plant experienced some flooding. Fortunately, most of the plant’s instruments, I/O, controls and field junction boxes were located above the flooding, and weren’t water damaged. The project team had verified data for their healthy-device list before Harvey arrived, and confirmed most devices were still in the same healthy condition after the storm, so they didn’t have to perform any diagnostic checks on them.

Developing a disciplined device strategy

To understand the difference data-based decision-making can have on process performance, look no further than Dow’s Texas Operations (TXO) on the Gulf Coast. Originally built in the 1940s in Freeport, Texas, as a plant to extract magnesium from seawater, the chemical company’s TXO operations grew, and now span sites in Deer Park, Freeport, Houston, La Porte, Seadrift and Texas City.

Throughout the 2010s, integrated data flows from some 50,000 smart instruments became central to its reliability objectives, helping to save tens of millions of dollars by boosting overall equipment effectiveness (OEE) and trimming instrumentation-related production losses by 80%, according to the company.

The largest integrated chemical complex in the western hemisphere, Dow-TXO accounts for 30% of Dow’s products sold in the U.S. While Dow-TXO standardized on HART smart instrumentation communications in 2000, the investment started to pay off in 2014 with the rollout of disciplined device-integration reliability strategies and widespread implementation of a standardized instrumented asset management system (IAMS) approach. Using IAMS, Dow-TXO commissioned and diagnosed more than 40,000 smart loops, achieving hundreds of thousands of dollars in savings. Dow also used IAMS to perform routine loop-checking and SIS validation on more than 5,000 loops, and leveraged FieldComm Group’s FDI architecture to streamline device integration across the many varieties of smart instrumentation and device profile versions.

Leveraging the Industrial Internet of Things

HMEL’s Guru Gobind Singh refinery in Bathinda, Punjab, is an industrial anchor for the economic development of northern India. It sits at the northern terminus of a 1,017-km pipeline from Mundra, Gujarat, on India’s west coast, where tankers deliver the refinery’s raw materials from abroad. Formed in 2007 as a public-private partnership joint venture between Hindustan Petroleum Corp. Ltd. (HPCL, a government of India enterprise) and Mittal Energy Investments Pte. Ltd. (MEIL) of Singapore, HPCL-Mittal Energy Ltd. (HMEL) broke ground on the new refinery in 2008 and commenced operations in 2012.

From the start, process automation was at the heart of HMEL’s strategic vision of building a smart refinery. In addition to investments in state-of-the-art process automation, WirelessHART implementations at HMEL leveraged the Industrial Internet of Things (IIoT) to be part of a framework that HMEL calls the “digital refinery of the future.” Among the technologies it used to create its vision are wireless sensors to track the sounds made by steam traps and valves, temperature and vibration signatures of fin fan heat exchangers, and even real-time locations of plant personnel.

Leveraging data from WirelessHART acoustic sensors from Emerson enabled HMEL to monitor 138 control valves and pressure safety valves across eight refinery units for the passing of hydrocarbons. It helped proactively manage environmental issues, advance the realization of odorless operations, and recover the value of hydrocarbons that would otherwise be lost to flaring. The associated management application, part of Emerson’s PlantWeb Insight offering, alerts personnel to issues via persona-based email notifications.

HMEL also implemented methodologies in areas such as manufacturing execution systems, operator training simulation and corrosion inspection management. “We optimize operations using multitasking and cross-training of operations and maintenance personnel,” explained Jatinder Kansal, assistant general manager, instrumentation at HMEL, when the refinery was named Plant of the Year. “Its best-in-class integration framework and enterprise performance

management and reporting systems further support informed decision-making.”

Kansal attributed the refinery’s successes not only to FieldComm Group technologies and multi-vendor interoperability, but also to leadership support and cross-discipline teams, including those responsible for instrumentation, process, operations and IT.

Reducing configuration time

Because China-based Wanhua Chemical Group serves fast-moving and growing sectors, maintaining a competitive edge in its operations and gaining value from its control systems is of the utmost importance. That's why the company turned to FieldComm Group’s Foundation Fieldbus, HART and WirelessHART technologies to serve as the centerpiece of its digital transformation, company representatives said when it was named Plant of the Year.

Wanhua is one of the largest chemical product manufacturers in the world, serving four main industries: polyurethanes, petrochemicals, fine chemicals and emerging materials, such as those used in batteries and electronics. Installation of FieldComm Group’s technologies has been the basis of the plant’s improved operations, maintenance and asset productivity in real-time applications. In addition, FieldComm Group’s technology helped the company save tens of millions of dollars in operational costs ranging from reductions in configuration and commissioning times to remote maintenance capabilities.

Instrumentation with FieldComm Group’s technologies is prominent throughout the Yantai plant. “Wanhua Chemical’s Yantai Industrial Park features about 30 sets of production equipment with a production scale of nearly 500,000 I/O points,” explained Lee Caihua, director of instrumentation for Wanhua Chemical’s Yantai Industrial Park, via a translator. “Intelligent measuring instruments and valve positioners in the plant make full use of FieldComm Group’s Foundation Fieldbus, HART and WirelessHART technology, which operate in up to 180,000 I/O points and account for 96% of the analog instruments and valves.”

Wanhua Chemical’s representatives told Control there were several value-added benefits to using FieldComm Group’s technologies at Yantai Industrial Park. Intelligent instruments provided valuable health information about the plant, which was integrated with control and asset management systems to save time during commissioning and throughout the plant’s life. “By using Foundation Fieldbus and HART instruments and AMS, [we] reduced

commissioning time, and provided more efficient and dependable operations,” Lee said.

Thanks to a 60% reduction in configuration time, Lee added the plant saved an estimated $3 million.

Adding artificial intelligence to the mix

Advanced digital technologies—particularly HART-enabled digital diagnostic tools and predictive analytics combined with artificial intelligence (AI)—support the transition from time-based maintenance to condition-based maintenance at Japan-based Daikin Industries Ltd.’s Kashima plant.

Daikin, the only global manufacturer of both air conditioning equipment and refrigerants, was the most recent Plant of the Year honoree, and its creative use of AI integrated with FieldComm Group technologies earned it the distinction.

The AI system is an anomaly-prediction and detection system that continuously monitors process data via a data server and detects abnormal behavior in processes. Past operations data is used as big data from which the AI learns the plant’s normal behavior. Based on that learning, it predicts signs of small anomalies in current sensor values, so potential problems can be addressed before they cause sudden production shutdowns.
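Daikin's system is proprietary, but the underlying idea of learning normal behavior from past data and scoring current values against it can be sketched with a simple mean-and-deviation model. The process values and threshold below are invented.

```python
# Minimal sketch of learning "normal" from past process data and flagging small
# anomalies in current sensor values. Numbers and threshold are invented.
import statistics

def learn_normal(history):
    """Summarize past sensor values as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def anomaly_score(value, mean, stdev):
    """How many standard deviations the current value sits from normal."""
    return abs(value - mean) / stdev

history = [151.2, 150.8, 151.5, 150.9, 151.1, 150.7, 151.3]   # past process data
mean, stdev = learn_normal(history)

for current in (151.0, 153.9):
    score = anomaly_score(current, mean, stdev)
    flag = "anomaly - investigate" if score > 3 else "normal"
    print(f"{current}: {score:.1f} standard deviations ({flag})")
```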

Although 25% of the devices at the Kashima plant were HART-compatible, the systems were not. Daikin chose a Fast Ethernet-based HART converter to utilize HART information, making it easy to install in various DCS/PLCs without affecting the existing control system. To create a more advanced environment, Daikin selected the AI to detect wear, tear and corrosion on control valves and blocked instrument lines in real time. By using AI to learn process values, Daikin verified the results from its sophisticated diagnostic algorithms.

However, Daikin officials are quick to point out that the success of AI depends on the accuracy of the data it receives. Online anomaly detection systems require accurate process data, especially in batch processes. That's where HART devices made a difference, Masumi Yoshida of Daikin Industries' engineering department explained last year.

“The HART signals of each device are wonderful data packed with the know-how of each device manufacturer. By having AI learn this along with various process data in the plant, it's more likely to be able to learn various signs of equipment anomalies,” he said.

Soon, we'll see more ways future Plant of the Year honorees create efficiencies and cost savings with Foundation Fieldbus, HART, and WirelessHART technologies.

System integrator

Arthur G. Russell proves new product concepts, designs and development by testing, standardizing, modularizing and partnering

THERE’S no such thing as an entirely sure thing. However, many new products and components — and the processes that produce them — could be designed, developed and produced with considerably less risk, along with efforts to make them more efficient and profitable.

For instance, OEM and system integrator Arthur G. Russell Co. (AGR, arthurgrussell.com) in Bristol, Conn., always had an internal R&D and mock-up area, which was part of its due diligence process when clients requested a specific job.

AGR builds and integrates assembly and packaging equipment from pilot to large-scale, high-throughput (20-1,800 parts per minute) for pharmaceutical, medical device,

and other industry clients. Its machines typically contain multiple flow, temperature, pressure and other small process applications, such as measuring reagents down to microliter levels. Meanwhile, more customers are also seeking to take automation in derisked stages, starting with proof-of-concept (PoC) and pilot stages.

"Originally, this proof-of-concept area was just for us. Now, it's a service that's part of the business that customers ask for," says Brian Romano, technical development director at AGR. "Many clients are near-shoring, reshoring and retooling, reducing headcounts, and/or taking new products from concept to production. Big changes like this mostly emerged a year or two after

the COVID-19 pandemic, and are sustained by the recent slowing of capital expenditures. So, customers want to derisk where they can by doing small-scale, PoCs, and follow them with full-scale automation and production trials, and we try to help with each of the stages.”

Test small before scaling up

Because many clients need to perform inspections as part of their equipment development and production, Romano reports that AGR’s PoCs often used to include mocking up vision inspections, selecting lenses, lights and cameras in different applications, and writing evaluation reports, so they could eventually be rolled into full systems. Now, customers are asking for prototypes

In system integrator Arthur G. Russell’s R&D and mock-up area in Bristol, Conn., senior R&D engineer Jerry Buchas adjusts a robot-actuated loader that brings a client’s product to the proper location in a case, so its end-of-arm tooling (EOT) with an attached actuator can push the product into place. AGR helps customers develop, test, and derisk their designs and requirements for new end-products and production systems. Its clients want to derisk because shorter product lifecycles and increasing demand for variety are compressing development and commissioning schedules. Photo: Derek Chamberlain. Source: Arthur G. Russell

complete with accessories like fixtures for holding and assembling products, so they can quickly translate them into pilots and full-scale production.

"We very recently worked on a medical device assembly system, and the client wanted a tabletop PoC for a process that provided the full method for how one station was supposed to work," explains Romano. "It was challenging to manage its parts, but the feasibility wasn't completely known since no prototype was completed previously. We had to demonstrate the operation of this challenging station. This PoC was good practice for the customer and AGR."

Schedule and standardize

Romano adds that clients are also seeking to derisk projects due to

multiple prior failures of large assembly machines that caused low production yields and poor end-product quality. "Many companies have reported to us they’ve been burned by going straight to full production, where costly machines had to be scrapped or substantially reworked when it turned out they could only run at low overall equipment effectiveness (OEE)."

At the same time, clients want to derisk because shorter product lifecycles and increasing demand for variety are squeezing available product development and commissioning times. Along with other constraints imposed by retooling to make newly developed products, these efforts also severely compress related schedules. This requires OEMs, system integrators and their clients to be flexible enough

to devise and adopt other innovative strategies, according to Romano.

"For the sake of delivery schedules and PoC costs, sometimes customers can provide spare components, vendors can loan demonstration equipment, or similar or alternate models can be used for the PoC," explains Romano. "However, these flexible approaches can also leverage common, standardized and modular parts, as well as common construction strategies carried out by trained engineers and assembly personnel. This might result in a pick-and-place gantry in a standardized production cell or added robots, vision systems, glue dispensing pumps or whatever is needed."

To provide concepts and products that achieve goals like improved design for service, adaptability, maintenance, continuous improvement and data gathering, Romano reports that more components are being designed with automation in mind. These designs strive to:

• Reduce part counts;

• Minimize handling and reorientation of assemblies as they're made;

• Feed and join parts more easily and consistently;

• Inspect and test parts more easily and consistently; and

• Determine if these tasks can be automated.

"Standardization can be a big help, but every project is different and still needs custom automation to satisfy specific assembly needs," adds Romano.

Prove in small steps

Romano reports that derisking any process or product must be done progressively, so each revision at

each stage can be thoroughly evaluated and further tweaked if needed. "Derisking must be done in steps, so we can make certain to prove each one," adds Romano. "We've done several medical device projects in the last couple of years, and doing the PoC was essential in almost every case. A derisking project can take anywhere from three to 12 months, depending on size and complexity."

AGR typically starts its derisking projects with small-scale, manual or semi-automatic operations, which it defines as tasks used to make the required parts, such as 200 to 1,000 parts to satisfy the customer's PoC. Next, the OEM examines semi-automated functions, and considers whether they'd profit by adding a robot or another operation that’s more fully automated. AGR and its client decide how to scale up their solution after appropriate levels of automation, control, safety, networking, data acquisition and analysis are considered and implemented.

" Customers want to derisk where they can by doing small-scale proof-of-concepts (PoC), and follow them with full-scale automation and production trials."

Common language, too

Just as standardized and modular components simplify and streamline its projects—and reduce unexpected variability and risk—AGR employs a standard method for PLC, HMI, and robotic programming and communications. Part of this is using common structures for programming, PLC tag names, and setting up PLC communication tasks. This methodology lets users transfer and share data between machines in a more straightforward manner, makes it easier to anticipate what results can be expected on the other side of a process, and lets users coordinate handoffs between Rockwell PLCs and Epson robots that AGR partners with via EtherNet/IP protocol and networking.

AGR also employs a standardized, user-defined tag (UDT) structure, and

WHAT MAKES A GOOD PRODUCT DEVELOPMENT AND DERISKING PARTNER?

Based on its own experiences and input from its customers, system integrator Arthur G. Russell reports the most valuable traits of an aligned technology partner can be identified by asking the following questions:

• Do they have a lab for developing key processes?

• Are they capable of starting with manual and semi-automated processes, and is this capability affordable?

• Do they have the ability to do a pilot scale?

• Can they scale to full production?

• Do they have in-house capabilities for all design and production needs?

• Do they offer the proper solution for the job? (For example, are multiple platforms available?)

• Where are their capabilities and supply chains located? (Footprint considerations are important for reshoring.)

• Do they balance robust, proven technology with new technology, and apply the proper one?

• Can they match the proper level of technology that the customer can handle?

• Do they create designs with customer capabilities in mind?

• Can they help develop user-required specifications (URS)?

a library of add-on instructions (AOI) and subroutines. This lets them approach object-oriented coding methods with a library of functions that can be easily dragged in and configured to bring programs online with less programming and faster debugging. This ease of use helps with all project stages and derisking from PoC to full-scale automation.
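AGR's actual libraries are Rockwell UDTs and AOIs, but the underlying idea of a standardized record shared by every machine can be sketched generically; the field names below are hypothetical.

```python
# Illustrative sketch of a standardized, UDT-like record shared by every
# station, so handoff data always has the same shape. Field names are invented;
# AGR's real implementation uses Rockwell UDTs and AOIs.
from dataclasses import dataclass

@dataclass
class StationStatus:
    station_id: str
    part_present: bool
    cycle_complete: bool
    fault_code: int
    parts_made: int

def ready_for_handoff(status: StationStatus) -> bool:
    """Downstream stations can rely on the same fields from every machine."""
    return status.part_present and status.cycle_complete and status.fault_code == 0

upstream = StationStatus("load_robot", True, True, 0, 1042)
print(ready_for_handoff(upstream))  # True -> the next station can index the part
```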

Roadmaps and budget partners

Beyond today's immediate projects, Romano reports that derisking efforts and the standard and modular capabilities they encourage can likewise be helpful during a component or process' entire lifecycle and enable a client's complete automation roadmap.

"Developing a new product or process from scratch obviously takes a lot of R&D, so it's useful if we can use a solution that's already been proven, and determine if it can be scaled up," explains Romano. "As for roadmaps, where the customer is at with their process or business dictates much of the type of machine or capability they'll need. There are entanglements here because, if the company doesn't have a full maintenance staff, sufficient spare parts or available automation support, then trying to implement a sophisticated machine will likely be less successful. This is why workforce shortage and skills-gap issues must also be addressed. Machine operators, maintenance technicians, and process and manufacturing engineers must be ready to provide maintenance and support, and handle machine faults and related troubleshooting."

Romano adds that AGR can also expand its outreach to do maintenance and troubleshooting, but it's more efficient and less costly if users can close their skills gaps and have in-house maintenance and support. "For example, if a

client has a semi-automated or fully automated machine on their plant floor, we ask how they're currently handling maintenance and support,” says Romano. “Do they need us to provide maintenance services as part of their ongoing automation process?”

While evaluating and derisking, other questions arise, such as whether a client can go 100% automated, or whether they need manual steps between some work cells. "There's also the question of cost because a completely automated machine may cost several million dollars, while the semi-automated machine may be a fraction of that,” adds Romano. “The question is, what's their need? Once we have an answer, we can use this knowledge to provide development and derisking that more closely matches their budget and in-house support capabilities."

Romano reports that capital expenditure (CapEx) budgeting is the process of determining the value of a potential investment project.

The three most common approaches to project selection are:

• Payback period determines how long it would take to produce enough revenue to recover the initial investment. Related metrics include percentage return on equity (ROE), percentage return on investment (ROI), and return on assets (ROA), or how much profit the investment can generate. Romano reports the typical payback period varies from six months to two years.

• Internal rate of return (IRR) is the expected return on a project. If the rate is higher than the cost of capital, it's a good project.

• Net present value (NPV) is the sum of all future cash flows over the investment's lifetime, discounted to the present value, which may be the most effective method. (A brief numerical sketch of all three metrics follows.)
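For illustration, here is a quick numerical sketch of the three metrics using an invented investment: a $500,000 machine that returns $200,000 per year for five years. The IRR is found by simple bisection rather than a financial library.

```python
# Worked sketch of payback, NPV and IRR with invented cash flows:
# a $500,000 machine returning $200,000 per year for five years.
cash_flows = [-500_000, 200_000, 200_000, 200_000, 200_000, 200_000]

def payback_years(flows):
    """Years until cumulative cash flow turns positive."""
    total = flows[0]
    for year, inflow in enumerate(flows[1:], start=1):
        total += inflow
        if total >= 0:
            return year
    return None

def npv(rate, flows):
    """Net present value at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    """Rate where NPV = 0, found by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(payback_years(cash_flows))        # 3 -> three years to recover the investment
print(round(npv(0.10, cash_flows)))     # ~258,157 at a 10% cost of capital
print(f"{irr(cash_flows):.1%}")         # ~28.6% internal rate of return
```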

Because answering these and related questions requires research, Romano adds that it helps to have a well-aligned technology partner do some of the R&D and specifications.

"A good partner can handle parts of the roadmap," says Romano. "In the end, however, clients and users must decide what steps they want to take to conceptualize, innovate, design, write requirements documents, verify designs, conduct pilot projects, and eventually, go full-scale. Ultimately, derisking must balance standardization with creativity, and an aligned technology partner can provide insights and the experience needed to bring any level of automation to fruition.”

Mobilizing thin-client management

Pepperl+Fuchs and Rockwell Automation debut ThinManager-ready tablets

IT’S been 25 years since the launch of ThinManager software. This device management and content delivery platform is used by plant operators and maintenance technicians in manufacturing and process industries to centralize thin-client device management. This year, with the help of Pepperl+Fuchs, operators and technicians will have an even easier and safer time working with ThinManager.

Attendees at Rockwell Automation Fair in Anaheim, Calif., this month will get a look at the companies’ newest innovation that makes ThinManager even more mobile, and further transforms the management of thin clients on the plant floor.

Pepperl+Fuchs’ Tab-IND and Pad-Ex industrial-grade tablets for harsh environments are now the first ThinManager-ready tablet solutions, according to the companies.

The tablets can run ThinManager’s proprietary firmware, and remove an operating system, such as Windows, from the plant floor, says Kimberly Gonzalez, senior product specialist at Rockwell Automation. “We changed the device that enables it to receive ThinManager firmware and be completely managed by ThinManager software,” she says. “[Tab-IND and Pad-Ex] are Windows tablets, but for this new offering, we stripped those operating systems and gave them ThinManager’s special sauce. That’s the novelty of this solution that we're excited about.”

This innovation provides a more streamlined, mobile operating device that can be managed just like traditional thin client devices in a ThinManager-managed plant or factory. It

also increases the cybersecurity of the solution. “There’s nothing on that device that can be susceptible to security vulnerabilities,” Gonzalez adds.

Because Pepperl+Fuchs removes the operating system, the device no longer requires local maintenance such as patches and updates.

Tab-IND is an industrial-grade tablet with a bright, 700 and 800 cd/m2 display for outdoor use. It includes glove and pen support. The 10-in. tablet includes a shortcut button and fingerprint sensor for fast authentication.

It has an extended, -20 °C to 50 °C temperature range. It’s a rugged unit that’s designed for a long lifecycle. It will be available for at least five years, according to Pepperl+Fuchs.

SmartBlack technology allows easy USB accessory integration. Accessories include scanner frames, docking stations, holders and power supplies.

Pad-Ex is an 11-in. tablet featuring an Intel Core i5-1235U, 12th Gen processor. It has 16 GB of RAM and 256 GB of internal storage on an NVMe PCIe SSD. It connects to a docking station via a 35-pin Pogo connector.

I/O ports include one Thunderbolt 4, one USB 3.2 Gen 2 (Type A), one micro-HDMI, one audio in/out (combo jack), one microSD card (microSDXC) and one DC-in jack.

These features are designed to create a truly mobile, thin-client management system, unlike those used by many of today’s operations managers, who find their thin clients tethered.

“It creates an ease of mobility, especially with the rugged tablet in harsh environments,” adds Gonzalez. And that’s good news for operations managers and maintenance technicians with a lot of ground to cover.

ThinManager-ready tablet series Tab-IND (above) and Pad-Ex (right). Source: Pepperl+Fuchs

Calibration instills confidence

Control’s monthly resources guide

BLOCK, BATH AND PNEUMATIC

This eight-minute video, “What is an instrument calibrator?,” covers calibrator types, such as block/drywell for RTDs and thermocouples, fluid bath with immersed sensors, pneumatic, and signal reference with regulated pressures. It also describes source and simulation modes and troubleshooting two-wire current loops. It’s at www.youtube.com/watch?v=dzQYv2m6ApA

REALPARS www.realpars.com

PRESSURE EBOOK AND VIDEOS

This online and downloadable e-book, “Fundamentals of pressure calibration,” covers accuracy versus uncertainty, precision, linearity, hysteresis, repeatability, stability versus drift, as-found versus as-left and other terminology, as well as reasons and how often to calibrate, locations, instruments, procedures, traceability and standards, and accredited calibration standards. It’s at info.mensor.com/pressure-calibration-fundamentals#pressurecalibrationterminology

MENSOR www.mensor.com

TWO ROSEMOUNT PRESSURE VIDEOS

This four-minute video, “Rosemount 3051 pressure transmitter calibration—learn about zero, lower trim and upper trim,” details the working principles of Emerson’s pressure transmitter and how to calibrate it. It’s at www.youtube.com/watch?v=tMhiQ4_IgSE. It’s accompanied by a five-minute video, “Rosemount 3051 pressure transmitter—how to enter lower range value (LRV) and upper range value (URV),” that shows how to adjust and change units. It’s located at www.youtube.com/watch?v=grdlPAPBZAI

INSTRUMENT CALIBRATION ACADEMY www.youtube.com/@InstrumentCalibration

ACCURACY AND METROLOGY

This online article, “Calibration: definition and importance for measuring instruments,” covers standards, processes and steps, checking and adjusting, instruments and measuring devices, accuracy and precision, and metrology’s role in quality and precision. It’s at www.fujielectric.fr/en/blog/calibration-definition-importance-for-measuring-instrument.

A second article, “Calibration of pressure transmitters: the guide for accurate measurement,” covers nuances, key stages and how-to steps. It's at www.fujielectric.fr/en/blog/calibration-pressure-transmitter-guide-accurate-measurement

FUJI ELECTRIC FRANCE www.fujielectric.fr

REASSURANCE FOR QUALITY AND SAFETY

This almost six-minute video, “Instrumentation calibration: an introduction,” shows how confidence in precise process variables enables quality products and safe operations. It demonstrates how regular calibration compares plant instrumentation to a known quantity to check for accuracy, and performs adjustments if device accuracy is outside allowed tolerances. It’s at www.youtube.com/watch?v=SamV6zpRNgg

INSTRUMENTATION & CONTROL www.instrumentationcontrol.info

VENDORS AND BIBLIOGRAPHY

This PDF of an apparent book chapter, “Calibrations in process control” by Halit Eren, covers suppliers, definitions, errors and uncertainties, benefits, methods and procedures, personnel, laboratories, records, support software, measurement assurance planning and a bibliography. It’s at tinyurl.com/2h9fn9td

RESEARCH GATE www.researchgate.net

BACK-TO-BASICS IN 7 MINUTES

This seven-minute video, “Back to basics: calibration” by Jim Montague of Control, covers field instruments, repeatability, accuracy, misuse and misunderstandings, and potential pitfalls. It’s at www.youtube.com/watch?v=sOGKEnB2RJE

CONTROL www.controlglobal.com

BENEFITS AND FEASIBILITY

This online article, “Automated calibration practices in process manufacturing” by Jim Shields, covers the costs, benefits and feasibility of calibration and documentation, as well as safety, quality, revenue, compliance and savings. It’s at www.processingmagazine.com/process-control-automation/instrumentation/article/15586701/automated-calibration-practices-in-process-manufacturing

PROCESSING www.processingmagazine.com

BASELINE FROM BEFORE

The last version of this column, "Process calibrators have no illusions,” includes resources from Beamex, Fluke, ISA, Endress+Hauser and others. It’s at www.controlglobal.com/measure/calibration/article/11291195/top-10-instrument-calibration-resources

CONTROL www.controlglobal.com

This column is moderated by Béla Lipták, who also edits the Instrument and Automation Engineers’ Handbook, 5th edition, and authored the recently published textbook, Controlling the Future, which focuses on the control of AI and climate processes. If you have a question about measurement, control, optimization or automation, please send it to liptakbela@aol.com

When you send a question, please include your full name, job title and company or organization affiliation.

Economics of pumping stations

How does changing speeds affect pump efficiency?

Q: I’m participating in an industrial research project in southern Spain, which uses parabolic trough collection of solar energy. Thermal oil runs through the collection lines, which are solar heated, and the hot oil generates steam for turbo-generated electricity production. The oil is circulated by a centrifugal pump using a variable-frequency speed controller. At noon, a higher oil-circulation rate is needed. During lower-irradiance mornings and afternoons, slower pump speed is required.

The oil circulation rate has a 2:1 ratio, but since the required fluid power is proportional to the cube of the flow rate, the pumping power has an 8:1 ratio. The pump is a power “parasite.” Given fluid duty, its efficiency peaks at one speed, and then falls in a parabolic-like manner for higher or lower speeds. Our simulations indicate, rather than one large, centrifugal pump, if there were several smaller pumps in parallel, they could all run during high-circulation periods. We could turn off some during lower-circulation periods to provide fluid power, which would permit those operating pumps to run at speeds closer to their efficiency peaks, reducing the parasitic power demand.

Have others implemented such a scheme? Is there a best number of smaller pumps? Are there mechanical or control issues with switching pumps on and off? Does this scheme provide an energy savings that justifies the overhead and initial capital cost? Because this scheme lessens operating time on pumps, does this have a measurable economic impact on pump life and maintenance costs?

A: Before answering the specific question, I’ll discuss pumping station controls and their selection in general.

System curves

A pump is a liquid transportation device, which must develop enough pressure to overcome the hydrostatic and frictional resistance of the process into which it delivers the required fluid.

The head requirement at a certain flow is the sum of the static and the friction heads that the pump must overcome. Pumping systems are categorized according to the location of their operating point on a head-to-flow plot (Figure 1). The static head portion (Hs) of the total head (Ht) doesn’t vary with flow rate because it’s only a function of the elevation or back pressure the pump is operating against. On the other hand, friction losses increase with the square of flow, and tend to be steeper if the piping is undersized or dirty.

I assume Dr. Rhinehart's application falls into Zone 4 in Figure 1 because the oil is circulated in a horizontal plane, where there’s practically no Hs and the system-curve parabola starts from zero head at zero flow. As the zone numbers drop, the friction component of the total head also drops.

When the system curve is flat (Zone 1), there’s little advantage to using variable-speed pumping. On the other hand, in processes where the system curve is steep (Zone 4), substantial energy savings can be obtained by using multiple, variable-speed pumps, which start and stop as a function of required flow.

Centrifugal pumps

Centrifugal pumps use centrifugal force to throw liquid radially outward. They have high efficiencies of up to 90% for some large pumps, but generally between 60% and 85% as a function of pump design and size. They have only one moving part (the impeller with bearings), deliver steady flow, have a rangeability of 4:1, and are relatively insensitive to air-locking. However, they are susceptible to cavitation.

The shape of the head-capacity curve can droop, be flat, or be normal. For this discussion,

Figure 3: Compared to constant-speed pumping (red line), using variable-speed pumping stations (blue line) results in two types of savings. Saving 1 results from operating at a reduced pump discharge head, and Saving 2 from operating in a higher pump-efficiency region.

Figure 1: Pumping station operating zones are defined by the ratio of the total pump discharge head (Ht) divided by the static head (Hs) of their operating point.

Figure 2: When using a constant speed pump, the operating point is fixed at the point where the two curves cross.

I assume the pump curves are normal, stable at all flows, and suited for throttling services. Figure 2 shows the head-capacity curve of a constant-speed, centrifugal pump (dotted line) and a system head-capacity curve of an assumed process (solid line). In the case of constant-speed pumps, the dotted pump curve is fixed, and the operating point is where the pump and system curves cross. With a fixed pump curve, the only way to change the flow (Q) is to change the system curve, which can only be done by placing a throttling control valve on the pump discharge, and artificially increasing or decreasing the total friction loss of the system (Figure 2). In this configuration, the pumping energy required to overcome the pressure drop through the control valve is wasted energy, which can be eliminated by using variable-speed pumps.

Switching from constant to variable-speed pumping can save in two ways. Figure 3 shows both, as well as the system curves for constant (red) and variable-speed (blue) pumping.

In the case of the red system curve (constant speed), when the flow must be reduced from F1 to F2, the pump discharge pressure has to rise from P3 to P2. The increase is introduced by further closing the control valve on the pump discharge. Overcoming this increased pressure drop wastes energy, which can be eliminated by using variable-speed pumps (Saving 1 in Figure 3).

In the case of using variable-speed pumps (blue system curve in Figure 3), reducing the flow from F1 to F2 is achieved by reducing the pump speed to Point 4. This eliminates the need for throttling or even having a discharge valve (Saving 1). However, it also produces Saving 2 because the red system curve operates on a less efficient point on the pump curve (80% efficiency at Point 2) than the blue system curve (85% efficiency at Point 1).
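To put rough numbers on the two savings in Figure 3, the sketch below uses an invented all-friction system curve (H = kQ²) and the 80% and 85% efficiencies quoted above; the throttled pump head is also invented, so the percentages are illustrative only.

```python
# Numerical sketch of Figure 3's two savings, using an invented all-friction
# system (H = k*Q^2) and the 80%/85% efficiencies quoted in the text.
k = 1.0                  # friction coefficient; head = 1.0 at full flow Q = 1.0
Q2 = 0.7                 # reduced flow

H_system = k * Q2**2             # head the process actually needs: 0.49
H_pump_throttled = 1.2           # invented: fixed-speed pump head at Q2; the valve burns the excess

# Hydraulic power = Q * H (relative units); shaft power = hydraulic power / efficiency.
throttled_shaft = (Q2 * H_pump_throttled) / 0.80   # constant speed plus control valve
variable_shaft  = (Q2 * H_system) / 0.85           # speed reduced to match the system curve

print(round(throttled_shaft, 2))   # 1.05
print(round(variable_shaft, 2))    # 0.40
print(f"{1 - variable_shaft / throttled_shaft:.0%} less pumping power")  # ~62%
```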

Users must also consider the overall system efficiency index (SEI) of the installation, which is the product of the pump, motor, variable-speed drive and utilization efficiencies.

They must also consider whether the 4:1 rangeability of a single variable-speed pump is sufficient and, if not, whether to use multiple, smaller pumps that are individually less efficient.
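As a rough illustration of the SEI (the component efficiencies below are assumptions, not figures from the article):

SEI = pump × motor × drive × utilization efficiency ≈ 0.80 × 0.95 × 0.97 × 0.90 ≈ 0.66

so, in this hypothetical case, only about two-thirds of the electrical energy drawn by the installation ends up as useful pumping work.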

Building blocks (and I/O) multiply links

I/O modules and terminal blocks provide new sizes, signals, networking and flexibility

FOR 3- OR 4-CONDUCTOR SENSORS, ACTUATORS

PTIO sensor/actuator terminal blocks make it easy to connect three- or four-conductor sensors and actuators. These terminal blocks use Phoenix Contact’s push-in technology for quick connections. PTIO’s compact design optimizes panel space, while offering a reliable, secure connection. These terminal blocks also provide LED indication, test points and grounding options, and offer bipolar blocks for sensors with two signal lines.

PHOENIX CONTACT

www.phoenixcontact.com/ptio

12-MM, PROPORTIONAL VALVE MODULE

750-1632 proportional valve module (PVM) provides a small footprint (12 mm) for reduced system components and engineering. It also offers improved response time, superior precision and better diagnostics. With the features of larger control modules, 750-1632 includes a wide current range, and can control two single-coil valves or one dual-coil valve, either unidirectionally or bidirectionally, with high repeatability.

WAGO

www.wago.com

MULTIFUNCTIONAL, ANALOG I/O FLEXIBILITY

EL4374 EtherCAT terminal is a combined input/output for -10/0 to 10 V or -20/0/4 to 20 mA signals at 1 ksps per channel. Its ability to measure a range of signals lets users deploy one terminal instead of several. EL4374 combines two analog inputs and two analog outputs in a 12-mm housing. With a signal measuring range of 107% of nominal, EL4374 supports commissioning with sensor values in the limit range.

BECKHOFF

www.beckhoff.com/en-us/products/i-o/ethercat-terminals/el4xxx-analog-output/el4374.html

IP67, REMOTE I/O WITH IO-LINK

NXR Series IP67, remote I/O with IO-Link networking streamlines operations across multiple organizational levels. It reduces inventory and setup time, offering a flexible solution for easier IIoT adoption. Combining IO-Link and digital I/O in one part number, NXR Series is available in EtherNet/IP and EtherCAT models. The EtherCAT model features PC-less maintenance for quick field replacement.

OMRON AUTOMATION AMERICAS

800-556-6766; automation.omron.com/en/us/products/family/Remote-IO-NXR-Series

FLEXIBLE DCS MARSHALLING

Klippon Connect W2C and W2T signal-wiring and signal-marshalling DCS terminal blocks have a single- and double-level arrangement, which combines with the provision of their four basic functions—fuse, feed-through, disconnect and ground. This lets users achieve maximum flexibility when wiring devices in the field, including the ability to make last-minute changes without altering wiring that’s already been set up.

WEIDMULLER USA

804-794-2877; www.weidmuller.com/en/products/connectivity/terminal_blocks/dcs_marshalling.jsp#wm-330903

SEAMLESS ETHERNET CONNECTIVITY

Fourth-generation 750-363 EtherNet/ IP fieldbus coupler seamlessly connects EtherNet/IP networks to Wago’s modular I/O system. With two Ethernet interfaces and an integrated switch, it allows line topology wiring, eliminating external switches or hubs. 750-363 supports standard protocols such as HTTP(S), BootP, DHCP, DNS, SNMP and (S)FTP. It creates a local process image of connected I/O modules, and features a web server and DIP switch configuration for easy IP addressing.

GALCO www.galco.com

TOOL-FREE WIRING, QUICK-RELEASE LEVER

P-LUP DIN-rail terminal blocks and lever-up system are crafted for speed and simplicity, allowing tool-free wiring that saves time with each connection. With push-in design (PID) technology, solid and ferruled, stranded wires are secured, guaranteeing reliable performance. The quick-release lever offers effortless adjustments, maximizing convenience. With exceptional contact stability and long-lasting reliability, P-LUP terminal blocks achieve electrical efficiency.

DINKLE CORP. www.dinkle.com

BUS COUPLER ENABLES SIGNAL CONNECTIVITY

SX8R compact, direct or DIN-rail mounted, bus coupler bridges a range of I/O modules with network protocols, such as EtherNet/IP, Modbus TCP and CC-Link. SX8R works natively with Idec’s PLC FC6A I/O modules that are available in more than 40 models, encompassing discrete and analog signal types and counts. Each SX8R supports up to seven I/O modules on the base unit, and up to eight additional modules by using an expansion power supply.

IDEC CORP.

800-262-IDEC (4332); lp.idec.com/SX8R

SPRING CLAMPS WITH 2.5-10 MM CENTERLINES

Spring-clamp terminal blocks feature a patented, cage-clamp design, and are available with 2.5-mm to 10-mm centerlines. They ensure high-quality, reliable wire terminations, and come "wire ready" with an open terminal chamber. The spring automatically adjusts for different wire sizes, providing consistent connection force. Metz also offers spring-clamp blocks in reflow-compatible designs with options, including color sequencing and printing of figures or symbols.

METZ CONNECT USA


SINGLE-POSITION, MIX-AND-MATCH COLORS

Single-position, screwless terminal blocks with blue, green, red, white and yellow colors can be mixed for color-coded wire connections. TBL-0014-750, TBL-0015-750 and TBL-0016-1000 series feature a push-in, spring or lever-actuated connection for simpler terminations, 7.5 mm or 10 mm pitches, and one to six position options. Their receptacles support 24 to 8 AWG wire gauges and -40 °C to 105 °C temperatures.

CUI DEVICES

732-389-1300; bit.ly/2XAlc7E; www.cuidevices.com

IO-LINK MASTER FOR PNEUMATICS AND IIOT

G3 Class A, IO-Link master provides smart and analog sensor connectivity for the Aventics Series G3 Fieldbus platform. It supports EtherNet/IP DLR and Profinet protocols, and achieves pneumatic valve control via direct, digital communications with controllers. Users can include multiple IO-Link masters on one G3 platform. It has eight Class A ports per module to support multiple IO-Link smart and analog sensors, which can be distributed up to 30 meters away.

EMERSON Emerson.com/en-us/catalog/aventics-g3


I/O-COMPATIBLE WITH 32-POINT OPERATION LED

I/O-compatible, space-saving terminal block from Mitsubishi Electric supports its Q series, has a 32-point operation LED display, and supports DIN-rail and screw mounting. It meets CE marking requirements, and connects to Mitsubishi Electric PLCs. Since terminal pitch, compatible devices and applicable standards differ by product, it’s necessary to select an appropriate interface block for the application.

MISUMI USA misumi.info/mitsubishi-terminal-block

GREG MCMILLAN

Gregory K. McMillan captures the wisdom of talented leaders in process control, and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams, and (web-only) Top 10 lists. Find more of Greg's conceptual and principle-based knowledge in his Control Talk blog. Greg welcomes comments and column suggestions at ControlTalk@endeavorb2b.com

Keys to successful migration projects

How to get the most out of control system migration projects

GREG: I’ve advocated using dynamic simulation to drive process control improvements and train operators. But there’s another, sometimes overlooked, use case for getting the most out of control system migration projects. Julie Smith, DuPont group manager and global automation and process control leader at the Engineering Technology Center, offers insights gained over many decades. Julie, has your team seen value in this use case?

JULIE: Yes, we’ve been using simulation to assist migration projects for more than 30 years. It’s important to note that migration projects can be extremely challenging to execute well. There’s little return on investment, and a lot of risk to operations. Project teams are under the gun to cut over from their old system to the new one during the plant’s normal maintenance turnaround. This is only a few weeks, with little margin for error.

Project teams typically try to mitigate this risk with a like-for-like approach to design, or just do a copy job. However, there’s no such thing. The new system will not only look and feel different, but the execution engine will also behave differently, particularly if the legacy system is more than 20 years old. Execution cycles will be faster, and there will be more tasks done in parallel rather than sequentially. There may also be process safety implications because grandfathered, legacy safety systems now need modifications to meet current standards. These risks can be addressed by proper simulation and testing before the cutover.

GREG: How can simulation help?

JULIE: A simulation is a virtual replica of a control system connected to a virtual process. It’s a digital twin. The virtual plant must have the same dynamic response as the real plant, making it difficult for operators to discern the difference. You want people to question if it's real. This creates buy-in for using the tool. Once you have buy-in, people want to use it, and they often try scenarios they’d never attempt otherwise.

GREG: How do you get that level of fidelity?

JULIE: Finding the right level of detail is always a challenge. I recommend using first-principles, chemical-engineering models of all unit operations and chemical components in scope. Incorporate the basic physical properties, reaction kinetics and thermodynamics. Start simple, build incrementally, and add fidelity only as you need it. For example, assume the reactor is perfectly mixed, and that valve stiction is negligible. Compare the model response to actual plant data, and adjust as needed to match reality. Starting simple not only speeds up simulation development, but also makes it easier for others to modify the model in the future.
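As a generic illustration of that "start simple" advice (this is a hypothetical Python sketch, not DuPont's internal, object-oriented tool, and every parameter below is an assumption), a perfectly mixed, jacket-cooled reactor with a single first-order reaction reduces to two balances that can be stepped in time:

```python
# Hypothetical "start simple" first-principles model: a perfectly mixed,
# jacket-cooled CSTR with one first-order exothermic reaction A -> B.
# All parameters are illustrative assumptions, not plant data.
import math

V      = 5.0       # reactor volume, m^3
F      = 0.02      # feed and outlet flow, m^3/s
Ca_in  = 2000.0    # feed concentration of A, mol/m^3
T_in   = 300.0     # feed temperature, K
k0     = 7.2e10    # Arrhenius pre-exponential factor, 1/s
Ea     = 8.31e4    # activation energy, J/mol
R      = 8.314     # gas constant, J/(mol*K)
dHr    = -5.0e4    # heat of reaction, J/mol (negative = exothermic)
rho_cp = 4.0e6     # volumetric heat capacity, J/(m^3*K)
UA     = 5.0e4     # jacket heat-transfer coefficient times area, W/K
T_j    = 295.0     # jacket temperature, K (held constant here)

def derivatives(Ca, T):
    """Component and energy balances for the perfectly mixed reactor."""
    k = k0 * math.exp(-Ea / (R * T))    # first-order rate constant, 1/s
    r = k * Ca                          # reaction rate, mol/(m^3*s)
    dCa = F / V * (Ca_in - Ca) - r
    dT = F / V * (T_in - T) - dHr * r / rho_cp - UA * (T - T_j) / (rho_cp * V)
    return dCa, dT

# Explicit Euler integration; a stiff solver may be needed for faster kinetics.
Ca, T, dt = Ca_in, T_in, 1.0
for _ in range(3600):                   # simulate one hour in 1-s steps
    dCa, dT = derivatives(Ca, T)
    Ca += dt * dCa
    T += dt * dT

print(f"After 1 h: Ca = {Ca:.0f} mol/m^3, T = {T:.1f} K")
```

From a skeleton like this, fidelity such as jacket dynamics, valve behavior or additional components can be layered on only where comparison against plant data shows it’s needed.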

GREG: In my experience, modeling a process using a first-principles model, with all the thermodynamic calculations and material balances and energy balances and reaction kinetics, requires significant process knowledge. The process experts at the site don’t necessarily have the time or skills to develop models. How do you address that issue?

JULIE: We address it in two ways. First, the modeling tool must be easy to use. We maintained an internal tool for exactly this reason. Everything is object-oriented, so the user doesn’t need to worry about writing or solving differential equations. We also have a small team of internal experts, who can translate plant requirements into model requirements, allowing plant experts to describe desired behaviors at a high level. Then, we create the model behind the scenes. It’s a joint effort, but our team does the heavy lifting.

GREG: Collaboration with the plant to build the model is important. How do you use it to help the migration project?

JULIE: At a recent site, we worked with the system integrator doing the coding to incorporate our models into their internal testing before factory acceptance testing (FAT). It was challenging at first because the integrator didn’t want to deviate from their typical project plan, which had time allotted to create a tieback simulation. We started small with an isolated area of the plant that had little interaction with other areas, but had high hazards. We showed that model-based testing gave a deeper and more thorough level of checkout, and identified instances where process safety gaps would have been left unmitigated.

Next, we moved to the main process areas. It was a batch process, with multiple reactors in parallel and highly exothermic reactions. We simulated and tested more than 100 batches offline using our models. The results were amazing. We documented $4.8 million in hard savings from avoided shutdown time and waste-disposal costs. Demands on the safety system, had they occurred, would have cost about another $3 million.

The following year, another process area was scheduled for cutover. We reused our models, and the integrator reused the corrected logic instead of going through their standard reverse engineering process. We not only saved another $3.8 million in yield losses and waste-disposal costs, but also returned the asset to production three days early. How many migration projects can say that?

This year, we had the final phase, which included the thermal oxidizer (tox) for the plant. It’s not only a critical, environmental-control device, but also affects every operating area onsite. If the tox is down, nobody runs. By simulating the process and running validation checks ahead of time, we saved another $6.5 million, and started up four days early.

GREG: What makes the high-fidelity model so much more effective than the simple tieback version?

JULIE: A tieback model is fine if you’re only concerned with discrete manufacturing or similar processes without a dynamic component, simple operations and low risks. Once you combine reaction chemistry and process safety, you must be concerned with abnormal-situation handling, particularly for batch processes. A tieback model never veers off the happy path. Everything always goes to its commanded state. Real plants do deviate sometimes, and the consequences can be severe. How will the deviation be detected? Can the layers of protection catch it in time? How will the unit recover? These questions are best answered by a high-fidelity, dynamic model.
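To see why a tieback never leaves the happy path, consider this generic sketch (the class name and delay are illustrative assumptions, not any vendor's test harness): the "process" simply echoes each command back after a few scans, so the logic under test can never experience a stuck valve, a missed interlock or a runaway reaction.

```python
# Generic sketch of a tieback model: each output is simply "tied back"
# to its input after a fixed delay, so the logic under test always sees
# the commanded state. Names and delays are illustrative assumptions.
import collections

class TiebackValve:
    """Feedback equals the command after a fixed number of scans; no physics."""
    def __init__(self, delay_scans=3):
        self._pipeline = collections.deque([False] * delay_scans)

    def scan(self, commanded_open):
        self._pipeline.append(commanded_open)
        return self._pipeline.popleft()   # delayed echo of the command

valve = TiebackValve()
for scan_number in range(6):
    feedback = valve.scan(commanded_open=True)
    print(f"scan {scan_number}: open feedback = {feedback}")

# A dynamic model would instead compute flow, pressure and temperature
# responses, and could fail to reach the commanded state -- exactly the
# abnormal cases a tieback model can never exercise.
```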

GREG: All the processes I’ve worked with are complex and challenging, and they’re often pushed beyond their original design. High-fidelity, dynamic models are the key to finding and addressing many issues, and taking advantage of the improvements in process control technology offered by migration to the latest control systems. These simulations are essential for best batch profile, feedforward, ratio, override and state-based control to deal with upsets. This makes recovery from abnormal operation faster and safer, avoiding the need for actions by safety instrumented systems. Simulations are also critical for developing procedure automation, inferential measurements, model predictive control and real-time optimization. You can start by focusing on higher-fidelity simulations for unit operations that have complex dynamics and pose the greatest risk to plant safety, the environment and process performance, such as bioreactors, chemical reactors, compressors, columns and neutralizers. I include dynamics from instrumentation, which are particularly important for composition, pH, pressure, surge and temperature control.

“I’ve often been told that water/wastewater processes and other control applications are taken for granted and even invisible until they’re unavailable. Well, the same goes for democracy.”

Embrace boring

Election judging parallels process control and automation

I’VE mentioned more than once that I never networked or system-integrated anything more complex than plugging in a laptop cable or an old stereo speaker. However, that statement isn’t accurate because I just completed my fifth 16-hour shift as an election judge in suburban Cook County, Ill., and that job parallels many of the plant floors I’ve covered.

First, each neighborhood precinct gets a big metal box or voter supply carrier (VSC) with touchscreens, printers, cabling, scanners, an uninterruptible power supply (UPS) and other components. Second, there are never enough judges to go around, so we’re short on personnel, just like many brain-drained plants. Third, many components can be hard to set up and configure, and seem to temperamentally crash, recover their network links, and go down again. Fourth, even though it’s mostly excellent, there are still a few gaps and misplaced instructions in Cook County’s more than 200-page election judge manual.

At one point, we even discovered that our ballot scanner was running on battery power, which was quickly dwindling despite being plugged in. We eventually traced the problem to an additional plug near the UPS that wasn’t plugged in. A classic, plant-floor scenario.

Despite these and other snags, once the system settled down, it performed flawlessly, and tallied paper and electronic ballots for several hundred voters in our precinct, which is just one of thousands in Cook County.

By the way, for any onlookers inclined to bellyache about rigged elections when they don’t like the results, I’d encourage them to serve as judges. I’m sure ours, and precincts nationwide, could use the help, and it would let suspicious individuals assume some ownership of the process. I think they’d quickly realize there are so many passwords, safeguards and layers of overlapping documentation that Chicago’s quaint reputation for “vote early, vote often” is impossible to live up to these days.

For example, votes on touchscreens are printed out on paper, scanned electronically, and both versions are preserved. Meanwhile, paper ballots are filled out manually, but they’re also scanned electronically, and those two versions are again preserved. This allows all formats to be compared if questions arise.

In another parallel with process control, software and digitalization are taking over more voting and tabulation functions. Previously, judges used bound paper volumes to check signatures, while touchscreens had paper tape accessories to confirm electronic ballots. Now, iPads confirm identities via barcodes, mostly on driver’s licenses, but they’re still attached to paper printers that spit out receipts that the judges carefully collect, resolve with each scanner’s tally, and turn in with our other documentation.

Just as in other fields and disciplines, the religion of paper persists in elections, healthcare, dry cleaning and elsewhere. I sympathize because I still do all my interviews with ballpoints on paper pads, which don’t require power sources that can go out or batteries that can die. However, I also recognize that I’m in the information business, not the paper-and-ink business, and software and other innovations that streamline these tasks, and do so securely, can improve the odds they’ll continue to get done.

Admittedly, just like registering and waiting to vote, many election-support chores can be just as tedious as implementing and maintaining arrays of I/O, Ethernet switches, PLCs, transmitters and other process control devices. However, just because some tasks are boring, it doesn’t mean they’re not essential and even sacred. I’ve often been told that water/wastewater processes and other control applications are taken for granted, and even invisible, until they’re unavailable. Well, the same goes for democracy, so we shouldn’t give it up too easily.

Go Beyond.

Emerson’s DeltaV™ Automation Platform provides contextualized data and unique, actionable insights so you can improve production and embrace the future of innovation—with certainty. Venture beyond. Visit Emerson.com/DeltaV
