











GENESIS, the next-generation SCADA platform from Mitsubishi Electric Iconics Digital Solutions, is built to meet the evolving needs of modern process industries. Based on decades of automation expertise, it delivers scalability, flexibility and intelligence, optimising operations across manufacturing, water and wastewater, energy, and beyond.
With unlimited licensing, organisations can scale effortlessly from a single site to enterprise-wide deployments. Intelligent asset modelling and an auto asset builder provide out-of-the-box connectivity to Mitsubishi Electric robots and devices, simplifying integration and configuration.
GENESIS supports automated dashboard generation and high-performance 2D and 3D graphics for HTML5, creating interactive, real-time operational views accessible from any device. Operators gain seamless visualisation, reducing set-up time and improving decision-making.
Providing universal connectivity, GENESIS integrates with existing infrastructure, legacy equipment and third-party systems via open protocols. Built-in enterprise-grade security supports Microsoft Entra ID and SAML, helping to ensure robust identity and access management. Developed under IEC 62443-4-1 certified processes, GENESIS follows globally recognised security standards.
With futureproof flexibility and rapid deployment, GENESIS helps industries confidently embrace automation and digital transformation, enabling greater efficiency, operational resilience and peace of mind in an increasingly connected world.
Skills shortages have become an increasing problem for the process industries over a number of years. Seasoned professionals retiring from their jobs take with them decades of invaluable experience, while a new generation of digital natives is being recruited to replace them. While digital skills provide unique advantages, there’s no substitute for decades of real-life plant floor experience.
Fortunately, modern digital control systems are now providing edge computing technology that can give the new generation of engineers the insights they need to deal more confidently with complex process automation and control challenges. Together with modern data analytics, these technologies are also empowering manufacturers to implement proactive methodologies that anticipate and address potential maintenance issues before they become failures, delivering improved performance and uptime, as well as greater operational efficiency.
Of course these technologies are not without risk: increasing digitalisation and connectivity implies greater cyber risk and therefore the need for a greater understanding of digital technologies. Cybersecurity is therefore an area where digital natives may find a new niche in industry.
Our technical tips article this month addresses the question of pressure compensation for thermal mass flowmeters. Manufacturers of these devices tend to market them as using a measuring principle that is pressure- and temperature-independent. However, in gas flow measurement this does not take into account the differing behaviours of certain common gases under varying conditions. This article gives some practical tips on how to deal with this issue.
As always, more detailed daily news and new automation products can be found on processonline.com.au, and by subscribing to our biweekly email newsletter.
Glenn Johnson Editor pt@wfmedia.com.au
Kim Fenrich*
As industrial automation systems grow more sophisticated, technology is being called upon to bridge the gap between more experienced workers and the new generation of digital natives.
One of the most pressing challenges facing process industries today is the skills shortage. Every year, more seasoned professionals are retiring from their jobs, taking with them decades of invaluable experience and know-how. Meanwhile, a new generation of digital natives is being recruited to replace them. Without wishing to unfairly generalise, it’s safe to say this new generation is more tech savvy and digitally connected than their predecessors. While these skills provide unique advantages, there’s no substitute for decades of real-life plant floor experience when it comes to understanding the nuances of industrial process control.
For example, by just looking at the process from end to end, or even listening to a piece of equipment, an experienced engineer may know exactly what attention it needs, as well as what can be safely ignored. Once this vital and often highly specific application knowledge is lost, it is very difficult to get it back.
Today, innovations in control systems are emerging that can help ensure that data is turned into actionable insights, while also giving today’s workforce the tools they need to confidently grapple with complex process automation and control challenges.
Meanwhile, industrial automation systems are becoming more and more sophisticated. Sensors, and the data they generate, can provide real-time insight into the performance, condition and maintenance needs of process applications at a component level. The challenge that remains is how to turn this data into actionable insights. An experienced engineer can see an alert or indicator and intuitively know what is likely to happen next. Perhaps the process can be run for another couple of months without the need for any intervention. Or it may be that the affected component needs to be immediately taken offline and maintained. Making these calls correctly requires a deep, learned, and intuitive understanding not just of the individual process, but also the subsequent impact on dependent and adjacent processes, and the bigger picture of the production facility.
Having vast amounts of data flying around the plant at any given time creates other challenges; namely, making sense of all that data, turning it into relevant, understandable information, and delivering it to the right person or team in a timely manner. In a modern plant many of the connected devices are generating vast quantities of data that was unavailable and unseen just 10 years ago. Take an electric motor, for example. Previously, the data available for that motor’s operation would be limited to whether it was on or off, and what speed it was running at. Today, sensors fitted to a motor can tell you how much it is vibrating, its RPM, temperature, load and power consumption, run hours, and other health parameters. With a fleet of motors, and other equipment to monitor, staying on top of all this data has become increasingly difficult.
To that end, the data needs to be manageable. Digital natives, who are used to seamless and intuitive user experiences in consumer electronics, expect this information to be meaningful, and at their fingertips. They don’t want to have to go to a different console or asset management system or open a spreadsheet to access it. The challenge for automation manufacturers is to make sure that relevant information reaches the right personnel, without overloading them with noise.
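To make the motor example concrete, the telemetry a modern smart motor reports could be modelled as a simple record with a naive health check. This is purely an illustrative sketch: the field names and alert thresholds below are hypothetical and not taken from any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class MotorTelemetry:
    """One snapshot of the health parameters a modern smart motor can report.

    All field names and units are illustrative, not from a specific vendor.
    """
    vibration_mm_s: float  # RMS vibration velocity
    rpm: float
    temperature_c: float
    load_pct: float
    power_kw: float
    run_hours: float

def needs_attention(t: MotorTelemetry) -> bool:
    """Naive example thresholds -- real limits depend on the motor class."""
    return t.vibration_mm_s > 4.5 or t.temperature_c > 90.0

m = MotorTelemetry(vibration_mm_s=2.1, rpm=1480, temperature_c=65.0,
                   load_pct=78.0, power_kw=11.2, run_hours=12500)
print(needs_attention(m))  # False -- within the example limits
```

The point of a check like this is not the thresholds themselves, but that raw streams of numbers are reduced to a single actionable flag before they reach a person.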
Cloud and edge computing are increasingly extending and redefining the role of the automation system. This evolution creates an environment that resonates with today’s tech-savvy engineering talent. In older automation systems, all process data resided within the core control layer. This limited who could access it, restricted flexibility, and slowed innovation because of the risks involved in modifying a centralised system. Modern edge and cloud technologies remove these barriers, enabling separation of concerns and making data freely accessible for faster, safer improvements.
Today, edge computing is creating a new space that sits outside the traditional boundaries of core process control: let’s call it the ‘digital habitat’. Within the core process control system sit familiar elements including field devices, controllers, I/O, HMI software and engineering software. Process data leaving this core enters the digital habitat, where it’s consumed by apps — hosted either in the cloud or locally at the network’s edge via an edge device that can be used to drive plant monitoring and optimisation (M&O) applications. These applications process the vast amount of input data and present it in a format that is meaningful to each person’s role and function. Crucially, the architecture ensures that core process control is isolated from this app level, eliminating risks that could affect the ability of the DCS to perform its key process control, safety, monitoring and communications functions.
With data and apps formally isolated from core process control, information in this new environment is freely available to a wide array of users without interfering with primary DCS functions or endangering plant operations. The new environment provides a secure, governed data pathway to connect, collect and store contextualised process data. This can be leveraged by M&O and other analytic applications, using tools like AI/ML to unlock actionable insights, leading to improved operational performance, extended asset life, better decision-making and expanded serviceability.
Modern technology can spot subtle patterns in massive datasets and use those insights to inform intelligent decision-making. Yet in an industrial process plant, raw computing power often struggles to match the hard-won intuition of a veteran operator with decades of experience. For example, a gradual change in a sensor reading on rotating equipment, like a pump or compressor, might go unnoticed by most. But an experienced engineer may recognise it as a sign of a seemingly unrelated issue that may soon cause costly, disruptive problems.
A digital environment helps capture and share that invaluable expertise. By sending process data from core control systems to the cloud or an edge platform, AI-powered monitors can analyse it and send an alert when something’s not right. Preserving and passing on know-how is essential, and digital tools make it accessible for the next generation.
For the workforce of tomorrow, these developments open new possibilities for effective plant management. While most problems will still require some degree of human intervention, the process of identifying them can at least be automated, saving time and resources and enabling early intervention. As the ‘digital habitat’ expands, engineers can now increasingly use smart devices to receive ‘anytime, anywhere’ updates. Real-time data on process and plant status from the DCS can be delivered to these devices as actionable information, empowering engineers to resolve potential issues faster, reducing the safety, cost and downtime implications of unresolved problems being left to escalate.
The aim of edge-based M&O applications is to enable engineers to view a range of plant data and alarms via a standard web browser, using intuitive and secure dashboards that provide a consistent, easy-to-use interface and augment the ability of workers to ensure safe and efficient plant performance. As these technologies evolve further, both engineers and technicians will be able to operate at a higher level of autonomy by focusing more on value-added activities, and less on tedious tasks. New asset start-ups will also be achievable more quickly and, with the increased automation of engineering, more smoothly with less risk of error.
The digital habitat of the edge environment can solve multiple problems at once. First, it provides a mechanism for capturing continuous application data. The apps within the edge environment can monitor process data for any anomalies and identify items that need attention. This saves time by reducing the need for manual inspection of equipment, while also serving to retain and store vital process knowledge. Over time, as the historical data set enlarges, it will help operators to know exactly what is going on in their systems at all times, allowing them to prioritise work accordingly.
Crucially, the edge applications need to be designed to be seamless and intuitive.
Should the system identify that a device or process needs attention, a maintenance engineer with a toolbox may still need to go and inspect it, but they will have a better understanding of the urgency. In addition, edge computing enables engineers to detect or diagnose anomalies that would have previously gone unnoticed. While sensors and monitoring systems can provide granular data on an asset’s health and maintenance needs, this data can be overwhelming to less experienced personnel. No longer can the experience of a seasoned engineer be counted on to distinguish what requires immediate attention and what can be left for another day. The edge environment, and the apps that reside within it, aim to preserve that experience by providing meaningful insights to a new generation of engineers, when and where they are needed.
Shifting non-core functions to a flexible, secure digital environment supports greater agility and adaptability while equipping the digital native workforce with the technology and knowledge required to maintain tomorrow’s productivity.
*Kim Fenrich is currently Global Product Marketing Manager for ABB and has over 30 years of diverse automation industry experience, across a wide variety of industries including power generation, processing and manufacturing. Kim holds a Bachelor of Science in Electrical Engineering from the University of Akron, and currently lives near Wilmington, North Carolina.
The Wieland Electric SLC4 range of safety light curtains offers finger, hand and body detection capability.
LAPP Australia Pty Ltd
The BTC industrial box thin clients feature a durable, enclosed, IP4x-rated aluminium housing designed to withstand harsh conditions in process industries.
Pepperl+Fuchs (Aust) Pty Ltd
As a batch size 1 safety relay, the myPNOZ is a modular system designed to deliver tailored safety solutions for diverse industrial applications.
Pilz Australia Industrial Automation LP
The EnerIT Smart Energy Management System provides real-time monitoring of energy-related systems within a manufacturing facility.
Metromatics Pty Ltd
The VECOW EAC-5100-OOB is an embedded AI computing system designed for edge AI applications, powered by NVIDIA Jetson AGX Orin technology. Delivering up to 275 TOPS of AI performance, it features the latest NVIDIA Ampere architecture with 2048 CUDA cores and 64 Tensor cores, to provide a performance boost for real-time AI processing. This system supports a wide range of power inputs from 9–50 V, making it suitable for industrial, automotive and mobile applications.
The EAC-5100-OOB offers a wide range of connectivity options, including six GigE LAN ports, four with PoE+ support, five USB 3.1 ports, and one digital display supporting 4K60 resolution. It also features a PCIe Gen 3 x8 slot, isolated CANbus and RS-232/422/485 ports. Its rugged design allows for performance in harsh environments, operating in temperatures from -25 to 70°C and meeting military-grade standards for shock and vibration resistance.
With built-in out-of-band (OOB) management, the EAC-5100-OOB enables remote disaster recovery, allowing operators to manage and reboot devices even during system failures.
Backplane Systems Technology Pty Ltd www.backplane.com.au
The DMM4 extension module from SICK is designed to enhance efficiency by enabling fully configurable muting applications with SICK’s deTec4 safety light curtain and deTem4 safety multibeam sensor.
With DMM4 and Safety Designer software, users can individually configure muting parameters and access advanced features such as smart restart interlock and partial muting modes. This level of flexibility is designed for precise differentiation between humans and transported materials, optimising safety without disrupting material flow.
Benefits include maximising machine uptime with customisable, application-specific muting configurations, and reducing installation effort with 10 direct connections for muting sensors, safety command devices, and up to three safety sensors. It is also possible to switch between different muting modes while machines are running. Process efficiency can be improved with an object size-dependent smart restart interlock, minimising downtime caused by small objects.
Integration and maintenance is simplified with easy device connections that reduce cabling and commissioning efforts, while space utilisation is optimised as even closely positioned safety devices remain unaffected by one another.
SICK Pty Ltd
www.sick.com.au
Advances in technology are steadily reducing the lifespan of electronic devices. This is resulting in steadily growing demand for finite raw materials. At the same time, e-waste is continuing to pile up. Worldwide annual e-waste generation could rise to as much as 74 million metric tons by 2030. However, only a small fraction of all electronic devices is recycled: over 80% of the e-waste generated ends up in landfill or incinerators, including all the valuable raw materials, precious metals and rare earths contained in the electronics. Incineration can also release hazardous chemicals and substances into the environment.
The small percentage of e-waste that undergoes treatment typically gets shredded, while only a limited portion is manually disassembled, cleaned of hazardous substances, broken down mechanically and sorted into different fractions. Such manual disassembly entails high costs and is not very effective. There have been virtually no sustainable value retention strategies to refurbish and recycle electronics that will enable an advanced circular economy.
In the iDEAR project — short for Intelligent Disassembly of Electronics for Remanufacturing and Recycling — research scientists at Fraunhofer IFF in Magdeburg are combining knowledge management, metrology, robotics and artificial intelligence into an intelligent system for automated and non-destructive disassembly processes to establish a certifiable, closed-loop waste management system.
“We intend to revolutionise the disassembly of e-waste. Current solutions require substantial engineering and are limited to a particular product group,” said Dr. José Saenz, manager of the Assistive, Service and Industrial Robots Group at Fraunhofer IFF. “In the iDEAR project, we are pursuing a data-driven methodology so that the widest variety of products, from computers to microwaves to home appliances, can be disassembled in real time with little engineering.”
The research scientists are initially concentrating on the automated disassembly of computers. The system is intended to be upgradeable over time for any equipment, such as washing machines, for instance.
After the items have been delivered and separated, the initial processes of identification and condition analysis are initiated. Optical sensor systems and 3D cameras with AI-powered algorithms then scan labels with information on the manufacturer, product type and number, detect component types and locations, examine geometries and surfaces, assess the condition of fasteners, such as screws and rivets, and detect anomalies.
“Optical metrology helps scan labels and sort different parts, such as screws, for instance. Previously trained machine learning algorithms and AI interpret the image data and enable the identification and classification of materials, plastics and components in real time based on sensor and spectral data,” Saenz explained.
For instance, the AI detects whether a screw is concealed or rusted. All the data is stored in a digital disassembly twin and also provides information on whether a similar product has ever been disassembled.
In the next step, Saenz and his team define the disassembly sequence so that their software can determine whether to execute a complete disassembly or only focus on the recovery of specific, valuable components. Glued or otherwise mated components hinder non-destructive disassembly. Rusty or stripped screws or deformed components are not ideal for this either.
The disassembly process starts based on this high-level information. The robot receives a series of instructions and operations to complete, such as “Remove two screws on the left of the housing, open the housing” and so on. Whenever necessary, the machine changes the tool needed between the individual steps. The skills specified in the disassembly sequences include robot actions, such as screwing, lifting, cutting, extracting, localising, repositioning, releasing, moving levers, bending, breaking and cutting wires, which the disassembly robot can perform completely autonomously. In tests, the demonstrator even succeeded in removing a motherboard from a computer, a very complex task that requires a high level of precision.
The individual demonstrators for the subprocesses have been built. In the next step, the demonstrators will be interconnected. The goal is one demonstrator that integrates all of the technological developments and can execute all of the automated disassembly processes.
A slightly more detailed version of this article can be found online at: https://bit.ly/3FKXoUa.
Fraunhofer Institute for Factory Operation and Automation IFF www.iff.fraunhofer.de/en.html
For over a decade, the original SIMATIC S7-1200 controllers have been a cornerstone of many industrial applications. Now Siemens has reimagined this technology, delivering a new generation that brings enhanced motion control, improved cybersecurity, and expanded connectivity. The SIMATIC S7-1200 G2 series is not just an upgrade: it is a transformation of what a compact PLC can achieve.
The need for smarter, connected automation
Industrial processes are now more complex, and companies are under pressure to meet increasing demands for flexibility, efficiency, and compliance with stricter regulations. The SIMATIC S7-1200 G2 directly addresses these challenges, offering greater integration between operational technology (OT) and information technology (IT).
The evolution of automation is no longer just about controlling machinery — it is about harnessing data, improving decision making, and reducing downtime through predictive maintenance. With built-in Ethernet connectivity, the S7-1200 G2 can communicate seamlessly with higher-level IT systems, enabling cloud integration and advanced data analysis.
One of the key features of the S7-1200 G2 is its enhanced processing power. Siemens has significantly improved the controller’s speed and communication performance, allowing for real-time execution of automation tasks.
This increased performance is particularly crucial for industries such as food and beverage, pharmaceuticals, packaging, and materials handling, where precision and speed go hand in hand. The controller is engineered to provide high-speed motion control, supporting everything from single-axis control to multi-axis coordination and kinematics.
For manufacturers looking for scalable automation, the SIMATIC S7-1200 G2 delivers a modular architecture that allows for seamless expansion. Its smaller footprint — achieved through a more compact hardware design — makes it an attractive option for machine builders who need to optimise space without sacrificing performance.
As industrial automation systems become more interconnected, cybersecurity has moved to the forefront of concerns. Siemens has addressed this by embedding advanced security features directly into the S7-1200 G2.
The SIMATIC S7-1200 G2 controllers incorporate secure authentication mechanisms, ensuring that only authorised personnel can access and modify system settings. The integration with TIA Portal V18 and later versions enhances device authentication, making it more difficult for malicious actors to interfere with automation processes. Encryption protocols protect data transmission, while Siemens’ firmware security measures help safeguard against cyberattacks.
The inclusion of near-field communication (NFC) technology means that operators can now access real-time system status updates using a smartphone app, even when the PLC is powered down. This not only speeds up maintenance procedures but also minimises unplanned downtime, ensuring that production remains uninterrupted.
Flexible machine safety for evolving industrial needs
Safety in automation is non-negotiable, and the S7-1200 G2 takes this into account with an enhanced failsafe system. Siemens has improved its failsafe I/O portfolio, offering integrated safety functions that allow manufacturers to expand their safety measures as needed.
The controller’s failsafe signal boards make it easier to implement machine safety functions, reducing complexity and ensuring seamless compliance with regulations. By supporting PROFIsafe communication, the SIMATIC S7-1200 G2 series enables manufacturers to integrate safety functions into their existing networks without major reconfigurations. This flexibility allows businesses to customise their safety infrastructure based on specific operational requirements.
A future-proof investment for Australian industry
For Australian manufacturers, staying competitive means embracing automation technologies that reduce complexity while increasing performance. The ability to scale automation systems as business needs grow is critical, and Siemens has designed the G2 series with this in mind.
APS Industrial, as the distribution partner for Siemens in Australia, is ensuring that local industries can access this next-generation technology. Whether for OEMs, system integrators, or end-users, the SIMATIC S7-1200 G2 offers a compelling solution that balances performance, cost-efficiency, and ease of use.
As Australian industries continue to navigate the demands of digital transformation, investing in future-ready automation solutions like the S7-1200 G2 will be the key to staying ahead. For more information on how APS Industrial can support your automation needs, visit APS Industrial: https://apsindustrial.com.au/simatic-s7-1200-g2
How to ensure clarity of the signal from your encoder, and avoid excessive electrical noise.
Pryde Measurement
An encoder is an electromechanical transducer that converts mechanical rotary motion into digital signals for the control of machinery. The encoder produces a square wave signal as the shaft rotates. Speed, position, servo feedback, etc., can be determined through proper processing of this signal. As the electrical signal leaves the encoder, it is free of electrical noise. However, by the time the signal reaches its intended counter, PLC, etc., it may be degraded and may not be clean enough for the system to work properly.
To ensure clarity of the signal from your encoder, and avoid excessive electrical noise, there are several options and installation considerations to take into account.
The encoder’s cables are an important consideration, with cable length, termination and connections all playing a part in keeping a signal ‘clean’. This article will focus on strategies to reduce noise and signal distortion, to ensure the signal from your encoder remains clean and uncorrupted.
A common cause of signal degradation is electrical noise. The longer the cable run, the more induced noise the cable picks up. If this noise becomes excessive, miscounts will occur. Electrical noise causes miscounting because the receiving device cannot tell if an input signal is a valid encoder signal or noise.
Normally there is sufficient input signal conditioning, or filtering, to take care of this problem. However, filtering at the input of the receiving device will reduce the speed at which the system can operate. Years ago, most counters had high frequency limitations of between 5 and 20 kHz. Speed is now the name of the game, and these frequency limitations are simply not acceptable in today’s production environments. Electrical noise generated by AC power, electric motors, fluorescent lighting, relays, and many other sources can cause a plethora of problems in electrical systems. For the encoder in your system these problems can range from simple miscounting to a complete servo system lockup.
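To put those counter frequency limits in context, the pulse rate an incremental encoder produces follows directly from shaft speed and resolution. This is the standard relationship; the 1024-line, 1800 RPM figures below are chosen purely for illustration and do not come from the article.

```python
def encoder_frequency_hz(rpm: float, ppr: float) -> float:
    """Output frequency of one incremental encoder channel.

    rpm: shaft speed in revolutions per minute
    ppr: pulses per revolution (line count)
    """
    return (rpm / 60.0) * ppr

# A hypothetical 1024 PPR encoder on a shaft turning at 1800 RPM:
f = encoder_frequency_hz(1800, 1024)
print(f"{f / 1000:.1f} kHz")  # 30.7 kHz -- well beyond a 20 kHz counter limit
```

Even this modest example exceeds the 5–20 kHz limits of older counters, which is why heavy input filtering is no longer an acceptable fix for noise.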
Electrical noise typically enters a system as one of two types: radiated and conducted. Radiated noise propagates through the air, while conducted noise finds an electrical path onto the encoder cables from ground loops, power supplies, or other equipment connected to the system.
One method to alleviate the problem of electrical noise is the use of what are called differential signals. With differential signals, the output from the encoder is transmitted as two signals that are exactly 180° out of phase with each other. These are also called complementary signals, because one signal is the complement, or mirror image, of the other. As long as the two signal conductors are next to each other, any noise picked up by the cable will have equal and in-phase components on each conductor. Using differential input circuitry, the input will recognise only the difference between the signals: as one signal line is in a high, or logic 1, state, the complement is at a low, or logic 0, state. The differential input circuitry will accept this as a legitimate signal, and the in-phase noise products are simply ignored.
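The cancellation described above can be sketched numerically. In this toy model (the 5 V logic levels and ±3 V noise band are illustrative, not from the article), the same common-mode spike lands on both conductors, so it vanishes in the difference the line receiver evaluates:

```python
import random

def differential_receive(a: float, a_bar: float) -> int:
    """A line receiver looks only at the difference A - /A, not the absolute levels."""
    return 1 if (a - a_bar) > 0.0 else 0

signal = [1, 1, 0, 0, 1, 0, 1, 1]  # logic states driven onto channel A
received = []
for bit in signal:
    noise = random.uniform(-3.0, 3.0)      # common-mode spike hits both wires
    a = (5.0 if bit else 0.0) + noise      # true signal plus noise
    a_bar = (0.0 if bit else 5.0) + noise  # complement carries the same noise
    received.append(differential_receive(a, a_bar))

print(received == signal)  # True -- the noise cancels in the difference
```

Because A − /A is always ±5 V regardless of the added noise, every bit is recovered correctly; a single-ended input comparing A against a fixed threshold would miscount on the larger spikes.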
To use this type of noise immunity, the encoder must have what we refer to as a line driver output circuit. However, having the line driver output circuit is only half of the equation. Transmitting the signal in differential form is not enough; it must also be received in differential form. To accomplish this, the receiving device must also have a differential input circuit, or what is commonly called a line receiver input.
Many people believe that by specifying the differential output on the encoder, their noise problems will simply go away. However, without the proper line receiver input circuitry, it is a waste of money, and may even be worse from a noise standpoint. If the differential output of the encoder is not properly terminated, ringing and other spurious oscillations will appear on the signal lines.
Differential output circuitry on most encoders operates over a supply voltage range of 5–28 VDC. The older standard for differential signals (also known as RS-422) called for 5 V operation. Raising the voltage in the system results in a much better signal-to-noise ratio. In a 5 V system with 3 V spikes, the spikes are nearly as great as the desired signal amplitude. In the same set-up with the voltage increased to 24 V, for example, the same 3 V spikes can easily be ignored. However, it is important to remember that the input circuitry must also be able to handle this higher voltage.
Cable lengths
All cables have small amounts of capacitance between adjacent conductors. This capacitance is a direct function of the cable’s length, and tends to round off the leading edge of the square wave signal, increasing rise times. It can also distort the signal to the extent of causing errors in the system.
Signal distortion is not usually significant for lengths less than 9 m (or 1000 pF). To minimise the distortion, use low capacitance cable (less than 100 pF per metre), in the shortest length possible for the application. To minimise distortion for cable lengths in excess of 10 m, use differential line driver outputs, along with differential type receiver circuitry.
Also, a low capacitance twisted-shielded pair cable should be used whenever using differential signals. For high frequency applications (>200 kHz), this type of cable may be needed for all lengths.
Cable termination
Proper cable termination is vital with differential signals. With an unterminated configuration, signal reflections can occur, resulting in severely distorted waveforms. If signal distortion occurs, try parallel termination, which involves placing a resistor across the differential lines at the receiver end of the line. The parallel termination resistor value (RT) should match the characteristic impedance (Z0) of the cable, typically 70–150 Ω. This permits higher frequencies to be transmitted without significant distortion.
It is usually better to select a value for RT that is slightly larger (up to 10% larger) than Z0, as over-termination tends to improve signal quality better than under-termination. Unfortunately, low valued resistors can increase the power dissipated by the line driver, and reduce output signal swing. In this case, a capacitor should be placed in series with the resistor. The capacitor value should be equal to the round trip delay of the cable divided by the cable's Z0. Round trip delay is equal to two times the cable length multiplied by 5.6 ns/m.
CT ≤ Round Trip Delay / Z0
Example of capacitance calculation
Cable length = 30 m
Propagation delay = 5.6 ns/m
Z0 = 120 Ω
CT ≤ (30 m x 2 x 5.6 ns/m) / 120 Ω
CT ≤ 2,800 pF
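The worked example above can be cross-checked in a few lines of Python. The 5.6 ns/m propagation delay is the figure used in this example; substitute the value from your cable's datasheet.

```python
def termination_capacitor(length_m, z0_ohms, prop_delay_ns_per_m=5.6):
    """Upper bound for the series termination capacitor CT, in farads.

    CT <= round trip delay / Z0, where the round trip delay is
    2 x cable length x propagation delay (5.6 ns/m in this example).
    """
    round_trip_delay_s = 2 * length_m * prop_delay_ns_per_m * 1e-9
    return round_trip_delay_s / z0_ohms

# Worked example from the text: 30 m cable, Z0 = 120 ohms
ct = termination_capacitor(30, 120)
print(f"CT <= {ct * 1e12:.0f} pF")  # CT <= 2800 pF
```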
Note that the RC time constant of this type of termination can reduce the system frequency response.
A parallel termination resistor value larger than listed above can often provide adequate reduction of signal reflections, and still maintain adequate frequency response,
and low power dissipation. Experimentation is often required for each application consisting of long cable runs and high frequencies.
Cable connection
It is important to connect cable shields to ground on the instrument end (counter, PLC etc.). Always properly ground the motor/machine on which the encoder is mounted. Also, ground the encoder case following these recommendations:
1. DO NOT ground the encoder case through both the motor/machine and the cable wiring.
2. DO NOT allow the encoder cable wiring to ground the motor/machine exclusively. High motor/machine ground currents could flow through encoder wiring, potentially damaging the encoder and associated equipment.
SUMMARY OF METHODS TO REDUCE NOISE
There are several methods that can reduce noise in an encoder’s electrical signal:
1. Route power and signal lines separately.
2. Twist and shield signal lines, and place signal lines at least 30 cm from other signal lines and from power leads.
3. Maintain signal wire continuity from the encoder to the controller/counter (i.e. avoid junctions or splices).
4. Provide clean regulated power to the encoder and associated equipment (±2%).
5. Ensure all equipment (motors, drives, shafts, etc.) is properly grounded.
6. Connect the encoder cable shield to ground at the controller/counter end, leaving the end near the encoder disconnected.
7. If possible, use differential line driver signal outputs with high-quality twisted, shielded pair cable. The complementary signals greatly reduce common mode noise levels, as well as signal distortion resulting from long cable lengths.
If you follow the recommendations given here, you will see a reduction in noise and signal distortion from your encoder.
Senseca Italy (formerly Delta Ohm) has released the Pro-Line series of handheld instrumentation covering a range of measurement parameters including temperature, pressure, humidity and light.
Several configurations are available including single, dual and 3-input versions featuring data logging options, large integral memory, rechargeable lithium battery, USB charging and downloading, and a rugged waterproof housing with a backlit display.
Temperature models allow the full range of thermocouple and platinum resistance probes (PT100) with standard or custom probes available.
Pressure probes include relative, absolute, differential and barometric types. Light probes include illuminance, irradiance (RAD and UVA), and irradiance in the spectral band of blue light.
W&B Instruments Pty Ltd www.wandbinstruments.com.au
The Pilz IO-Link Safety Master PDP 67 IOLS enables point-to-point communication up to field level and can be a Profinet/Profibus slave device. Users can connect IO-Link sensors and classic safety sensors alongside the IO-Link Safety devices.
The product features four 2FDIO ports, giving eight failsafe digital inputs and outputs (each port configurable as two inputs or two outputs), and four Type A IO-Link Safety ports. Sensor and actuator connections are 5-pin M12 ports; 25-byte FS input and 25-byte FS output data is supported for each port (IOL-S Spec).
Protection class is IP67/IP69K, and the supported ambient operating temperature range is -30 to +70°C, with an operating voltage of 24 VDC.
The product is designed to provide safe point-to-point communication from the controller to the sensor in the field, according to IEC 61139-2, and supports a maximum safety level up to PL e (EN ISO 13849-1) or SIL 3 (IEC 61508/62061).
Pilz Australia Industrial Automation LP www.pilz.com.au
Interworld Electronics has announced the Aplex ARCHMI-S-812B, a 12.1″ (800 x 600 resolution) industrial touch screen panel PC, with a 1024 x 768 resolution option also available. It features a cost-effective, yet still powerful Intel Celeron J6412 Processor with up to 32 GB of DDR4 RAM, and supports Windows 10/11 and Linux.
The IP66-certified aluminium front panel provides protection against water, dust, and other solid foreign objects that might cause damage to the system. Along with a rugged fanless aluminium die-cast chassis, and a compact profile, it offers 24/7 reliability, is easy to clean, reduces maintenance cost, and provides a long-lasting rugged solution.
The ARCHMI-S-812B features a 12.1” TFT-LCD that supports resistive touch or projected capacitive touch. Optional auto dimming, 1000 nits high brightness, optical bonding, and AR coating make it suitable for a variety of environments.
The ARCHMI-S-812B includes two USB 3.2 ports, two USB 2.0 ports, an RS-232 port, an RS-232/422/485 port, 1 GbE and 2.5 GbE LAN ports, as well as a DisplayPort output that enables users to connect an additional monitor when required. It also supports expansion via a full-size Mini-PCIe slot and an M.2 slot.
The ARCHMI-S-812B has a wide operating temperature range of -20 to +60°C and a wide power input range of 9-36 VDC as standard, making it a suitable solution for harsh environments with extreme temperatures. It can also be panel- or VESA-mounted, allowing the system to be ergonomically positioned for operator convenience.
Interworld Electronics and Computer Industries www.ieci.com.au
The cyber threat landscape for the Australian industrial sector is escalating, with ransomware, state-sponsored cyber activity, and remote access vulnerabilities driving security concerns.
Critical infrastructure operators in energy, water, manufacturing, and oil and gas face growing pressure from cybercriminals and nation-state groups exploiting exposed operational technology (OT) systems.
A growing threat
According to the 2025 Dragos OT Cybersecurity Report, threats to OT environments continue to intensify. Ransomware attacks and state-sponsored campaigns are surging, leaving industrial operators facing mounting challenges where inaction is not an option.
Industrial organisations are increasingly targeted by cybercriminals and state-backed adversaries seeking to disrupt operations, steal data and even cause physical damage. Two new OT-specific cyber threat groups, GRAPHITE and BAUXITE, have emerged this year. GRAPHITE focuses on oil and gas and logistics, while BAUXITE targets Australian industries, including water, energy and chemical manufacturing. These groups employ phishing campaigns, exploit known vulnerabilities, and deploy malware tailored for industrial environments.
Ransomware has resurged dramatically, with an 87% increase in attacks over the past year. Manufacturing accounts for over 50% of victims globally, and 69% of all ransomware attacks targeted 1171 entities across 26 manufacturing subsectors. Given the critical role of these industries, Australia must prepare for a worsening threat landscape.
The vulnerability problem
Despite the growing risks, many organisations still lack foundational cybersecurity measures. Insecure remote access, poor network segmentation and inadequate OT visibility remain major concerns. The report found that 65% of sites surveyed had insecure remote access conditions, with exposed default credentials and unpatched VPNs among the most common vulnerabilities. Additionally, 22% of identified vulnerabilities are perimeter-facing, making it alarmingly easy for attackers to infiltrate OT networks.
Many organisations still operate legacy systems that lack modern monitoring tools, and flat network structures allow attackers to move freely between IT and OT environments. These gaps underscore the urgent need for action.
We should also remember that a cyber attack on critical infrastructure doesn’t just affect one organisation; it can disrupt entire communities. Energy grids, water supplies and transportation systems are all at risk. For example, malware like FrostyGoop caused heating outages for over 600 buildings in Ukraine during sub-zero temperatures. While Australia has not yet experienced such large-scale incidents, the risk is real. The increasing reliance on industrial control systems makes us vulnerable to similar scenarios.
Strengthening cyber resilience
The best defence against this evolving threat is a proactive approach, and Australian organisations can take immediate steps to bolster security:
• Invest in incident response readiness: Developing and regularly testing OT-specific incident response plans is critical. Cybersecurity teams and OT engineers should collaborate to recognise and respond to realistic and relevant threats effectively.
• Build network resilience: Moving away from flat network designs and implementing network segmentation with properly managed firewalls can prevent attackers from moving laterally across IT and OT environments.
• Enhance OT visibility: Many organisations fail to monitor their OT environments in real time. Deploying OT-specific monitoring and threat detection tools can help identify malicious activity before it escalates. Additionally, relying solely on reactive defences is no longer sufficient. Proactively hunting for threats allows organisations to detect adversaries before they can cause significant damage.
• Secure remote access: Implement multi-factor authentication (MFA), patch vulnerabilities, and actively monitor remote access points. Limit access only to essential personnel and enforce stringent security protocols for contractors.
• Prioritise vulnerability management: Not all vulnerabilities are equally dangerous, and many are irrelevant in an OT context. Adopt a ‘Now, Next, Never’ framework to prioritise remediation, focusing on the most immediate operational risks.
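As a purely illustrative sketch of such a triage (the field names and rules below are hypothetical, not taken from the Dragos report), a 'Now, Next, Never' bucketing might look like this:

```python
def triage(vuln):
    """Toy 'Now, Next, Never' triage of one OT vulnerability record.

    The fields are hypothetical:
      perimeter_facing -- reachable from outside the OT network
      exploited        -- known active exploitation in the wild
      ot_relevant      -- affects a function actually used in this OT context
    """
    if vuln["perimeter_facing"] and vuln["exploited"]:
        return "Now"    # immediate operational risk: remediate first
    if vuln["ot_relevant"]:
        return "Next"   # real but deferrable: schedule for the next window
    return "Never"      # not meaningful in this OT context

print(triage({"perimeter_facing": True, "exploited": True, "ot_relevant": True}))  # Now
```

In practice the inputs would come from an OT asset inventory and threat intelligence feeds; the point is simply that remediation effort is ranked by operational risk, not by raw vulnerability count.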
Turning the corner
Despite the concerning trends outlined in the report, there is room for optimism. Industrial organisations implementing proactive cybersecurity measures are already seeing improvement. Stronger segmentation, improved visibility, and robust incident-response capabilities make it harder for adversaries to operate undetected.
While ransomware attacks in Australia remain less frequent than in North America and Europe, accounting for 26 incidents or 1.5% of global attacks, this is largely proportional to population size. But we cannot afford complacency. Now is the time for organisations to prioritise OT security. By taking immediate action, Australia’s industrial sector can move toward a safer and more resilient future.
Hayley Turner serves as the Area Vice President APAC at Dragos, overseeing the company’s strategy and growth in the region. She began her career with the Australian Government before moving to London to join a cybersecurity company. Hayley is passionate about protecting critical infrastructure and addressing the cybersecurity challenges that affect the operational networks supporting them.
The ROD 300 series scanners are designed to reliably detect contours even during fast production and logistics processes. With high scanning rates and angular resolution, the laser scanners of the ROD 500 series are more suited for navigation tasks. They also offer integrated window monitoring, detecting if the optics window becomes dirty, enabling predictive maintenance.
The ROD 300/500 series scan at a frequency of up to 80 Hz so that moving objects can be detected and the data quality remains optimal even at high speeds. With their high angular resolution of 0.025° at 10 Hz, the laser scanners determine the contour of the parts even with different high-gloss or matt surfaces. Protection class IP67, an aluminium base and a built-in laser diode make the ROD 300/500 laser scanners resistant to external influences. The sensors work in temperature ranges from -30 to +60°C, making them suitable for applications in both the deep-freeze sector and for high-temperature requirements, such as in battery production.
For driverless transport systems, the ROD 500 series can create a very precise map of the environment, such as a material storage area, and enable collision-free AGV navigation. With dimensions of around 80 x 80 x 80 mm, the sensors can be integrated into the small installation spaces of mobile vehicles.
Leuze electronic Pty Ltd www.leuze.com.au
Joe Reckamp, Analytics Engineering Group Manager, Seeq Corporation
By combining retrospective data analysis with predictive tools in advanced analytics platforms, process manufacturers can predict failures, enhance productivity, and establish optimal maintenance schedules.
Process engineering teams have in the past relied on stored data to improve operational efficiency, through preventive maintenance, failure mitigation strategies and process optimisation. These efforts emphasised monitoring and were built around available operational data — often recorded manually — to identify problems.
As time passed, these explorations progressively began using data stored in process historians and other databases to refine insights, leveraging diagnostic analytics to investigate issues and anomalies.
However, these procedures are reactive in nature, and the aim today is to move more
towards proactive approaches that leverage historical data and context to drive more accurate process improvement decisions. Modern advanced analytics platforms are empowering manufacturers to implement proactive protocols that anticipate and address potential issues before they become failures, providing improved performance, enhanced operational efficiency, and increased uptime.
Today, advanced analytics can guide organisations through problem-solving journeys by examining historical, current and predicted time series data, which unveils insights that drive improved decision-making.
The word ‘analytics’ is typically associated with IT software products, platforms and the cloud. We therefore need to qualify what we mean when we use the term.
For example, advanced analytics describes the use of statistics, machine learning and artificial intelligence in data analysis to glean insights. Other modifiers can be used to differentiate the analytics type based on utility and complexity.
‘Diagnostic analytics’, for example, describes the use of information from past events to respond to a given situation, investigating a raw set of historical data and applying statistical analysis to identify patterns and produce insights. In this case however, there is an unavoidable lag between the event or issue under analysis and the action taken to improve future performance, eliminating the ability to predict events before the fact.
Another type, ‘descriptive analytics’ also uses past data, summarising events via reports that can be more easily interpreted and learnt from. In this practice, the same lag occurs and it can be difficult to develop clear-cut answers on how and when to make decisions that will prevent future performance issues.
In contrast, ‘predictive analytics’ aims to deliver future predictions by using historical data to develop and train models that project future data, enabling teams to foresee plausible occurrences and informing them of actions they should take to drive desired outcomes.
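As a minimal sketch of the predictive idea, one can fit a trend to historical readings and extrapolate it forward. The least-squares line below is a deliberate simplification of what an advanced analytics platform does, and the data is invented:

```python
def fit_line(ts, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*t + b."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / sum((t - mt) ** 2 for t in ts)
    return slope, my - slope * mt

# Hypothetical sensor readings drifting upward over five hours
hours = [0, 1, 2, 3, 4]
temps = [50.0, 50.5, 51.0, 51.5, 52.0]
a, b = fit_line(hours, temps)
print(a * 10 + b)  # projected reading at hour 10: 55.0
```

Real predictive models layer cleansing, contextualisation and far richer algorithms on top, but the principle is the same: historical data trains a model that projects future data.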
Understanding the historical and current behaviour of an industrial system is critical before attempting to predict future performance. This requires engineers to have access to real-time data from all relevant processes and databases. However,
traditional data silos and disparate systems often hinder this access, creating challenges such as managing multiple logins and navigating different interfaces.
Advanced analytics platforms address these and other challenges by centralising data from various sources into a cloud-based platform. This eliminates the complexities of data connectivity and provides subject matter experts with streamlined tools for data cleansing and contextualisation. By unifying data in this way, experts can efficiently extract meaningful insights and build a comprehensive understanding of operational performance.
With live data connections established, organisations can leverage advanced analytics to generate valuable predictions, including those related to equipment maintenance. Predictive maintenance, a key application of this technology, aims to anticipate equipment failures or maintenance needs before they occur. By analysing historical data and identifying
patterns, these models help improve quality, reliability and uptime by enabling proactive maintenance strategies.
The predictive capabilities of advanced analytics platforms stem from sophisticated algorithms that map organisational workflows and procedures to relevant laws, theorems or design principles. This mapping process enables creation of key performance indicators and enables accurate data extrapolation, extending the timeframe for insights and analysis. The following examples demonstrate how advanced analytics has empowered process manufacturers to optimise operational efficiency and transition from reactive to predictive maintenance approaches.
Predictive maintenance strategies are frequently applied to the detection of compressor performance issues. Compressor failures can cause sudden and catastrophic shutdown or environmental safety concerns.
In one large manufacturing facility, data scientists leveraged machine learning algorithms to drill down to the root cause of compressor failure. By superimposing the algorithm on live data to identify signs of degrading performance, they were able to utilise their findings to create maintenance notifications ahead of expected failures, which provided insights on a visual interface for operators and process engineers.
These types of tools empower engineers to identify leading and lagging indicators of degrading compressor health and to continuously monitor variables, helping teams proactively identify risks and prioritise maintenance activities.
Collaboration between teams — such as process, maintenance and reliability — can be strengthened by leveraging built-in tools within advanced analytics platforms for sharing analyses and insights in easily digestible dashboards and reports.
One petrochemical and refining company was experiencing significant reactor shutdowns caused by a failing critical feed gas compressor on a polyethylene line. These failures had a high impact on production, preventing any way to immediately restart the process. Such unplanned reactor shutdowns were causing a minimum of four hours of downtime,
costing the plant upward of US$200,000 with every incident. Previously, attempting to prevent such occurrences, the compressors were maintained on a preventive maintenance schedule, but this did not entirely prevent unplanned shutdowns.
Previous manual attempts to investigate and identify the safety interlock that prompted the shutdown failed to yield a root cause. A process engineer at the refinery therefore took an alternative approach, using an advanced analytics platform to rapidly locate the five most recent shutdowns and subsequent restarts — planned and unplanned — from decades of historical process data. With time-dissection tools, they focused on shutdown and startup time periods and overlaid all events, presenting abnormalities in the discharge pressure profile of the two most recent startups (Figure 1).
Upon further investigation, the engineer also identified early warning signs on the motor amperage signal. Without a method to view the startups back-to-back, the motor degradation had gone unnoticed by operations.
As a result of this root cause analysis, the process engineer implemented a monitoring solution to identify and flag future motor degradation to prevent similar unplanned shutdowns. When an out-of-tolerance value appears, the compressor motor is now immediately added to the maintenance work list for the next planned shutdown — a proactive maintenance approach that is expected to eliminate unplanned shutdowns due to this failure mode.
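A monitoring rule of this kind reduces, at its simplest, to a tolerance-band check. The sketch below, with invented amperage limits, illustrates the idea rather than the refinery's actual solution:

```python
def out_of_tolerance(value, low, high):
    """True when a reading falls outside its tolerance band."""
    return not (low <= value <= high)

# Hypothetical motor-amperage readings checked against invented limits;
# out-of-band readings would be queued for the next planned shutdown
work_list = [amps for amps in (41.0, 42.5, 47.9) if out_of_tolerance(amps, 38.0, 45.0)]
print(work_list)  # [47.9]
```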
Valves play critical roles in almost all process plants. Keeping valves in top condition is essential for maintaining efficient operations, but condition monitoring can be tricky based solely on observation. Advanced analytics solutions often provide low-effort, high-return opportunities to monitor valve conditions and reduce unexpected failures, simultaneously protecting adjacent process equipment and devices.
An oil and gas producer operating more than 50 well pads — each with a gathering system containing a critical flow control valve — was experiencing frequent valve failures. Each failure occurrence rendered the pad inoperable for days until repairs could be made. These failures were typically caused by sand erosion, and the producer had no methodology in place to determine early warning signs other than exhaustive manual inspections, which required complete shutdown and became increasingly cost-prohibitive as the asset base grew.
To solve this problem using advanced analytics, subject matter experts leveraged both real-time and historical process data to calculate a metric indicating progressive erosion in the valve seat, and were able to establish an indicator to predict future failures. The team leveraged first principles to produce this metric, which was used as the basis for a predictive model to approximate time to failure.
This analysis was scaled to all well pads by leveraging historian hierarchies imported into the analytics platform through its native connectors. By deploying an advanced analytics platform to monitor process conditions, the oil and gas producer was now able to predict erosion progression and approximate the time to valve failure, enabling the maintenance team to prioritise service, significantly reducing downtime and operating expenditures.
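The time-to-failure approximation described above can be sketched as a linear extrapolation of an erosion metric toward a failure threshold. The metric scale and numbers below are illustrative, not the producer's actual model:

```python
def days_to_threshold(current, rate_per_day, threshold):
    """Days until a monotonically increasing metric reaches its threshold.

    Assumes roughly linear progression -- a first-order approximation.
    """
    if rate_per_day <= 0:
        return float("inf")  # no measurable progression
    return (threshold - current) / rate_per_day

# Hypothetical erosion index (0-100): currently 40, growing 2 points/day,
# with service required before it reaches 100
print(days_to_threshold(40.0, 2.0, 100.0))  # 30.0
```

With such an estimate computed per valve across the historian hierarchy, the maintenance team can rank pads by remaining life and schedule service before failure.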
The future of process manufacturing hinges on in-depth knowledge of past equipment behaviour. Advanced analytics platforms combine retrospective and predictive analytics, empowering process experts and data analysts to efficiently construct robust models, forecasting maintenance needs and illuminating paths to mitigate risk.
Armed with this potent digital arsenal, process manufacturers build better models, providing plant insights and projecting issues prior to failure so personnel can optimise maintenance schedules and prevent costly downtime.
The Leuze Series 33C and 35C each include stainless steel sensors designed for packaging processes in hygienic environments. They include retro-reflective photoelectric sensors for glass and PET detection, sensors with background suppression for detecting small objects, dynamic reference sensors, and through-beam photoelectric sensors for radiation through films.
Power PinPoint LED technology means the sensors can be aligned and commissioned quickly and easily. This is achieved due to a bright, round and homogeneous light spot. With protection classes such as IP67, IP68 and IP69K as well as ECOLAB, CleanProof+ and Diversey certifications, the devices also work in wet areas and during intensive cleaning processes.
Both sensor types are equipped with an IO-Link interface, which enables quick and easy parameterisation and access to diagnostic data.
Leuze electronic Pty Ltd www.leuze.com.au
Azbil MagneW PLUS+ electromagnetic flowmeters are designed to measure many types of liquid, including water, chemicals, slurries and corrosive liquids. The standard model has a mirror-smooth PFA liner for high adhesion resistance that enables durability for long-term use. It is available in an integrated type and a remote type and can be used in a wide range of settings, including explosion-proof and outdoor environments.
Suppression of flow noise is 3.5 times that of the Azbil conventional model for improved stability in the presence of noise. The MagneW PLUS+ is designed to achieve more reliable measurement in individual applications through features such as an excitation frequency change function, an optional auto spike cut-off setting, travel averaging and manual zero adjustment.
In addition to the serial number and production date on the product tag plate at shipment, the HMI enables checking in maintenance mode. Statuses that may be difficult to read on the product tag plate are backed up as electronic data.
A high-speed response function with a damping time constant of 0.1 s can be selected as an option. This enables compatibility with high-speed batch applications, allowing use with a pulse frequency of up to 3000 Hz.
Communication with CommPad is supported as in previous models. Communication superimposed on the analog signal can be used by selecting the HART communicator function.
AMS Instrumentation & Calibration Pty Ltd www.ams-ic.com.au
HAWK Measurement Systems has launched the FPM Flotation PulpMaster, a continuous level measurement system designed to meet the demands of flotation cell pulp and liquid level management. Built with pulse-guided RF float level technology, the FPM is engineered to provide higher precision, reliability and lowmaintenance operation.
The HAWK FPM offers advanced accuracy and signal recognition algorithms as well as a project-specific high-density polyethylene (HDPE) float design with a low coefficient of friction, specifically designed to prevent build-up and jamming in harsh pulp environments. The float design helps ensure continuous, accurate level measurement, eliminating the risks of pulp overspill or froth undercarriage. The impedance-matched pulsed RF technology is said to respond solely to the actual pulp and liquid level, with zero interference from froth levels, allowing highly accurate level detection of the pulp level for optimal foam overflow performance.
Hawk Measurement Systems Pty Ltd www.hawkmeasure.com
Emerson’s Rosemount 625IR Fixed Gas Detector is designed to provide fast gas detection in all plant environments using optical absorption detection technology.
End users need fast detection of hydrocarbon gases without false alarms, so gas detectors must be able to operate in hazardous environments and all weather conditions. Emerson says the Rosemount 625IR meets these needs by using specialised, solid-state dual IR sources and dual IR receivers, which combine to provide continual cross checks and internal adjustments that maintain the factory calibration. Heaters on the optical surfaces and a range of optical protection accessories provide continual detection without unexpected downtime.
The detector is simple to install and maintain because it remains factory-calibrated for life due to its advanced drift monitoring technology and health monitoring diagnostics. A range of accessories are available to make function testing fast and easy using either test gas or a gas-free test filter, with the latter option reducing maintenance time and cost.
The product has a standard operating temperature range of -40 to +75°C, and its ingress protection levels are IP66/67 and NEMA 4X. It is rated for use in hazardous areas and is also certified for marine applications.
The detector is SIL 2 rated in accordance with IEC 61508, and SIL 3 rated when used in a 1oo2 voting configuration. It has a five-year proof test interval, giving end users confidence to reduce the number of scheduled maintenance visits.
Emerson
www.emerson.com/au/automation
The USi-Industry from Pepperl+Fuchs is designed to provide reliable object detection in challenging environments and outdoor areas, regardless of material, colour or surface finish. Designed for versatility, the product features miniature ultrasonic transducers that are separate from the evaluation unit, allowing for installation in tight spaces. Its IP69-rated transducers generate an elliptical, three-dimensional sound field with opening angles of ±17° and ±5° for high detection performance.
With two fully independent sensor channels, the USi-Industry offers increased flexibility, since each channel supports two distinct parameter sets that can be switched via a configurable digital input. A comprehensive teach-in mode allows not only the programming of switch points but also the learning of the entire detection environment. Additionally, three selectable operating modes and adjustable sensor cycle times enhance adaptability to various applications.
Pepperl+Fuchs (Aust) Pty Ltd www.pepperl-fuchs.com
The claim that thermal mass flowmetering is independent of pressure is only half of the truth.
A frequently asked question from users is: “When is it necessary to use pressure compensation for a thermal mass flowmeter?” If you have spent any time searching the internet for thermal flowmeter suppliers, you may have noticed that many of them promote the thermal mass flow measuring principle as a “pressure- and temperature-independent” principle. Most of the time there is no explanation as to
why this is the case, but if you search long enough you may come across such an explanation as offered by Sage Metering, Inc.:
“A thermal mass flow meter is a precision instrument that measures gas mass flow … These meters measure the heat transfer as the gas flows past a heated surface. The gas molecules create the heat transfer, the greater the number of gas molecules in contact with the heated surface the greater the heat transfer. Thus, this method of flow measurement is dependent only on the number of gas molecules and is independent of the gas pressure and gas temperature.”1
Of course, what this is saying is that because thermal mass flowmeters measure mass directly, instead of deriving the measurement from volume flow, there is consequently no need to compensate for pressure or temperature as would be the case with other gas flow measuring
principles like differential pressure, turbine, positive displacement and vortex shedding.
If you search longer, you might even come up with an explanation of why thermal mass flowmeters measure mass ‘directly’. For example:
“Thermal dispersion mass flowmeters measure the heat convectively transferred from the heated velocity sensor to the gas molecules passing through the viscous boundary layer surrounding the sensor’s heated cylindrical surface. Since the molecules bear the mass of the gas, thermal dispersion flowmeters directly measure mass flow rate.”2
Of course, this leaves out the whole thermodynamic explanation, but at least ties the heat transfer to the mass of molecules.
If you search even further, you may find a theoretical explanation like the following that proves the direct mass measurement using a theoretical explanation with formulas:
“The primary desired output variable is the total mass flow rate qm flowing through the conduit or flow body. qm depends on the product ρV, the fluid’s mass density times the point velocity, embodied in the Reynolds number Re = ρVD/µ. ρV is often called the ‘mass velocity’ and is the total mass flow rate per unit area (kg/(s·m²)).”3
The above explanation suggests that because one is actually solving for the product ρV, which is contained in the Reynolds number, by equating it to other known terms in an empirical correlation, density itself is not a required term: it has no independent influence on the solution for this product, the mass velocity being sought. Ergo, thermal mass measurement is not affected by density and, by consequence, is not affected by pressure and temperature.
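The quoted relationship can be written out explicitly. Assuming a circular conduit of diameter D (an assumption of this sketch, not stated in the quote), the mass velocity ρV, and with it the mass flow rate qm, can be isolated without density ever appearing on its own:

```latex
Re = \frac{\rho V D}{\mu}
\;\Longrightarrow\;
\rho V = \frac{Re\,\mu}{D},
\qquad
q_m = (\rho V)\,A = \frac{Re\,\mu}{D}\cdot\frac{\pi D^2}{4} = \frac{\pi\,\mu\,Re\,D}{4}
```

Density ρ and point velocity V only ever appear as the product ρV, which is why the measurement is said to be direct.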
However, when you continue searching the internet for thermal mass flowmeter suppliers you will come across statements such as, “Our meters are temperature-compensated”, or you might even happen upon some suppliers who offer a “pressure measurement” option. If you read further, you may find out that this pressure measurement option is also intended for compensating for changes in pressure. Now, you may already be asking yourself: “I thought that thermal mass flowmeters are pressure and temperature independent?”
Well, the truth is that this statement is only half of the truth. Thermal mass flowmeters are not dependent on density for the direct measurement of mass flow, for the reasons already stated above. However, they are greatly dependent on the fluid characteristics (thermal conductivity λ, specific heat capacity cp and dynamic viscosity μ), which are in turn dependent on the gas composition, and all of these are influenced by changes in pressure and temperature. As soon as the application conditions vary from the reference conditions (those laboratory conditions existing during calibration), then, without temperature and pressure compensation, there are additional measurement errors that must be considered. So, to say that thermal mass flowmeters do not require pressure and temperature compensation is simply not true. They do require it, and practically every thermal mass flowmeter manufacturer uses some form of compensation to do this.
Temperature typically has a much larger influence on the fluid characteristics than pressure does. Because the entire measuring principle is based upon temperature measurement, a dynamic correction for it is possible. Pressure usually has a lesser influence on the fluid characteristics, and typically a fixed pressure value is entered into the device during commissioning. If the process pressure changes, the fixed value in the device does not change with it. In effect, these devices are not corrected for changes in pressure.
Some devices allow for an input reading from a separately installed pressure transmitter. This enables dynamic compensation for changes in pressure. However, when would such compensation be necessary? Obviously, it requires a pressure measuring point, which may not always be available, and either a current input or bus communication. In the end, this could result in an increased measuring point price for the user, who is not always prepared to accept this, especially when there are uncertainties about the added benefits.
As stated, pressure typically has a lesser influence on gas characteristics than temperature. However, its influence is also gas dependent and can be greater depending on the type of gas. Generally, one can expect in air about ±0.25% o.r. of additional error for every bar of difference (in either direction) from the reference pressure (in this case, the static pressure entered into the device). On the other hand, certain gases like CO2 or O3 have a much larger dependency on pressure, and the additional error can therefore be much higher.
The extent of influence of gas characteristics on the mass flow accuracy depends on thermal conductivity (being the largest factor), followed by specific heat capacity and then viscosity. For gases such as H and He the total error can be as low as 0.3–0.4% per bar pressure change, but for gases like CO2 and O3, the error can be as much as 0.85% or 0.88% per bar pressure change respectively. Therefore, users might need to make more serious consideration of pressure compensation for these gases.
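As a rough illustration of the figures above, the additional error of an uncompensated meter can be estimated as a function of how far the process pressure drifts from the reference pressure entered at commissioning. The per-bar coefficients below come from the text; the linear model, the function name and the midpoint value for hydrogen/helium are simplifying assumptions of this sketch:

```python
# Rough estimate of the additional flow error (% of reading) for an
# uncompensated thermal mass flowmeter, assuming (simplification) the
# error grows linearly with the deviation from the reference pressure.

# Per-bar error coefficients (% o.r. per bar), taken from the article text.
ERROR_PER_BAR = {
    "air": 0.25,
    "H/He": 0.35,   # article quotes 0.3-0.4% per bar; midpoint assumed here
    "CO2": 0.85,
    "O3": 0.88,
}

def additional_error_pct(gas: str, p_process_bar: float, p_ref_bar: float) -> float:
    """Estimated additional error (% of reading) at the given process pressure."""
    return ERROR_PER_BAR[gas] * abs(p_process_bar - p_ref_bar)

# Example: a CO2 line running 2 bar above the commissioned reference pressure.
print(additional_error_pct("CO2", p_process_bar=6.0, p_ref_bar=4.0))  # prints 1.7
```

Even this crude estimate shows why, for a gas like CO2, a modest pressure swing can push the meter outside a tight accuracy budget, while the same swing in air may be tolerable.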
Of course, gas characteristics alone should not be the sole determinant when deciding for or against pressure compensation. Another important factor is the amount of pressure fluctuation in the application. If the pressure is deemed to be fairly constant, pressure compensation could be simply overkill. If, on the other hand, the pressure is known to be unstable and to fluctuate by relatively large amounts, then compensation might be beneficial, especially in the case of gases like CO2 whose characteristics are known to be more dependent upon pressure.
Lastly, perhaps the most important factor in deciding whether to use pressure compensation is a candid assessment of how important measuring accuracy and repeatability are for the application. Is a high level of measuring performance really required? Depending on the amount of pressure change, the corresponding measurement error might still be in an acceptable range for certain applications.
1. Whorff F 2023, Fundamentals of Thermal Mass Flow Measurement, Sage Metering, <https://sagemetering.com/back-to-basics/fundamentals-of-thermal-mass-flow-measurement/>
2. Olin J G 2008, A Standard for Users and Manufacturers of Thermal Dispersion Mass Flow Meters, Sierra Instruments, p. 13, <https://www.sierrainstruments.com/prnews/Thermal%20Mass%20Flow%20Measurement%20of%20Fluid%2010.15.08.pdf>
3. ibid., p. 17
Spanish poultry company DAGU had been searching for an operator-friendly and reliable marking system to work 15 hours per day, efficiently marking egg cartons with best-before dates and batch numbers.
DAGU, founded in 1980, is a poultry company based in Guadalajara, Spain. From its origins in the 1950s to the present day, the company has continuously evolved, and values excellence, service and food safety. Today, DAGU is part of the HEVO Group, which produces over 70 million dozen eggs annually and employs nearly 500 people, 40% of whom are women. The cooperative is deeply committed to fostering inclusion and proudly supports a diverse team. Luis Alberto Sanz, Production Manager at DAGU and a member of the association since 1990, describes the operation.
“Our cooperative’s facilities are state-of-the-art throughout the entire production process. We meticulously select raw materials, manufacture the best feed for our animals and produce eggs of the highest quality,” he said.
DAGU has maintained a leading position in the Spanish egg-laying poultry sector and is widely acknowledged as being at the forefront of the field in terms of innovation, outstanding quality, advanced equipment and systems, as well as meticulous attention to detail. A traceability system enables DAGU to keep permanent track of the farm of origin and the laying date of each egg. ‘Farm to fork’ traceability guarantees flawless control of quality and food safety throughout the entire production process.
DAGU had been searching for an operator-friendly and exceptionally reliable marking system to work 15 hours per day, efficiently marking 200,000 egg cartons per batch with best-before dates, farm information and batch numbers. Depending on the carton, different imprints were required. Another requirement was that all printers could be efficiently managed from a single PC via VNC (virtual network computing).
DAGU opted for an advanced solution from Leibinger’s official partner in Spain — Lusaro MarkColor, S.L. — and now has 16 Leibinger JET2 NEO printers installed, one at each exit of the egg graders.
“We greatly appreciate the high flexibility in printing different packaging formats and the simple, centralised operation of the printers,”
Sanz said. “Service, delivery of consumables and coordination of all requests are ideally covered by Lusaro MarkColor. The collaboration is excellent.”
DAGU designed the supports on which the electric actuator and the JET2 NEO are installed, which resulted in a highly flexible support for the different carton formats. A PC was placed on top of the sorting machine from which all the printers are managed via VNC. The project was initially commissioned in 2020 and DAGU now has 16 JET2 NEO printers installed on its Moba sorters.
The JET2 NEO is a high-performance CIJ solution for all standard coding and marking applications. Introduced in 2000, it is used worldwide to print data such as best-before dates, production dates and lot numbers on up to three lines, with a variety of options and inks available. The printer is designed to ensure fast, reliable performance combined with low energy and solvent consumption. It features Leibinger’s Sealtronic print head technology, which minimises downtime and maximises productivity. Sanz is delighted with the performance of the LEIBINGER printers.
“We would definitely recommend these high-quality LEIBINGER printers to other companies who have similarly high requirements in terms of quality, productivity and reliability,” he said.
Result Group www.resultgroup.com.au
Ella Averill-Russell, IICA Sydney Branch Manager
Australian utilities are facing increased pressure to modernise operations, reduce waste and meet sustainability targets. Digitalisation — the integration of advanced instrumentation, IoT sensors and smart valves — is driving this transformation. Vendors and plant engineers must work collaboratively to deploy cutting-edge technologies that enhance efficiency, improve system reliability and support electrification efforts.
For utilities to operate efficiently, vendors must provide tailored automation, sensor-driven insights and intelligent valve solutions that help engineers optimise processes. Smart meters, networked sensors and automated valves provide real-time data on electricity and water flow, identifying inefficiencies such as leaks, pressure fluctuations and energy wastage. Engineers rely on this data to implement predictive and condition-based maintenance strategies, reducing downtime and extending asset life.
Vendors offering IoT-enabled smart valves and actuators play a crucial role in modern utility infrastructure. These components automatically regulate flow, pressure and temperature in pipelines, preventing losses and ensuring precise control. Engineers use SCADA-integrated valve technology to monitor system performance and deploy AI-driven predictive maintenance tools, reducing operational costs and avoiding unplanned outages.
For example, Australian water utilities have deployed automated valve technology to instantly detect and isolate leaks, preventing the loss of millions of litres of water annually. Similarly, AI-enhanced monitoring in gas and power distribution networks has enabled engineers to pre-emptively identify failing components before costly breakdowns occur.
Automation, AI and electrification are at the core of utility transformation. Vendors supplying AI-powered automation platforms help plant engineers dynamically adjust grid operations and integrate renewable energy sources like solar and wind. Smart grid technologies balance energy supply and demand, preventing overproduction and ensuring adaptive power distribution. Some examples are:
• Water and wastewater treatment plants: AI-driven pump and valve control systems are improving energy efficiency, cutting power consumption by nearly 20%, and supporting Australia’s push towards electrification and carbon-neutral operations.
• Smart water network implementation: Australian vendors have equipped regional water utilities with IoT-enabled sensors and automated valves, enabling real-time leak detection and flow adjustments, leading to water savings exceeding 21 million litres annually.
• Predictive maintenance in energy grids: A leading Australian electricity provider integrated AI-based monitoring solutions from vendors, allowing plant engineers to track grid health, and by pre-emptively replacing transformers they prevented outages and increased infrastructure longevity.
• Hydrogen-ready gas infrastructure: Vendors are now supplying smart valves designed for hydrogen distribution, supporting Australia’s transition to green energy and low-emission gas networks.
Plant engineers need access to secure, interoperable and futureproof systems that align with sustainability and digital transformation goals. As vendors move from being equipment suppliers to technology partners, utilities increasingly rely on customised automation, data analytics and smart valve solutions to modernise operations.
Hydrogen-compatible smart valves, AI-powered predictive analytics and IoT-based monitoring systems are just a few of the technologies shaping the future of utilities. Vendors that offer integrated, plug-and-play solutions are enabling engineers to transition towards more sustainable, efficient and resilient infrastructure.
April
Robotics Summit & Expo 30 April – 1 May 2025
Boston Convention and Exhibition Center www.roboticssummit.com
May
IICA TÜV Functional Safety Engineer SIS Training — Sydney 6–9 May 2025
Pullman Sydney Hyde Park iica.org.au/Web/Web/Events/Event_Display.aspx?EventKey=TUVSYD25
Australian Manufacturing Week 6–9 May 2025
Melbourne Convention and Exhibition Centre australianmanufacturingweek.com.au/
Seeq Conneqt 2025 13–15 May 2025
Fontainebleau, Las Vegas USA www.seeq.com/resources/events/conneqt-2025/
IICA Technology Expo Perth 14 May 2025
Perth Convention and Exhibition Centre, Perth WA iica.org.au/Web/Web/Events/Event_Display.aspx?EventKey=IICAPRTH25
IICA TÜV Functional Safety Engineer SIS Training — Brisbane 19–22 May 2025
Albert Room, Sebel Adelaide iica.org.au/Web/Web/Events/Event_Display.aspx?EventKey=BNEMAR2025
Ozwater’25 20–22 May 2025
Adelaide Convention Centre www.ozwater.org
Global Resources Innovation Expo 20–22 May 2025
Brisbane Convention & Exhibition Centre www.grx.au
Workplace Health & Safety Show 21–22 May 2025
Melbourne Convention and Exhibition Centre whsshow.com.au/whats-on-melbourne
Engineers Australia RISK 2025 21–23 May 2025
Rydges Melbourne www.engineersaustralia.org.au/learning-and-events/conferences-and-major-events/risk
June
IICA TÜV Functional Safety Engineer SIS Training — Melbourne 3–6 June 2025
Novotel Melbourne Central iica.org.au/Web/Web/Events/Event_Display.aspx?EventKey=MEL2025TUV
IICA Technology Expo Wollongong 3 June 2025
Wests Illawarra, Unanderra NSW iica.org.au/Web/Web/Events/Event_Display.aspx?EventKey=IICAWOLL25
Microsoft Windows is undoubtedly the most widely used operating system (OS) within industry. Its market dominance is hardly surprising, given most PCs sold over the last few decades have come with Windows preinstalled. Software developers have also found the ubiquity of Windows to be highly attractive and have in turn created an enormous amount of software to run under it.
It’s in the controller, where real-time performance is essential, that Windows’ penetration has not been so deep. Hardware resources are much more limited in industrial controllers, as are CPU power and disk space. Windows is comparatively slow, very resource-intensive and requires a large footprint. It’s also proprietary, meaning it cannot be changed by users, who must pay to use it.
For these reasons, most vendors of industrial control systems have for many years opted to create their own dedicated hardware and runtime OSes. However, the issue with such proprietary systems is that they are closed off to all but one vendor: the only software available is that which is created by that hardware developer. Software choices are therefore very limited when compared to what’s available for commonly used operating systems like Windows. Users are also reluctant to commit to products that effectively lock them into a single supplier.
Industrial controller vendors have long been looking for an operating system that’s industrially robust, can run fast enough for real-time control and has a small enough footprint to run on the limited hardware available. Linux, with its free licensing, open-source structure and huge community of developers, is gaining traction among vendors of industrial controllers.
The Linux kernel, created by Linus Torvalds and released in 1991, is typically packaged as a Linux distribution, with libraries and other supporting software, so that it comes bundled as a complete operating system. Many such distributions already exist, including Debian, Ubuntu, and commercial offerings like Red Hat and ChromeOS. Linux also offers real-time kernel builds (via the PREEMPT_RT patch set), which is a must for industrial control.
The single biggest advantage of Linux is that it offers ‘containerisation’, through technologies such as Podman, LXC and the popular Docker. Containerisation is where application software is packaged and run in isolation, in its own self-sufficient ‘container’.
Containers use computing resources more efficiently because they utilise OS-level virtualisation, rather than virtualisation of the entire machine. This opens the possibility of multiple runtimes executing concurrently on a single controller — a highly desirable feature for those who wish to deploy modularised extensions.
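As an illustration of the pattern, a soft-PLC runtime could be packaged into a container image along these lines. Everything here is a hypothetical sketch: the base image choice, the `softplc-runtime` binary and its paths are placeholders, not a real vendor product or a definitive recipe:

```dockerfile
# Hypothetical container image for a soft-PLC runtime.
# Runtime binary, program file and paths are illustrative placeholders.
FROM debian:bookworm-slim

# Copy the (hypothetical) runtime and the compiled control program into the image
COPY softplc-runtime /opt/softplc/bin/softplc-runtime
COPY app/program.bin /opt/softplc/app/program.bin

# Fieldbus/IO traffic typically needs real-time scheduling privileges,
# which would be granted at run time (e.g. docker run --cap-add=SYS_NICE ...)
ENTRYPOINT ["/opt/softplc/bin/softplc-runtime", "--program", "/opt/softplc/app/program.bin"]
```

Several such containers, each holding an isolated runtime, could then execute side by side on one industrial controller, which is exactly the multiple-runtime scenario described above.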
The concept of containerisation can be extended further, into virtual control technology. Commonly referred to as a ‘virtual PLC’, one or more containers can run on a server, which can reside in the cloud (or elsewhere). Controller runtimes can execute in these containers to provide remote control to distributed I/O systems in the field. This can speed up program development and simplify maintenance.
One objection to Linux has been its GPL (General Public Licence), which some fear could expose user code to the public: a totally unacceptable situation for those who have invested many man-years in program development. Fortunately, the GPL’s copyleft obligations apply to the kernel itself, not to ordinary applications that merely run on it, which is where controller runtimes reside.
While Linux is still comparatively new to industrial control, its widespread use in educational institutions means the new generation of engineers could see new opportunities for it.
Harry Mulder is the Principal Automation Engineer at Beckhoff Automation. He has been involved in industrial automation for over 30 years and is fascinated by how new innovations keep affecting the direction of the industry. He really enjoys the practical element of his job, where he has a chance to get his hands dirty!
Westwick-Farrow Media
A.B.N. 22 152 305 336 www.wfmedia.com.au
Head Office Unit 5, 6-8 Byfield Street, North Ryde
Locked Bag 2226, North Ryde BC NSW 1670
AUSTRALIA
ph: +61 2 9168 2500
Editor Glenn Johnson pt@wfmedia.com.au
Managing Editor Carolyn Jackson
Publishing Director/MD Janice Williams
Art Director/Production Manager Linda Klobusiak
Art/Production Marija Tutkovska
Circulation
Alex Dalland circulation@wfmedia.com.au
Copy Control
Ashna Mehta copy@wfmedia.com.au
Advertising Sales
Sandra Romanin – 0414 558 464 sromanin@wfmedia.com.au
Tim Thompson – 0421 623 958 tthompson@wfmedia.com.au
If you have any queries regarding our privacy policy please email privacy@wfmedia.com.au