Bridging the Cloud-to-Things Continuum A collection of perspectives on fog computing Editor: John Koon Foreword: Helder Antunes
Published by Tech Idea Research, a division of Tech Idea International, San Diego, California, USA. Copyright © 2018 Tech Idea Research. All rights reserved. No part of this book may be reproduced in any form without the permission of the publisher.
TABLE OF CONTENTS

FOREWORD
ABOUT FOG COMPUTING
AN OVERVIEW OF FOG COMPUTING
CONTRIBUTED ARTICLES
  DECENTRALIZED AND ADAPTIVE SECURITY FOR THE NEW INDUSTRIAL EDGE
  FOG COMPUTING CREATES A NEW RANGE OF POSSIBILITIES IN MANUFACTURING
  BATS: AN ENABLING COMMUNICATION TECHNOLOGY FOR IOT, FOG COMPUTING, AND BEYOND
  FOG COMPUTING: BRINGING SDN TO IIOT
  GETTING LOCAL: THREE EARLY LESSONS FOR IMPLEMENTING FOG COMPUTING
  FOG COMPUTING IN DEVELOPING MARKETS
USE CASES
  FOG COMPUTING ENABLES SMART FIREFIGHTING
  IMPROVING THE CONVENIENCE OF CARSHARING
Foreword

Digital innovation is changing the way we live, interact, work, play, and get from point A to point B. It is transforming every industry today, from healthcare to transportation to public services. Yet there is a missing link between the vision of the future and its execution, caused by limitations of the traditional cloud-only or cloud-mostly computing models. While cloud-only works well in some scenarios, it leaves a gap in others, which require an infrastructure that can span the continuum from cloud to device. And that’s where fog computing comes in.

Fog computing, as defined by the OpenFog Consortium, is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from cloud to things. It’s more than an interesting approach to today’s data-driven world; it’s a necessary one. At OpenFog, we have classified the advantages of fog computing as SCALE: Security, Cognition, Agility, Latency and Efficiency. Fog offers all this, and more, through its flexible, distributed architecture.

Some industry use cases, such as autonomous cars, emergency services, robotics and virtual reality, need fog computing for its sub-millisecond response time. In remote locations, fog computing ensures continual operations when and where persistent network connectivity can be challenging. On billions of devices, fog computing saves network bandwidth and operational costs by shifting computation closer to the devices. Fog also provides extra safety and security protections that are critical in today’s world.

In this compendium, John Koon has gathered a collection of perspectives on fog computing from its earliest advocates and adopters: those who are leading the research, developing the technologies and working with end users to incorporate fog computing in real-world deployments.
Many of these articles feature the work of the members of the OpenFog Consortium, an industry organization I co-founded three years ago. I hope you find this work insightful and practical as you embark on your journey into fog computing.

Helder Antunes
Senior Director, Corporate Strategic Innovation Group, Cisco Systems
Co-founder and Chairman, OpenFog Consortium
This is a collection of previously unpublished articles and use cases contributed by leading network and technology companies, including several members of the OpenFog Consortium.
About Fog Computing

Fog computing is a horizontal, system-level architecture that distributes computing, storage, control and networking functions closer to the users.1 In the Internet of Things, this new approach moves computing, analytics and decision-making near the data source, or “things” (typically a device). Fog computing provides these services with containerization, virtualization, orchestration, manageability and efficiency.

The Internet of Things (IoT) market has great growth potential worldwide. MarketsandMarkets, a global research firm, forecasts that the market will reach $561.04 billion by 2022, at a compound annual growth rate (CAGR) of 26.9%. Grand View Research, a market research and consulting firm, provides a bold forecast for the Industrial Internet of Things (IIoT) market, including solutions, services and platforms, to reach $933.62 billion by 2025 with a CAGR of 27.8%.

The IoT and IIoT markets are very broad. They cover every segment, including private and unsecured networks globally. Even though cyberattacks are becoming increasingly dangerous to connected devices, many users are still unprepared. Fog computing not only enables IoT and IIoT, it empowers them with security, interoperability and certification. Behind this is an open fog standard, IEEE 1934™, which is backed by the OpenFog Consortium.

The market is catching on. 451 Research, an IT industry analyst firm, forecasts the worldwide fog market will reach $18 billion by 2022, with the largest growth coming from the energy/utilities and transportation markets, followed by the healthcare and industrial markets. Half of the revenue will come from hardware, while the other half comes from applications and services.
The vertical markets that will be impacted include transportation, manufacturing, energy, healthcare, datacenters, retail, agriculture, smart buildings, smart cities, smart homes, and wearables. With the future projection of heavy data traffic, consumption and real-time processing by applications, fog will enable decentralized decision-making by edge devices while connecting the cloud, the edge devices and everything in between. The role of fog is becoming increasingly important. IDC forecasts that by 2025, 45% of the world’s data will be moved to the network edge. We will need a reliable platform to manage it.

Fog will be an innovation enabler

The future of IT innovation encompasses a convergence of technologies, including advanced IoT deployment and the development of 5G and artificial intelligence. The OpenFog Reference Architecture, published in February 2017, is available to developers worldwide to give the technical community the information to take advantage of an interoperable, horizontal system architecture for distributing computing, control, storage and networking functions. Fog computing will be able to address issues relating to security, cognition, agility, latency and efficiency (SCALE) in this cloud-to-things continuum.

Consider 5G: 5G mobile technology offers speeds 100 times faster than what 4G LTE can deliver today. Major carriers like AT&T and Verizon are moving ahead with 5G testing. AT&T is planning tests of the gigabit
technology in 12 cities, while Verizon is testing in 11. Fog computing will play an important role in 5G, serving as its platform. One of the applications is to use fast 5G to communicate with autonomous vehicles in near real time. Though this is still some time away, the foundation is being laid now.

Fog provides added security and safety features

Perhaps the most daunting design challenge for IoT developers is making sure their applications are secure. Fog offers a more comprehensive approach to security. To accomplish this, a set of comprehensive guidelines is necessary to handle the interconnection of fog nodes, cloud servers and the many IoT endpoints in the multi-tier communication and computing infrastructure. Security functions operate within the fog computing architecture with the purpose of achieving two goals:

• To enable the fog architecture to function as a responsive, available, survivable and trusted part of the device-fog-cloud continuum;
• To offer information security and trusted computing services through the fog nodes to those devices and sub-systems less endowed with the capability or resources to protect themselves.
As a result of these attributes, system managers can monitor the security status of their systems in a scalable and trustworthy manner. The distributed architecture helps to disperse information quickly to keep security credentials and software up to date on a large number of devices while providing real-time incident response without disruption of service. The fog architecture safeguards connected systems from cloud to device. This creates an additional layer of system security in which compute, control, storage, networking, and communications work closer to the services and the data sources they serve and protect. In short, security is local, not remote.

Open standard promotes interoperability

Universal network interoperability is a noble goal; achieving it, however, will not be easy. Looking back on history is helpful here. Consider the Universal Serial Bus (USB): we no longer need to load drivers. Although we all take its “plug-and-play” benefits for granted, it took many years of collaboration among many tech companies to achieve the results we see today. Microsoft Windows was the main operating system, and the goal was to make sure all peripheral devices would work seamlessly with it.

Today, the network world is much more complex. It involves the cloud, the cloud-to-things continuum, various IoT endpoints (mobile devices and sensors included), many different operating systems, add-on software and mobile applications. Collaboration will enable an open, interoperable architecture for fog computing to extend the mobile edge with a physical and logical multi-layer network hierarchy. These functions are performed by the fog nodes in the architecture. Operators only need to connect to the fog nodes, resulting in interoperability across operators. The ETSI MEC specifications include the Application Programming Interfaces (APIs) that support edge computing interoperability.
By adopting these APIs, developers can write one set of software to run on both the OpenFog and MEC architectures.
Creating standards in fog computing

In May 2018, the IEEE Standards Association (IEEE-SA) voted to adopt the OpenFog Reference Architecture as the new IEEE 1934 standard. As a result, the following properties of fog computing have been specified:

• Openness;
• Interoperability;
• Horizontal system architecture;
• A cloud-to-thing continuum (communicating, computing, sensing and actuating entities);
• Distributed computing, storage, control and networking functions.
Once interoperability is fully defined, the OpenFog Consortium will create a certification program to allow local test labs to test fog products. Product developers will be able to have their network products tested and certified under the umbrella of IEEE 1934. Users will be able to enjoy the benefits of increased productivity, security, network device interoperability and certification to guarantee devices will work together seamlessly from cloud-to-things. John W. Koon Tech Idea Research
An Overview of Fog Computing

Sizing the fog computing market

451 Research, an IT industry analyst firm, forecasts the worldwide fog market will reach $18 billion by 2022, with the largest growth coming from the energy/utilities and transportation markets, followed by the healthcare and industrial markets. Half of the revenue will come from hardware, while the other half comes from applications and services.2

Where is fog computing needed?

Simply put, and simply illustrated in Figure 1, fog computing is deployed between things at the edge and the cloud.
Figure 1: Fog computing provides an infrastructure between devices at the edge and the cloud.
Fog is a system-level horizontal architecture that distributes resources and services, including computing, storage, control and networking, anywhere along the cloud-to-things continuum. Some of the distinctions of fog computing include:

• Decentralized decision-making by things closer to the data source (e.g., users, infrastructure, devices)
• A horizontal and hierarchical architecture that can connect anything between the edge and the cloud (or clouds, as many use cases involve multiple clouds owned by different service providers, manufacturers, etc.)
2. 451 Research: https://451research.com
About the OpenFog Consortium

The OpenFog Consortium was formed in 2015 by Arm, Cisco, Dell, Intel, Microsoft and Princeton University. The mission of the Consortium is to drive industry and academic leadership in fog computing and networking architecture, testbed development, and interoperability and composability deliverables that seamlessly bridge the cloud-to-things continuum. Members and partners of the Consortium believe that proprietary or single-vendor fog solutions can limit supplier diversity and ecosystems, resulting in a detrimental impact on market adoption, system cost, quality and innovation. The OpenFog Consortium is affiliated with:

• Barcelona Supercomputing Center;
• ETSI MEC;
• IEEE Communications Society; and
• the IoT Acceleration Consortium.
The OpenFog Consortium also works with the OPC Foundation. Every year, the Consortium hosts Fog World Congress, the only multi-day conference on fog computing and networking.
• Continuous operations where reliable network connectivity can be challenging
• Efficient use of bandwidth and lower operational costs

Transformative trends that depend on fog computing

Before fog computing, there were two primary approaches for handling the data generated by the IoT, 5G, AI, augmented/virtual reality and other transformative horizontal and vertical applications and technologies: cloud-only architectures and operations that reside only at the edge of the network. Here are a few ways that fog complements and augments these models:

• It provides the practicality (cost, reach, congestion, etc.) associated with handling a massive and growing volume of data (measured in zettabytes);
• It helps enable the sub-millisecond response time required for many applications;
• It distributes functions (compute, storage, control and networking) horizontally and hierarchically;
• It provides interoperability and collaboration.
Fog computing bridges the gap between the cloud and the things at the very edge, along with all the technologies in between. Let’s look at two examples:
5G mobile technology

5G offers speeds 100 times faster than what 4G LTE can deliver today and is already being tested nationwide. Fog computing will serve as the platform for 5G for applications like autonomous vehicles, which require near-real-time communications. 5G provides the throughput, and fog computing eliminates the latency that edge-to-cloud adds.

Artificial intelligence

Preprocessing closer to the source of the data is essential to the evolution of AI. According to The Data Warehouse Institute (TDWI), dirty data ends up costing U.S. companies alone about $600 billion each year. Cleaning up dirty data can entail everything from eliminating duplicate data to filling in missing data. It can involve even more challenging preprocessing requirements, such as converting different formats into a common format or language. Fog computing can provide preprocessing closer to the source of the data, to help ensure that cleaner data is forwarded to the cloud for deeper analytics.
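The cleanup steps described here (eliminating duplicates, filling missing values, converting formats) can be sketched as a small preprocessing routine running on a fog node. This is a generic illustration, not code from any particular fog platform; the record fields (`sensor_id`, `timestamp`, `value`, `unit`) are assumptions for the example.

```python
def clean_readings(readings):
    """Fog-layer preprocessing sketch: deduplicate, fill missing values,
    and normalize units before forwarding cleaner data to the cloud."""
    seen = set()
    cleaned = []
    last_value = None
    for r in readings:
        key = (r["sensor_id"], r["timestamp"])
        if key in seen:                # eliminate duplicate data
            continue
        seen.add(key)
        if r["value"] is None:         # fill in missing data
            if last_value is None:
                continue               # nothing to fill from yet; drop it
            r["value"] = last_value
            r["unit"] = "C"            # filled from an already-normalized reading
        elif r["unit"] == "F":         # convert to a common format (Celsius)
            r["value"] = round((r["value"] - 32) * 5 / 9, 2)
            r["unit"] = "C"
        last_value = r["value"]
        cleaned.append(r)
    return cleaned
```

In a deployment along these lines, the routine would sit on the fog node’s ingest path, so only the cleaned stream consumes uplink bandwidth and reaches the cloud analytics tier.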
The eight technical pillars of fog computing

Fog computing is based on eight technical pillars (Figure 2), conceived and developed by the OpenFog Consortium.

Figure 2: Pillars of fog computing, as specified by the OpenFog Consortium.

The OpenFog Reference Architecture

The OpenFog Reference Architecture (OpenFog RA) is a medium- to high-level view of system architectures for fog nodes and networks. It is the result of a broad collaborative effort of its independently run, open-membership ecosystem of industry, technology and university/research leaders. It was created to help business leaders, software developers, silicon architects and system designers create and maintain the hardware, software and system elements necessary for fog computing. It enables fog-cloud and fog-fog interfaces.

Fog computing is IEEE standard 1934

In June 2018, the IEEE announced that the OpenFog Consortium’s OpenFog Reference Architecture for fog computing had been adopted as an official standard by the IEEE Standards Association. The new standard is known as IEEE 1934. The standard was developed by the IEEE Standards Working Group on Fog Computing & Networking Architecture Framework, which is sponsored by the IEEE Communications Society’s Edge, Fog, and Cloud Communications Standards Committee.

How fog computing works

As described in the sidebar, fog is able to fill a functionality gap between the cloud, the edge and things. Fog computing provides a:

Horizontal architecture: Fog supports multiple industry verticals and application domains, delivering intelligence and services to users and businesses.

Continuum of services: Fog enables services and applications to be distributed closer to the edge and anywhere between devices (things) and the cloud.
Figure 3: This architectural design for autonomous driving use case shows the hierarchical and distributed advantages of the fog computing architecture. All of the fog nodes have the same behavior, manageability and control features. Regardless of location, they automatically interoperate, share data (because it has the same format and metadata), share objects, and have the interfaces for connectivity and inter-node communication. This enables the fog network to support Vehicle-to-Infrastructure (V2I), Vehicle-to-Vehicle (V2V), Vehicle-to-Network (V2N), Vehicle-to-Pedestrian (V2P), and Vehicle-to-Cloud (V2C). And, as shown here, fog nodes can be located on the car, embedded in or co-located with roadside infrastructure, at the neighborhood level, and at the regional level.
Fog nodes are a core architectural component

With fog, computing, storage, control and networking are distributed across a fog hierarchy in fog nodes, the building blocks or core elements of fog computing. A fog node can have a wide range of characteristics, creating a fluid system of connectivity and collaboration:

• Fog nodes can be as small as a chip or as large as a server;
• Fog nodes can be embedded in or co-located with devices, equipment, infrastructure, etc.;
• Fog nodes can be distributed anywhere in the fog hierarchy;
• Fog nodes can collect and share data in any direction in the fog hierarchy (east-to-west and north-to-south);
• Fog nodes can be used to preprocess, aggregate and filter data;
• Fog nodes can interact and adapt as a dynamic community.
Figure 3 illustrates how this fluid system of connectivity and collaboration creates the infrastructure between the edge and cloud for the autonomous driving use case.
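The preprocess, aggregate and filter role of a fog node can be sketched roughly as follows. This is an illustrative sketch, not OpenFog reference code; the message format, threshold and upstream callback are assumptions. The node forwards anomalies upstream immediately and replaces the raw stream with periodic summaries, which is how fog saves bandwidth relative to shipping every sample to the cloud.

```python
from statistics import mean

class FogNode:
    """Minimal sketch of a fog node that aggregates and filters sensor
    samples, forwarding only summaries and anomalies toward the cloud."""

    def __init__(self, threshold, upstream):
        self.threshold = threshold  # values above this count as anomalies
        self.upstream = upstream    # callable: sends a message north-bound
        self.buffer = []

    def ingest(self, value):
        self.buffer.append(value)
        if value > self.threshold:
            # Anomalies go upstream immediately, without waiting for a flush
            self.upstream({"type": "alert", "value": value})

    def flush(self):
        # A periodic aggregate replaces the raw stream, saving bandwidth
        if self.buffer:
            self.upstream({"type": "summary",
                           "count": len(self.buffer),
                           "mean": mean(self.buffer),
                           "max": max(self.buffer)})
            self.buffer = []
```

A real node would also handle east-west exchange with peer nodes and downstream control messages; the sketch covers only the north-bound data path.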
OpenFog certification program and test beds When the interoperability of fog computing is fully defined, the OpenFog Consortium will have a certification program in place worldwide. This will enable developers to have their devices tested and certified locally to ensure all fog network devices can work together seamlessly. The Consortium is already creating testbeds to adapt the OpenFog RA to different market segments and use cases. The Consortium also plans to host FogFests (plug fests) to help drive component-level interoperability and accelerate the time to market. Source: The OpenFog Consortium
Contributed Articles

Decentralized and Adaptive Security for the New Industrial Edge

Contributed by: Xage Security

In 2009, the Stuxnet computer virus targeted Iran’s nuclear program by attacking the interconnected control systems of 15 uranium enrichment plants. At the time, this cyberattack was the most sophisticated to date. Introduced into a single machine via a USB drive, Stuxnet contained two components. First, to mask its actions, the virus secretly recorded normal plant operations and then played those readings back to the operators. Under this cover, it proceeded to destroy 984 enrichment centrifuges by spinning them out of control. The attack impeded the program by decreasing enrichment efficiency by 30 percent.3,4

The story of Stuxnet is a call to action for operational technology (OT) leaders. The battle to protect smart factories is asymmetric: while OT departments must protect every device, all of the time, hackers can threaten lives, disable operations, and disrupt supply chains by finding just a single soft network target.

Reducing costs and increasing customer satisfaction

Picture going into a store and buying products and services built just for you: your colors, your features, your design. In today’s world, your jeans, shoes, coat, and even your car make a statement about you the individual rather than about your market segment. To create more personalized products and services, industries are reinventing their production environments. Industry-leading companies are transforming their operations internally and across their horizontal partners by adopting IoT, edge, cloud, artificial intelligence (AI), robotics, automation, drone and analytics technologies. The result is a highly connected “cyber-physical” production ecosystem, termed Industry 4.0, that will increase efficiency and customer satisfaction. Not only does Industry 4.0 promise to transform the customer experience, it will also optimize production.
Verticals like manufacturing, utilities, transportation and energy will reduce costs by creating self-healing systems that improve preventative and remote maintenance, as well as remote control. In this new era of operations, AI will drive process optimization and fault correction through peer-to-peer cooperation and the autonomous operation of devices at the industrial edge. A 2016 survey by PricewaterhouseCoopers (PwC) revealed that 72% of manufacturing enterprises predict their use of data analytics will improve customer relationships and customer intelligence along the product lifecycle.5
3. W. Broad, J. Markoff, and D. Sanger, “Israeli Test on Worm Called Crucial in Iran Nuclear Delay,” New York Times, January 15, 2011.
4. D. Kushner, “The Real Story of Stuxnet,” IEEE Spectrum 53, No. 3, 48 (2013).
5. Geissbauer, R., Vedso, J., & Schrauf, S. (2016). 2016 Global Industry 4.0 Survey. Retrieved from https://www.pwc.com/gx/en/industries/industries-4.0/landing-page/industry-4.0-building-your-digital-enterprise-april2016.pdf
This new “smart operations” environment will connect everything (human operators, assembly line robots, electrical smart meters, warehouses and delivery trucks) to everything else. By integrating sensor-based, communication-enabled systems at the industrial edge, Industry 4.0 promises to deliver the highly customized products and services that customers want, faster, cheaper and with more flexibility. But there are inherent security risks in networking and integrating these new technologies across the operational ecosystem.

“As we’ve come to realize, the idea that security starts and ends with the purchase of a prepackaged firewall is simply misguided.” – Art Wittmann, VP, Business Technology Network

The vulnerabilities of enterprise security

To realize the promise of Industry 4.0, security needs to be woven into this autonomous, any-to-any, edge-heavy ecosystem. The security layer of this fabric needs to be as distributed, redundant, flexible and adaptive as the systems it is tasked to defend. Current centralized security systems simply aren’t designed to handle the scope, nature or complexity of the Industry 4.0 operational network.

The risks of relying on traditional security models are great. Cyberattacks, including those at Equifax, Sony Pictures, Target and Yahoo, feature regularly in the news media. While the costs of these high-profile cyberattacks have been enormous, an attack on an Industry 4.0 oil refinery or chemical factory could exceed the disasters at Union Carbide in Bhopal, Fukushima Daiichi or Deepwater Horizon. A cybersecurity system that protects authentication and information exchange is the foundational requirement of Industry 4.0.

“Exhaustive prevention is an illusion. We can’t secure misconfiguration, shadow IT, third parties, human error, former employee...” – Stéphane Nappo, CISO, Société Générale

Many of the current technologies, concepts and protocols used in traditional enterprise security predate Industry 4.0.
These security solutions are managed by central IT departments that are responsible for maintaining firewalls and authenticating every person, application and device that accesses them. But the hackers aren’t going away: In their April 2017 Internet Security Threat Report, Symantec reported that there were 357 million malware variants in 2016, compared with 275 million in 2014.6 In their July 2017 The Rise of Thingbots report, F5 Labs reported that from January 1 to June 30, 2017, 30.6 million IoT brute force attacks were launched, a 280% increase in attacks over the prior period of July 1 through December 31, 2016.7
6. Symantec: 2017 Internet Security Threat Report. (2017, April). Retrieved from https://www.symantec.com/securitycenter/threat-report
7. F5 Labs: The Rise of Thingbots. (2017, July). Retrieved from https://f5.com/Portals/1/PDF/labs/F5_Labs_Hunt_for_IOT_Vol_3_rev09AUG17.pdf?ver=2017-08-09-112721-777
Symantec registered 94.1 million malware variants in February 2017, up from 32.9 million in January 2017 and 19.5 million in December 2016.8 The sheer volume of users, heterogeneous applications and IoT devices that will need to be authenticated and secured is staggering: industry executives and analyst firms estimate that between 20 and 30 billion IoT devices will be in place by 2020.9 Existing approaches to security will struggle to defend this vast, highly distributed infrastructure. Each new device will give hackers another target to attack.

Additionally, because many IoT devices ship with static or predictable default credentials,10,11 they are particularly susceptible to malware trojans like Mirai and Bashlight. Using common factory usernames and passwords, Mirai seeks out and infects vulnerable IoT devices. Once infected, an IoT device, which will continue to function normally,12 will await instructions on which target to attack. Through the infection and control of multiple devices, Mirai is able to launch Distributed Denial of Service (DDoS) attacks.13 On October 21, 2016, a large number of IoT devices infected with Mirai launched multiple major DDoS attacks on Dyn, a DNS service provider. As a result, several high-profile websites, including GitHub, Twitter, Reddit, Netflix and Airbnb, were inaccessible.14

The current model of enterprise security is incapable of protecting Industry 4.0, with its intermittently connected, heterogeneous devices and applications distributed across organizations and geographies. Today’s centralized IT security paradigm needs to be replaced by cybersecurity that is distributed, flexible and adaptive.
8. Arghire, I. (2017, March 13). Security Week: New Malware Variants Near Record-Highs: Symantec. Retrieved from http://www.securityweek.com/new-malware-variants-near-record-highs-symantec
9. Nordrum, A. (2016, August 18). IEEE Spectrum: Popular Internet of Things Forecast of 50 Billion Devices by 2020 Is Outdated. Retrieved from https://spectrum.ieee.org/tech-talk/telecom/internet/popular-internet-of-things-forecast-of-50-billion-devices-by-2020-is-outdated
10. Stone, M. (2017, September 29). The IoT Future Is Here: How To Protect Your Enterprise From The Latest Threats. Retrieved from https://www.forbes.com/sites/juniper/2017/09/29/the-iot-future-is-here-how-to-protect-your-enterprise-from-the-latest-threats/#7855100c1c41
11. Goodin, D. (2017, August 25). Leak of >1,700 valid passwords could make the IoT mess much worse. Retrieved from https://arstechnica.com/information-technology/2017/08/leak-of-1700-valid-passwords-could-make-the-iot-mess-much-worse/
12. Moffitt, Tyler (October 10, 2016). “Source Code for Mirai IoT Malware Released”. Webroot. Retrieved December 3, 2017.
13. njccic (December 28, 2016). “Mirai Botnet”. The New Jersey Cybersecurity and Communications Integration Cell (NJCCIC). Retrieved December 4, 2017.
14. Williams, Chris (October 21, 2016). “Today the web was broken by countless hacked devices”. theregister.co.uk. Retrieved December 3, 2017.
Differences between enterprise security and industrial security

Traditional enterprise security:
• Centralized and top-down
• Security not in real time
• Modern directory services
• Millions of devices

Industrial (Industry 4.0) security:
• Decentralized and autonomous
• Real time, heterogeneous and peer-to-peer
• Diverse industrial protocols
• Device username and password logins
• Billions of devices
Security to drive Industry 4.0

To deliver its potential, Industry 4.0 requires security that is autonomous, real-time, zero-touch deployable and adaptive. Instead of interacting with a centralized security system, equipment, devices and applications need to cooperate to protect themselves.

How blockchain meets the security needs of IIoT

Blockchain elegantly satisfies the challenging security environment of Industry 4.0. By enforcing immutable records and distributing and sharing identical security data across the nodes in its network, blockchain is tamper-proof, redundant and self-healing. Through a process of continual reconciliation, consensus between devices secures the network when new or intermittently connected devices join it. Through the Xage security fabric, for example, devices establish consensus to identify and isolate bad devices and applications infected with malware like Mirai. This self-healing capability delivers the data integrity and redundancy that Industry 4.0 needs to thrive. Blockchain also integrates well with redundant, threshold-based technologies like Shamir’s Secret Sharing to flexibly secure operational data.

“...The most critical area where Blockchain helps is to guarantee the validity of a transaction by recording it…[in]...a connected distributed system of registers, all of which are connected through a secure validation mechanism.” – Ian Khan, TEDx Speaker, Author, and Technology Futurist
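To illustrate the threshold idea behind Shamir’s Secret Sharing, the sketch below splits a secret into n shares so that any k of them reconstruct it, while fewer reveal nothing. This is a textbook sketch over a prime field, not Xage’s implementation, and the field size here suits small demo secrets only.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; field must exceed the secret

def split_secret(secret, k, n):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # evaluate via Horner's method mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

The threshold property is what makes the scheme redundant: with a 3-of-5 split, any two nodes can fail (or be compromised) without losing the protected data, yet no two nodes alone can read it.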
Grow the network to increase security

To deliver the protection required for Industry 4.0, Xage provides an “any-to-any” security fabric that promotes autonomous operation, credential rotation, access control and zero-touch deployment. In contrast to traditional enterprise security systems, the security of a network protected by the Xage security fabric increases with additional nodes. In short, protection, data redundancy and privacy increase as the operational network grows.

If Industry 4.0 is going to fuel the next wave of economic growth, it will require a new model of defense. Centralized, top-down enterprise security is incapable of sufficiently protecting the millions of autonomous, interconnected, heterogeneous devices and applications found in Industry 4.0.

About Xage Security

The Xage Security Suite is the first and only blockchain-protected security platform for the Industrial Internet of Things (IIoT). Xage creates the essential trusted foundation for secure interactions between machines, people and data. Advancing beyond traditional security models, Xage distributes authentication and private data across the network of devices, creating a tamper-proof “fabric” for communication, authentication and trust that ensures security at scale. Xage supports any-to-any communication, secures access to existing industrial systems, underpins continuous edge-computing operations even in the face of irregular connectivity, and gets stronger with every device added to the network. Xage customers include leaders in the largest industries, spanning energy, utilities, transportation and manufacturing.
Fog Computing Creates a New Range of Possibilities in Manufacturing

Contributed by: ADLINK

People associate "fog" with limited visibility, a lack of mental clarity and confusion. But when fog computing rolls into a factory, the very opposite is true—it makes the factory considerably smarter. Everything becomes clearer and more predictable. As with most industries, manufacturing is a highly competitive endeavor. The more a facility can reduce its energy use, set up a more efficient maintenance schedule and analyze its manufacturing history, the more profitable it can become. The smart factory gains an advantage through day-to-day productivity increases and cost reduction, as well as long-term opportunities with new business models. Fog, an extension of the cloud to the network's edge, helps facilitate all of this.

Computing at the network's edge

With the increasing number of devices connecting wirelessly to transmit and receive data, it's important to be able to perform data processing at the network's edge, the point nearest the data-generating device itself. There are simply too many demands for reduced latency and bandwidth, high security, low cost and superior connectivity to limit data processing to the cloud. Processing at the network's edge can help a factory lower costs, boost performance and keep pace with the competition.

Decisions near the device

Here's why data processing at the edge is so important. Devices such as cameras and sensors—traditionally not always "smart" devices—suddenly become capable of making decisions themselves without transmitting the information elsewhere for processing. A smart sensor can make determinations by taking input from the physical environment and using built-in compute resources to perform predefined functions based on specific feedback. For example, the device can sense if a conveyor belt in a factory needs maintenance before that conveyor goes out of service.
Moving data offsite can be a cumbersome, expensive proposition when there’s a tremendous amount of it. Fog computing, therefore, helps make data management operations more efficient and less costly. Latency can also become a distinct problem when large quantities of data are processed remotely. When the data travels to the cloud or even to a server inside the building, such action creates a delay. Sometimes the delay is enough to compromise the process. For example, if there’s a piece of robotic machinery that somehow becomes jammed during the manufacturing process, it’s important that everything shut down, instantly. But there is no “instantly” when a device is sending and receiving data remotely. There’s always a delay of some duration.
Figure 4: A predictive maintenance model using Vortex Edge PMQ.
Data security and energy savings

One of the major benefits fog computing brings to the smart factory is the strengthening of data security. Data that travels to the cloud or to some other remote server is exposed to undue risk. But with the fog's capacity for data processing via on-premises devices, that risk drops off substantially. The smart factory can enjoy elevated levels of safety and privacy. Another plus is the ability to better monitor energy usage and determine when it might be excessive. If a piece of equipment is using more electricity than the manufacturing process requires—established via a pre-set usage range—technicians can be alerted of the need for recalibration to eliminate waste and cost.

Fog computing applications

Perhaps the most prevalent example of a beneficial fog computing application is predictive maintenance. Before the advent of the smart factory, a manufacturer only became aware of machinery issues once the equipment failed. Alternatively, the manufacturer could rely on scheduled maintenance checks that may or may not be necessary—checks that held up the manufacturing process and reduced productivity. In either case, there's a distinct cost involved. Suppose your factory contains a piece of highly sophisticated machinery working in a rapid manufacturing process, equipment critical to production. Naturally, you wish to minimize any downtime. Analytics possible through fog computing can take sensor data and determine when maintenance on this equipment is most advisable. It can also calculate the best time for a check, a time when idling the machinery will have the least impact on production. The fog can help prevent equipment malfunctions that impose costs on a factory when defective products proceed down the line or, in a worst-case scenario, make their way out the door to the customers.
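The pre-set usage range described above amounts to a simple threshold rule that can run on a fog node beside the equipment. A hypothetical sketch (the class name, window size and kW limits are invented for illustration, not any vendor's API):

```python
from collections import deque
from statistics import mean

class EnergyMonitor:
    """Flags equipment whose average power draw drifts outside a pre-set range."""

    def __init__(self, low_kw, high_kw, window=3):
        self.low_kw, self.high_kw = low_kw, high_kw
        self.readings = deque(maxlen=window)   # rolling window of recent samples

    def add_reading(self, kw):
        """Record one sample; return an alert string if the rolling average
        falls outside the configured range, else None."""
        self.readings.append(kw)
        avg = mean(self.readings)
        if avg < self.low_kw or avg > self.high_kw:
            return f"ALERT: average draw {avg:.1f} kW outside {self.low_kw}-{self.high_kw} kW"
        return None

monitor = EnergyMonitor(low_kw=10.0, high_kw=14.0)
for sample in [12.1, 12.4, 15.0, 15.9, 16.3]:   # the last samples drift upward
    alert = monitor.add_reading(sample)
assert alert is not None and alert.startswith("ALERT")
```

Because the check is local, the alert can reach a technician in milliseconds without any reading ever leaving the plant.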
Integrating assets

When a manufacturing facility is sold on the virtues of fog computing, it's important that the organization create a business case describing the benefits: the cost savings, enhanced data security, reduction in energy usage, gains in quality control. After all those considerations are nailed down, the next step is implementation. One of the most basic requirements of an implementation is the extraction of data from equipment on the factory floor. It's not always easy because the machines aren't integrated with one another; they weren't made to "talk" among themselves. The remedy here has been difficult for many factories, because opening up the equipment to connect it with other pieces can void the warranty. With those pieces being so costly, such an action is far too risky. Optical character recognition is one technology used today for extracting data without being intrusive and opening up legacy machines. Products can now read the data, extract it and unify the information into a single protocol—a solution made possible by fog computing and Internet of Things technology. The fog computing infrastructure processes all this machine data at the edge. Some of the data may then go to the cloud for longer-term analytics purposes, while some data is sent peer-to-peer to other devices on the factory floor. The data is now in a usable format, and the manufacturer can quickly and easily integrate new devices into the system, as needed. Once the system is in place, predictive maintenance and other processes that help strengthen manufacturing are possible. Fog computing and the IoT can create opportunities for new business models in established industries or help advance established business models in expanded industries. And because an increasing number of companies today are implementing this technology, a manufacturer who ignores it may have a difficult time staying competitive.

About ADLINK

ADLINK Technology is a global leader in Edge Computing.
Our mission is to facilitate the use of advanced technologies to help optimize the business performance of our customers. We provide robust boards, platforms and user interfaces; real-time data connectivity solutions; and application enablement for state-of-the-art industrial computing (e.g., machine learning via AI-at-the-Edge). Together, these also enable innovative end-to-end IoT solutions in support of operational excellence or new revenue streams. ADLINK serves customers in many vertical markets including manufacturing, networking and communications, healthcare, infotainment, retail, energy, transportation, and government and defense. ADLINK has an excellent ecosystem of technology partners; we are a Premier Member of the Intel® Internet of Things Solutions Alliance, a strategic embedded partner of NVIDIA, and a valued thought leader and contributor in many standards and interoperability initiatives, including Eclipse, ETSI, OCP, OMG, OpenFog, PICMG, ROS-I and SGeT. ADLINK's products are available in over 40 countries, either directly or through our worldwide network of value-adding distributors and systems integrators. ADLINK is ISO-9001, ISO-14001, ISO-13485 and TL9000 certified and is publicly traded on TAIEX (Stock Code: 6166).
BATS: An Enabling Communication Technology for IoT, Fog Computing, and Beyond

Contributed by: The Chinese University of Hong Kong
The Wireless Challenge

The Internet of Things (IoT) is a network that will connect billions of physical devices, such as vehicles, home appliances, sensors, actuators, etc., and enable them to exchange data among themselves. Such a network can be realized by fog computing, which builds a continuum of interface between the cloud and the things. Due to the size of the network, when two devices communicate, they most likely need to go through a large number of other devices, referred to as relays. Many of these IoT devices are connected to a wireless instead of a wired network, and some of them can be mobile devices. Here, wireless may refer to technologies for short-range wireless (e.g., Bluetooth, Wi-Fi, ZigBee), medium-range wireless (e.g., LTE), long-range wireless (e.g., VSAT for satellite communication), or underwater communication (e.g., acoustic, optical). In some cases, data exchange between IoT devices can be achieved through power-line communication. A main challenge for all wireless communications (and power-line communication) is data packet loss. Depending on the specific technology, this may be due to noise, interference, path loss, multi-path fading, Doppler spread, etc. In the Internet, the transport and network layers are dominated by TCP and IP, respectively. TCP uses retransmission and rate control to resolve the end-to-end packet loss, and IP forwards received packets at intermediate nodes. When the source node transmits data packets to the destination node via a number of relay nodes, each packet must be successfully transmitted on each hop before it can reach the destination node. As such, the end-to-end packet loss is simply an accumulation of the packet losses on the individual hops. As an example, if the packet loss on each hop is 0.2 (assume no retransmission), which is quite common in certain applications (e.g., low-power IoT, terabit satellite), then after 10 hops, only (1 − 0.2)^10 ≈ 10% of the packets are left.
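The loss-accumulation arithmetic above is easy to verify with a short Monte Carlo sketch of store-and-forward on a line network (the 10 hops and 0.2 per-hop loss are taken from the example; no retransmissions):

```python
import random

def delivered_fraction(n_hops, loss, n_packets=100_000, seed=1):
    """Simulate store-and-forward on a line network: a packet is delivered
    only if it independently survives every hop (no retransmissions)."""
    rng = random.Random(seed)
    survived = sum(
        all(rng.random() > loss for _ in range(n_hops))
        for _ in range(n_packets)
    )
    return survived / n_packets

analytic = (1 - 0.2) ** 10        # ≈ 0.107, the ~10% quoted above
simulated = delivered_fraction(10, 0.2)
assert abs(simulated - analytic) < 0.01
print(f"analytic {analytic:.3f}, simulated {simulated:.3f}")
```

Raising the hop count makes the decay stark: at 50 hops the same formula gives 0.8^50, on the order of 10^-5, which is why multi-hop wireless paths are so rare in practice.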
As such, we very rarely see wireless networks that have more than a few hops, much less tens or even hundreds of hops. In other words, if a device communicates with another device through a number of other devices and the throughput is to be maintained at any reasonable level, only very few of those devices along the path can be wirelessly connected. In fact, in today's Internet, almost all network connections involve at most two wireless hops, namely the first hop and the last hop. This shortcoming of wireless communication will very soon become a bottleneck for the development of IoT because it prevents the massive deployment of wireless IoT devices.

Network Coding

In the past few decades of network communication, data packets have been transmitted through the network by store and forward. The philosophy of this telecommunication technique is primarily based on the folklore that information is a commodity. This folklore, however, was refuted by network coding
theory15, which shows that in order to achieve the network capacity, coding needs to be employed at the intermediate nodes of the network. Network coding induces a paradigm shift in the study of network communications. In the past, a network solution merely consisted of several point-to-point solutions (e.g., channel coding, encryption) that were "glued together" by routers. With the advent of network coding, many of these network solutions need to be revisited. Since its inception, network coding has been applied to many different domains including channel coding, wireless communication, distributed data storage, cryptography, and even quantum computing. Several thousand research papers have been written on network coding and its applications. Network coding theory unveils the non-commodity nature of information. Unlike traditional network protocols that put great emphasis on link-by-link reliability, network coding focuses only on end-to-end reliability. This principle of network coding has spawned a new class of codes that provides a practical solution to the longstanding problem of packet loss accumulation in multi-hop networks.

BATS Codes

Network coding, in particular random linear network coding (RLNC)16, can achieve the capacity of a wide range of networks, including networks with packet loss. However, the straightforward solution provided by network coding may be too complicated to be implemented. This is in terms of the complexities of encoding at the source node, recoding at the intermediate nodes, decoding at the destination node, and recovering the linear combination coefficients generated by RLNC. Since recoding is employed at the intermediate nodes, a buffer, typically not small, is also required. All these issues prevent the implementation of network coding on most devices, in particular many IoT devices that have low computational power and a small memory and may even be battery-powered.
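The non-commodity nature of information is easiest to see in the classic butterfly example from the network coding literature, sketched here with XOR acting as the coding operation over GF(2):

```python
# The classic "butterfly" network: two sources a and b must each reach two
# receivers, but the middle (bottleneck) link can carry only one packet per
# time slot. Store-and-forward must pick a OR b; coding sends a XOR b.
a, b = 0b10110100, 0b01101001        # two one-byte messages

coded = a ^ b                        # the single packet on the bottleneck link

# Receiver 1 hears a on its direct link plus the coded packet;
# Receiver 2 hears b on its direct link plus the same coded packet.
recovered_b = a ^ coded              # receiver 1 solves for b
recovered_a = b ^ coded              # receiver 2 solves for a
assert (recovered_a, recovered_b) == (a, b)
```

One coded transmission serves both receivers at once, which no store-and-forward schedule on the same links can do in a single time slot.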
To meet this implementation challenge, a class of efficient network codes called BATched Sparse codes (BATS codes) was introduced17,18. BATS codes are a class of rateless network codes that include fountain codes and RLNC as special cases. A BATS code consists of an outer code and an inner code. The outer code, which is a matrix generalization of a fountain code, is applied at the source node to encode the source file into batches of coded packets. These packets are transmitted through the network, where some of them may be lost along the way. At an intermediate node, the inner code, namely RLNC, is applied to packets belonging to the same batch that are received at that node. At the destination node, a matrix generalization of the classical belief propagation (BP) decoding is applied to recover the source file.
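A toy version of a single batch can make the encode/recode/decode pipeline concrete. The sketch below is a deliberate simplification: it works over GF(2) (XOR) rather than the larger finite fields, e.g. GF(256), used in practice, and decodes one batch with plain Gaussian elimination instead of the matrix belief-propagation decoder; all names are illustrative:

```python
def encode(sources, mask):
    """A coded packet is the XOR of the source packets selected by `mask`
    (bit j set means source j is included); GF(2) stands in for the larger
    fields used by real RLNC/BATS codes."""
    payload = 0
    for j, s in enumerate(sources):
        if mask >> j & 1:
            payload ^= s
    return (mask, payload)

def recode(p1, p2):
    """An intermediate node mixes two received packets of the same batch:
    coefficient vectors and payloads are XORed together."""
    return (p1[0] ^ p2[0], p1[1] ^ p2[1])

def decode(packets, k):
    """Gaussian elimination over GF(2); returns the k source packets, or
    None if the received coefficient vectors do not have full rank."""
    pivots = {}                          # pivot bit -> (mask, payload)
    for mask, pay in packets:
        while mask:
            p = mask & -mask             # lowest set bit
            if p not in pivots:
                pivots[p] = (mask, pay)
                break
            m2, p2 = pivots[p]
            mask, pay = mask ^ m2, pay ^ p2
    if len(pivots) < k:
        return None
    for p in sorted(pivots, reverse=True):   # back-substitution
        m, pa = pivots[p]
        for q in pivots:
            if q != p and pivots[q][0] & p:
                mq, paq = pivots[q]
                pivots[q] = (mq ^ m, paq ^ pa)
    return [pivots[1 << i][1] for i in range(k)]

sources = [3, 5, 9, 14]                  # one batch of four tiny packets
coded = [encode(sources, m) for m in (0b0001, 0b0011, 0b0110, 0b1100)]
coded.append(recode(coded[1], coded[2]))  # a relay adds a recoded packet
assert decode(coded, 4) == sources
```

Note that each recoded packet carries its own coefficient vector, so the destination needs no knowledge of what the relays did: any full-rank set of received packets decodes the batch.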
15. R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network Information Flow," IEEE Transactions on Information Theory, vol. 46, no. 4, pp. 1204–1216, 2000.
16. T. Ho, M. Medard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, "A Random Linear Network Coding Approach to Multicast," IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4413–4430, 2006.
17. S. Yang and R. W. Yeung, "Coding for a network coded fountain," 2011 IEEE International Symposium on Information Theory, Saint Petersburg, Russia, July 31–August 5, 2011.
18. S. Yang and R. W. Yeung, "Batched sparse codes," IEEE Transactions on Information Theory, vol. 60, no. 9, pp. 5322–5346, 2014.
With both RLNC and fountain codes19 as special cases, BATS codes preserve their salient features. On the one hand, the outer code, being a matrix fountain code, inherits the efficiency of the fountain code and its rateless property. On the other hand, the inner code uses RLNC for recoding to compensate for the packet loss at the upstream of the network. In particular, BATS codes satisfy all the requirements for a practical solution:

1. high throughput
2. low latency
3. low coding complexity
4. low storage requirement at the intermediate nodes

BATS codes are the first and by far the only scheme that can satisfy all four of these requirements simultaneously. Due to the pipelining nature of BATS codes, the latency incurred at an intermediate node is the same as store and forward. This makes BATS codes applicable to real-time applications such as video streaming. A BATS code also needs to maintain only a very small buffer at each intermediate node. For the technical details of BATS codes, we refer the reader to the recent monograph20.

Consider a line network where the packet loss on each hop is 0.2 and the total number of hops is n. After 1 hop, the fraction of packets that remains on average is 0.8. This is also the theoretical upper bound on the throughput that can be achieved by any scheme on an n-hop network. When the number of hops increases, the throughput is expected to drop. If the relay nodes simply store and forward the packets (e.g., TCP/IP), the throughput drops exponentially fast with respect to the number of hops. Figure 5 shows the throughput of a BATS code for up to 50 hops. We see that after 50 hops the throughput is about 0.7, i.e., a 12% drop from 0.8, whereas the throughput of any store-and-forward based protocol such as TCP/IP is already close to 0. Figure 6 shows the throughput of a BATS code for up to 1,000 hops. We see that even after 1,000 hops the throughput is still maintained at about 0.65, i.e., only a 20% drop.
This shows that while the throughput of a BATS code does drop with the number of hops, it drops very slowly. Thus, we can by and large think of BATS codes as converting a multi-hop network with packet loss into a single-hop network. BATS codes are efficient enough to be implemented on popular IoT platforms like the Raspberry Pi. The following video demonstrates the superior performance of BATS codes over fountain codes for video streaming over an 11-hop network: http://iest2.ie.cuhk.edu.hk/~whyeung/BATS.mp4

BATS codes are an advanced network coding technology that opens the door to a revolutionary new generation of network communication protocols. It is predicted that billions of IoT devices will be connected to the Internet by multi-hop wireless links, where TCP/IP, based on store and forward at the intermediate nodes, will run into difficulty. BATS codes solve the packet loss accumulation problem in wireless mesh and ad hoc networks, making wireless networks of tens or hundreds of hops possible. This enables the massive deployment of wireless IoT devices in the fog computing environment.
19. J. W. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A digital fountain approach to reliable distribution of bulk data," in Proceedings of ACM SIGCOMM '98, pp. 56–67, New York, NY, 1998.
20. S. Yang and R. W. Yeung, BATS Codes: Theory and Practice, Morgan & Claypool Publishers, 2017.
Finally, in addition to IoT, BATS codes can also be applied in satellite, deep space, and underwater communication networks. These applications of BATS codes have the potential of fundamentally changing the landscape of deep space and underwater explorations.

Figure 5: Performance of BATS up to 50 hops (throughput vs. number of hops, n).
Figure 6: Performance of BATS up to 1000 hops (throughput vs. number of hops, n).
About the Authors

Shenghao Yang is an Assistant Professor at The Chinese University of Hong Kong, Shenzhen. He received his B.S. degree from Nankai University in 2001, his M.S. degree from Peking University in 2004, and his Ph.D. degree in Information Engineering from The Chinese University of Hong Kong (CUHK) in 2008. He was a visiting student at the Department of Informatics, University of Bergen, Norway during spring 2017. He was a Postdoctoral Fellow at the University of Waterloo from 2008 to 2009 and at the Institute of Network Coding, CUHK, from 2010 to 2012. He was with Tsinghua University from 2012 to 2015 as an Assistant Professor. His research interests include network coding, information theory, coding theory, network computation, big data processing, and quantum information. He has published more than 40 papers in international journals and conferences. He is a co-inventor of the BATS code and has two U.S. patents granted.

Raymond W. Yeung is a Choh-Ming Li Professor of Information Engineering at The Chinese University of Hong Kong (CUHK). He received his PhD degree in electrical engineering from Cornell University. Before joining CUHK, he was a member of the technical staff at AT&T Bell Laboratories. He is the author of two textbooks on information theory and network coding that have been adopted by over 100 institutions around the world. In spring 2014, he gave the first MOOC in the world on information theory, which reached over 25,000 students. He was a recipient of the 2005 IEEE Information Theory Society Paper Award, the Friedrich Wilhelm Bessel Research Award from the Humboldt Foundation in 2007, and the 2016 IEEE Eric E. Sumner Award (Citation: "For pioneering contributions to the field of network coding"). He is a Fellow of the IEEE, the Hong Kong Academy of Engineering Sciences, and the Hong Kong Institution of Engineers.
Fog Computing: Bringing SDN to IIoT

Contributed by: Nebbiolo Technologies

The Industrial Internet of Things (IIoT) is heralding a new wave of modernization across many industries, with customers and internal stakeholders demanding advances in productivity, management, security, and flexibility across various verticals. However, IIoT deployments continue to face considerable headwinds in the form of (largely) manually managed infrastructure that is mostly insecure and set up in silos. Fog computing offers an innovative solution to these challenges by providing secure access to Operational Technology (OT) infrastructure within the framework of Information Technology (IT) toolsets.
Figure 7: Layout of a typical factory

Figure 7 shows the layout of a typical factory, with services and workloads being more IT-centric at the top layers (the Factory Datacenter, say) and progressively becoming more OT-centric toward the lower layers (the Factory Machine, say). Software-defined resource allocation and management is gaining traction within the fog computing paradigm, as it empowers plant operators to be more 'adaptive' to their future needs. From a networking perspective, this translates into implementing Virtual Network Functions (VNFs) across the plant floor using Software Defined Networking (SDN). Figure 8 presents one view of a typical SDN solution and includes the following components:

• SDN Applications;
• The SDN Northbound Interface (NBI), which allows the SDN application(s) to talk to the SDN Controller;
• The SDN Controller;
• The SDN Control Data Plane Interface (CDPI), which allows the SDN Controller to talk to the SDN Datapath;
• The SDN Datapath;
• Management and administrative functions, which are responsible for policy enforcement and performance monitoring of the entire solution.
In an IT-centric environment, each of the above components is realized using a plethora of open-source (and a few closed) solutions. More prominent amongst these are OpenStack, VMware NSX, Cisco Digital Network Architecture, etc.
Figure 8: SDN internal components

However, an IIoT environment introduces several constraints on the SDN ecosystem that necessitate a re-design of a few SDN components:

• Harsh operating environments, which lead to mostly fan-less compute systems. These fan-less designs severely curtail the amount of available compute, primarily due to the restrictions on available thermal headroom. While new processor designs keep pushing the envelope on a performance-per-watt basis, it's reasonable to assume that (for the foreseeable future) these fan-less systems will lag (sometimes quite significantly) behind their cousins from a typical data center environment.
• Unlike in a data center environment, compute on a typical factory floor is mostly sparse and usually not universally reachable (see Figure 7). This introduces issues in the scale-up and scale-out of SDN components.
• Colocation of SDN components with the customer's business logic, on the small amount of available compute, necessitates a re-think of how these SDN components are implemented (and how they interconnect to one another).
• The lack of a cohesive (and effective) perimeter (in terms of firewalls, BUM rate policers, etc.) on a typical factory floor (especially bottom-up) implies that these SDN components need to pay special attention to their availability and resiliency. Such considerations make an already bad situation, compute-wise, even worse.
Consider a simple example of a Virtual Network Function (VNF) based solution which involves:

• A learning bridge;
• Multiple virtual machines (or containers) connected to the above learning bridge;
• One or more machines or sensors connected to the above learning bridge;
• A firewall to restrict the flow of data between the above communication end-points;
• Some means for an operator to manage the firewall.
Figure 9: Sample topology being implemented as VNFs.

Table 1 provides a summary of the manner in which the VNF solution maps to SDN components, and how these are implemented (kernel-space vs. user-space): with Linux tools, the components map onto the in-kernel bridge + iptables, whereas the corresponding OVS components run in user space.

Table 1: Comparison of Linux tools vs. OVS
Table 2 shows the manner in which various traffic types are handled by the SDN components using (a) Linux tools (the Linux bridge and Linux firewall) and (b) OVS, distinguishing forwarding-plane traffic from control-plane protocol traffic (STP, say).

Table 2: Comparison of Linux tools vs. OVS at steady state

Since the entire solution is implemented in software, it is imperative to enumerate the design considerations in selecting either option with respect to an IIoT deployment:
• The communication between the controller and the CDPI is software-switched (either collocated on the same compute, or across the plant network) and hence consumes CPU cycles.
• The controller itself consumes CPU cycles.
• The OVS solution involves punt (i.e., exception and slow-path protocol) traffic being handled by the CDPI agent and the controller (both of which are implemented in user space). This is in contrast with the Linux tools solution, wherein this traffic is handled entirely within the in-kernel forwarding plane. As such, the OVS solution incurs the overhead of context switches (between kernel and user space) on a per-punt-packet basis. These context-switch overheads can become quite severe in the presence of network disturbances (say, a flood of traffic due to a faulty end point, bursts of traffic, network re-organization resulting in STP recalculations, etc.).
In general, SDN deployments within an IIoT environment need to account for the following broad considerations:

• CPU core counts matter. VNFs need to scale up locally in terms of CPU cores. Further, the larger the number of available CPU cores, the easier it is to 'pin' VNFs to specific cores and thereby provide greater performance and resilience. In general, this improves the availability not just of the VNF but also of the customer's business logic.
• Single-threaded performance matters. Due to scale-out difficulties in an IIoT environment, it is imperative that a VNF have the compute headroom to handle bursts of network workloads. Since typical VNFs are single-threaded, higher single-threaded performance results in better performance of the overall SDN solution.
• Thermal envelope matters. Since the IIoT environment typically employs fan-less designs, an effective SDN solution mandates that CPU core count and single-threaded performance be available at a lower thermal envelope.
• VNF offload matters. While there is merit in considering fast-path offloads like DPDK and IOVisor, these offloads are still tuned for IT-centric workloads. For example, Intel's DPDK requires a few CPU cores to be reserved for DPDK processing. This does not fit an IIoT environment, where compute is core-count challenged. The need is to offload not just a general fast path, but the entire VNF. This does not imply that IIoT environments need to embrace dedicated routers and switches. Rather, the offloads need to include reconfigurable FPGAs. Each VNF IP could be optimized for use in an IIoT environment and flashed on a per-use basis onto an available FPGA on the factory floor. This offloads not only the VNF functionality but also the slow-path (and other control-plane) aspects of the VNF, thereby relieving the constrained IIoT compute of networking duties.
• Software stack matters. An IT-centric SDN software stack lacks the optimizations necessary to operate in an IIoT environment. Such environments require a purpose-built software stack in which each individual component has been fine-tuned for these demanding settings. For example, OVS incurs context-switching costs (between the kernel-space fast path and the user-space slow path) in comparison to Linux bridging (wherein both fast path and slow path reside within the kernel).
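The practice of pinning a VNF to specific cores can be illustrated with the Linux scheduler-affinity API. This is a hypothetical sketch (a real VNF would typically be pinned by its supervisor, e.g. via cgroups or taskset), guarded so it degrades gracefully on platforms without the call:

```python
import os

def pin_to_core(core):
    """Pin the calling process to a single CPU core (Linux only).
    Returns the resulting affinity set, or None where unsupported."""
    if not hasattr(os, "sched_setaffinity"):
        return None                      # e.g., macOS or Windows
    os.sched_setaffinity(0, {core})      # pid 0 means the current process
    return os.sched_getaffinity(0)

# Pinning a VNF-like worker to core 0 keeps its cache state warm and its
# latency predictable, and isolates it from other workloads on the node.
affinity = pin_to_core(0)
assert affinity is None or affinity == {0}
```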
Summary

Fog computing holds the key to the next big leap in industrial automation and is being embraced by multiple industrial verticals as the means to modernizing not just the equipment but also the processes and manageability toolsets involved in large-scale automation infrastructures. NFV is an important aspect of fog computing, with SDN fast becoming the de facto means of implementing NFV in an IIoT environment. While IT-centric concepts of SDN may be mapped onto an IIoT environment, an industrial network designer needs to account for a very different set of design considerations to realize the true benefits of Virtualized Network Functions (VNFs) in such an environment.

References
• Fog Networking: An Overview on Research Opportunities, Mung Chiang. http://www.princeton.edu/~chiangm/FogResearchOverview.pdf
• Fog Computing Overview Video. https://vimeo.com/228299847
• OpenFog Consortium Website. https://www.openfogconsortium.org/
• Fog Computing and Its Role in the Internet of Things, Flavio Bonomi, Rodolfo Milito, Jiang Zhu, Sateesh Addepalli. https://www.nebbiolo.tech/wp-content/uploads/fog-computing-and-its-role-in-the-internet-of-things-white-paper.pdf
• Fog for 5G and IoT, Mung Chiang, Bharath Balasubramanian, Flavio Bonomi (Editors). https://www.wiley.com/en-us/Fog+for+5G+and+IoT-p-9781119187134
• Fog computing as enabler for the Industrial Internet of Things, Wilfried Steiner, Stefan Poledna. https://www.springerprofessional.de/en/fog-computing-as-enabler-for-the-industrial-internet-of-things/11002362
• Software-defined networking, Wikipedia. https://en.wikipedia.org/wiki/Software-defined_networking
About Nebbiolo Technologies

Nebbiolo is a pioneer of fog computing and a developer of "fog/edge infrastructure" software for industrial and commercial IoT application solutions. Nebbiolo's software platform brings the power of advanced analytics, real-time IoT device control, end-to-end security from the "things" to the cloud, the software-defined IoT paradigm, and machine learning to the on-premise edge environment, enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. Nebbiolo's technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, utilities, oil and gas, as well as smart cities and connected vehicle applications. Nebbiolo's fog computing platform combines edge computing hardware with an IIoT-optimized software stack to offer a hyper-converged solution which integrates compute, network and security considerations into the deployment and management of application services across a wide array of industrial verticals.
Getting Local: Three Early Lessons for Implementing Fog Computing

Contributed by: FogHorn Systems

Fog computing has matured rapidly since 2012. Today, fog has expanded into a robust edge computing stack, including a wide network of data sources, edge computing processing and cloud processing. Industrial organizations ranging from oil and gas to smart cities to manufacturing are in various stages of deploying this technology. In fact, the market is expected to reach $18.2 billion by 2022, according to the OpenFog Consortium. Gartner amplifies this with a prediction that 50% of enterprise-generated data will be created and processed outside a traditional centralized data center or cloud by 2022. This expected growth in the market suggests that edge computing has the potential to greatly impact how data is processed and how application of that data can improve operational efficiency and business value. As adoption increases across the Industrial IoT (IIoT), three early lessons have emerged specifically around edge computing. The first is that edge computing—meaning complex data analytics, machine learning and artificial intelligence (AI)—must be deployed as close to the source of the data as possible to be most effective. The second is that the heavy lifting of data enrichment and normalization, including decoding, multivariate time series data alignment, dealing with missing or invalid data, etc., must be done in advance of applying machine learning models. Otherwise, to quote an overused phrase, it's "garbage in and garbage out." The third lesson is that the biggest obstacle in placing data enrichment, analytics, machine learning and AI close to the data source is the highly constrained compute and communications environments that exist in the majority of industrial operations.

Lesson #1: Edge computing is most effective close to the data source

First of all, there is far too much data being generated to be sent to the cloud.
Just as an example, a typical offshore oil platform generates one to two terabytes of data each day. Most of this data is time-sensitive, associated with things like platform production metrics, safety and environmental compliance. Getting even one day's data to the cloud over a satellite connection could take more than a week. Add new capabilities like video sensors and the problem only gets worse. In some situations, such as rural areas or international shipping, there is simply no access to sufficient bandwidth, yet failures and undesirable conditions still need to be detected and acted upon quickly.
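The arithmetic behind that claim is easy to sketch. The link rate below (a 10 Mbps satellite uplink) is an illustrative assumption, not a measured figure:

```python
# Rough transfer-time estimate for moving one day of platform data
# (1 TB) to the cloud over a satellite link. The 10 Mbps uplink rate
# is an illustrative assumption, not a quoted specification.

def transfer_days(data_bytes: float, link_bps: float) -> float:
    """Days needed to move `data_bytes` over a link of `link_bps` bits/s."""
    seconds = data_bytes * 8 / link_bps
    return seconds / 86_400  # seconds per day

one_terabyte = 1e12        # bytes
satellite_uplink = 10e6    # 10 Mbps, assumed

print(f"{transfer_days(one_terabyte, satellite_uplink):.1f} days")
```

At the assumed 10 Mbps, a single day's terabyte takes roughly nine days to move, consistent with the "more than a week" estimate above.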
Figure 10: The amount of data typically generated by industrial equipment.
Two additional key reasons for deploying edge computing close to the data source are response time and security. Sending data to the cloud for processing and then back simply takes too long for many mission-critical applications. Most estimates put the round trip from where data originates to the cloud and back at 150 to 200 milliseconds. Edge computing can reduce this to 2 to 5 milliseconds. In some use cases, this could literally be the difference between catching a problem in time and an unexpected machine failure or a factory shutdown. For instance, a video sensor located on an oil pad or pipeline takes in huge amounts of data to monitor for any potential issue, such as a flame. But for the video sensor to accurately and quickly alert the system to a potentially disastrous incident, the data must be processed, anomalies identified, and immediate action taken. Given the quantity of video data, a quick response can only be achieved by processing right at the edge.

Security also plays a role in many IIoT applications. Many high-value assets are simply not permitted to be connected to the Internet, to avoid cyber-attacks among other security reasons. In this case, edge computing is clearly the only answer: the asset must be able to reach the cloud in a highly controlled fashion, but the cloud cannot be allowed to reach the asset. This approach to edge computing can also significantly reduce bandwidth and communications costs, along with costs associated with data volumes and processing time in the cloud. It also optimizes the critical role the cloud plays in IIoT by letting organizations use it for what it does best: collecting curated data from millions of edge devices for centralized learning, storage and propagation of refined models back to the edge. In many cases, "normal" data does not need to be sent to a data center or the cloud at all.
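Those latency figures translate directly into how often a detect-and-react loop can fire. The sketch below uses the midpoints of the estimates above and assumes, for simplicity, one decision per round trip:

```python
# How round-trip latency bounds a detect-and-react control loop.
# Latency values are midpoints of the estimates quoted above; the
# one-decision-per-round-trip model is a simplifying assumption.

def max_reactions_per_second(round_trip_ms: float) -> float:
    """Upper bound on decisions per second if each needs one round trip."""
    return 1000.0 / round_trip_ms

cloud_rtt_ms = 175.0   # midpoint of the 150-200 ms cloud estimate
edge_rtt_ms = 3.5      # midpoint of the 2-5 ms edge estimate

print(max_reactions_per_second(cloud_rtt_ms))  # via the cloud
print(max_reactions_per_second(edge_rtt_ms))   # at the edge
```

Under these assumptions the cloud path allows only about six reactions per second, while the edge path allows several hundred, which is the margin that matters for flame detection and similar hazards.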
Only exceptions or anomalies are worth storing for the longer term and using to train machine learning models. For instance, a smart flare stack doesn't need to continuously send flare video, temperature, compressor
audio and more to the cloud for analysis as long as it is within specified operational, safety and regulatory compliance guidelines. Instead, the edge computing device sends data only when something out of range occurs, at which point the cloud can use it for further analysis.

Lesson #2: Dirty data is the enemy of machine learning
One of the key challenges found early in IIoT deployments is that raw, "dirty" data coming in fast and furious from large numbers of sensors causes problems for machine learning. Some estimates suggest that 80% of a data mining project is data preprocessing. This includes converting a variety of legacy protocols into a common format; identifying and removing duplicates; converting numbers, dates and times to a consistent representation; enriching missing or incorrect data; and more. The preprocessing stage is also a great opportunity to format the data in a way that is more understandable to humans and to associate values with business outcomes. According to The Data Warehouse Institute (TDWI), dirty data costs U.S. companies alone about $600 billion each year. The bottom line is that preprocessing dirty data is critical to the efficiency, accuracy and actionability of machine learning results, and it also offers an opportunity for significant cost savings.
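The preprocessing steps listed above can be illustrated with a minimal sketch; the record fields, timestamp formats and values below are invented for illustration:

```python
from datetime import datetime, timezone

# Minimal preprocessing sketch: normalize timestamps to one
# representation, drop duplicates, and flag missing readings.
# Field names and input formats are hypothetical.

RAW = [
    {"ts": "2018-03-01 12:00:00", "temp": "71.5"},
    {"ts": "03/01/2018 12:00:00", "temp": "71.5"},   # duplicate, other format
    {"ts": "2018-03-01 12:00:05", "temp": None},     # missing reading
]

def parse_ts(value: str) -> datetime:
    """Parse a timestamp in any of the known legacy formats."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%m/%d/%Y %H:%M:%S"):
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {value!r}")

def preprocess(rows):
    seen, clean = set(), []
    for row in rows:
        key = (parse_ts(row["ts"]).isoformat(), row["temp"])
        if key in seen:
            continue                      # identify and remove duplicates
        seen.add(key)
        clean.append({
            "ts": key[0],                 # consistent ISO-8601 representation
            "temp": float(row["temp"]) if row["temp"] is not None else None,
            "valid": row["temp"] is not None,   # flag missing data
        })
    return clean

print(preprocess(RAW))
```

The duplicate arriving in a second legacy format collapses into one record, and the missing reading is flagged rather than silently fed to a model.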
Figure 11: Top data-related costs to U.S. companies, according to the Data Warehouse Institute.
As organizations increase the amount of data fed into processing systems through large-scale IoT deployments, the "cleanliness" of that data matters all the more. Establishing a common language or format across the data being relayed to compute systems eliminates the challenges of processing the same data multiple times in different formats. In addition, cleaning the data as it streams to the edge reduces the amount of unnecessary data passed on to machine learning processes.

Lesson #3: Existing industrial devices and environments are highly constrained
The difficulty lies in getting data enrichment, machine learning and AI to fit in a tiny compute footprint (say, 256MB or even less) capable of being deployed on existing, highly constrained industrial devices co-located with the data source. These devices include programmable logic controllers (PLCs), Raspberry Pi systems, operations control systems, motion sensor kits and ruggedized IoT gateways. The good news is that there are now purpose-built software solutions designed specifically to deliver high-performance edge computing in a very small footprint. Footprint is probably the most critical thing to verify before selecting an IIoT edge analytics solution, because IoT edge devices may support only 256MB or less of onboard memory. This is especially critical for IIoT deployments in small physical locations, such as a streetlight. If the software fits into the existing sensors and IoT gateways where the data is created, analysis of video, sound, motion, light and other sensor data can occur onsite. As more cities convert their streetlights to smart streetlights, the amount of data being created and analyzed will grow accordingly. Sending all the data from these new streetlights to the cloud for processing would be cost-prohibitive for many cities and would result in failures to respond due to the latency of data transfer between the device and the cloud.

The promise of "real" edge computing
All the pieces are now in place for the IIoT to deliver on its promise of smarter operations, enormous competitive advantages and significant cost savings.
This is true across a range of industries, including manufacturing, transportation, oil and gas, smart buildings and cities, renewable energy and more. Virtually any existing industrial edge device can now be empowered with data enrichment, machine learning and AI capabilities, even with the tiniest of compute footprints and highly constrained communications resources. Furthermore, beyond just applying compute at the edge, fog computing shows potential for impacting the IIoT at every point on the data's journey from source to cloud. 2018 promises to be an inflection point for IIoT projects, and the expected business outcomes are already compelling.

About FogHorn Systems
FogHorn is a leading developer of "edge intelligence" software for industrial and commercial IoT applications. FogHorn's software platform brings the power of machine learning and advanced analytics to the on-premise edge environment, enabling a new class of applications for advanced monitoring and diagnostics, asset performance optimization, operational intelligence and predictive maintenance use cases. FogHorn's solutions are ideally suited for OEMs, systems integrators and end customers in vertical markets such as manufacturing, power and water, oil and gas, mining, transportation, healthcare and retail, as well as Smart Grid, Smart City and Smart Car applications.
Fog Computing in Developing Markets
Contributed by: AetherWorks

By enabling organizations and even individuals to access vast amounts of processing power without the infrastructural burden of owning a data center, the cloud has made it possible for businesses to scale, and has even given rise to entirely new business models that rely on its compute-rental model. However, as the number of devices connected to the cloud explodes with trends like the Internet of Things, the distance that data has to travel and the strain on the underlying infrastructure that transports it are becoming increasingly problematic. For example, data from a smart thermostat in NYC likely makes a thousand-mile round trip for processing at Amazon's data center in Virginia before returning anything intelligible, and distance increases latency (the delay in sending and receiving data). That data is also competing with billions of other devices sending data across the same infrastructure; more traffic means more bandwidth strain.

For developing markets, simple geography and infrastructural hurdles exacerbate these problems to the point that the cloud becomes effectively unreachable. One need only look at a map of Amazon's data centers [21] to understand why the cloud is a non-option for would-be start-ups, innovative enterprises and researchers looking to tap the same kind of computing power that has propelled business growth in developed regions. Amazon may not be the only cloud provider in the world, but its thin coverage of developing markets is representative of an industry-wide concentration of data center computing power in developed markets. Research shows every 10% increase in Internet access results in more than 1.3% growth in GDP in developing markets [22]; improving connectivity is an enormous economic opportunity. Fortunately, although regions in developing markets lack the cloud data centers around which developed markets have consolidated, they are often not lacking computing power.
In fact, the developing world is adopting smartphone technology so aggressively that it is on track to add 1.6 billion more computationally capable smartphones by 2020 [23]. In some areas of Africa and India, the number of mobile device owners has even surpassed the number of people with electricity at home. Thus, instead of taking on the time, capital outlay and infrastructural burden of building out a cloud infrastructure, developing markets are uniquely poised to skip cloud dependency altogether in favor of an architecture better suited to the resources they already have. That architecture is fog computing.
[21] Amazon Web Services, Inc. (2018, March). AWS Global Infrastructure. Retrieved from https://aws.amazon.com/aboutaws/global-infrastructure/
[22] Ranjit Goswami (2016, September 26). Smartphones in India: how to get 1.25 billion people online. Retrieved from https://theconversation.com/smartphones-in-india-how-to-get-1-25-billion-people-online-65137
[23] Gu Zhang (2017, February 6). Smartphones now account for half the world's mobile connections. Retrieved from https://www.gsmaintelligence.com/research/2017/02/smartphones-now-account-for-half-the-worlds-mobileconnections%20%20/600/
Fog computing enables computing power to be accessed on a continuum of computers between a device and the cloud, so data can be processed closer to where it is produced. Where the smart thermostat in a cloud infrastructure must send data all the way to the cloud and back for processing, the same device in a fog architecture could leverage processing power on a computer in the next room, decreasing the distance data has to travel and reducing bandwidth strain by limiting the total quantity of data traveling to the cloud. With billions of devices already in circulation, developing markets have the dual benefit of not being tethered to a cloud-centric architecture and being flush with fog-ready devices such as mobile phones and laptops.

Fog computing solves the cloud's fundamental distance problem. Cloud data centers are built where property and utilities are cheap [24] rather than in population centers, so dense metropolitan areas that need the most processing power often have to send data the furthest to reach it. Conversely, fog seeks to use existing devices to process data closer to where it is produced, and the more populated an area is, with both people and devices, the more computers with spare processing power there are likely to be. The latency and bandwidth benefits of a fog architecture could enable developing markets to excel quickly in industries that depend on fast access to processing power, like IoT or AI.

Perhaps even more promising, though, are the economic implications of fog computing in the developing world. In a literal sense, the direction and distance data travels for processing represents where money is flowing in a given architecture. Through fog computing, developing markets have the opportunity to establish an architecture where capital flows between participants in dynamic local economies instead of outward to singular data center owners (cloud providers).
For example, fog computing software such as ActiveAether enables virtually any computer to process data and host web services just like the computers in Amazon's cloud. Computer owners list available resources on a distributed network, and software developers write services that leverage the network just as they would write a service to leverage the cloud. When there is demand for a service, ActiveAether finds suitable hosts on which to spin up the service, based on criteria that can include geography and cost. When demand ceases, hosts are compensated for their contribution and their resources become available again. Thus, an IoT start-up in a developing region would effectively rent its compute power from local business owners and peers with laptops or smartphones, instead of sending cash outward to AWS or Azure. By enabling any computer to act as a host provider, fog computing architectures circulate cash flow through local economies and establish a passive income stream for billions of device owners in developing markets. Moreover, with far more host providers than are present in a cloud-centric infrastructure, competition should reduce the cost of compute power, removing what can be a prohibitive expense for up-and-coming tech start-ups even in developed markets [25].
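The matching step can be pictured with a small sketch. The host names, fields and scoring rule below are hypothetical illustrations, not the actual ActiveAether API:

```python
from dataclasses import dataclass

# Hypothetical sketch of fog host selection by latency and price.
# All names, fields and weights are invented for illustration.

@dataclass
class Host:
    name: str
    latency_ms: float      # measured from the requesting device
    price_per_hour: float  # host's advertised rental rate

def pick_host(hosts, max_latency_ms, weight_latency=1.0, weight_price=1.0):
    """Pick the best sufficiently-close host by a weighted score."""
    eligible = [h for h in hosts if h.latency_ms <= max_latency_ms]
    if not eligible:
        return None  # no local host qualifies; fall back to the cloud
    return min(eligible,
               key=lambda h: weight_latency * h.latency_ms
                           + weight_price * h.price_per_hour)

hosts = [
    Host("laptop-next-door", latency_ms=4, price_per_hour=0.02),
    Host("phone-downtown",   latency_ms=18, price_per_hour=0.01),
    Host("cloud-region",     latency_ms=170, price_per_hour=0.05),
]
print(pick_host(hosts, max_latency_ms=50).name)
```

Under this toy scoring, a nearby laptop wins the job over a cheaper but slower phone, which mirrors the local-economy flow described above: the rental fee goes to a neighbor rather than a distant data center.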
[24] Ellen Rubin (2017, June 2). Latency, the Hidden Killer of Cloud Adoption. Retrieved from https://www.clearskydata.com/blog/latency-the-hidden-killer-of-cloud-adoption
[25] Stephen Ezell & Robert D. Atkinson (2016, April 28). The Vital Importance of High-Performance Computing to U.S. Competitiveness. Retrieved from https://itif.org/publications/2016/04/28/vital-importance-high-performance-computing-uscompetitiveness
This wouldn't be the first time the developing world has leapfrogged technology that had begun to bog down developed markets. In Africa, for example, while only 2% of homes have landlines compared to two-thirds in the U.S., cell phone and/or smartphone ownership in South Africa and Nigeria has already reached the U.S. level of 90%. In Ghana and Kenya, 84% and 83% of adults, respectively, own a smartphone or cell phone [26]. By leapfrogging landlines to embrace mobile technology, many African regions have avoided an expensive, construction-heavy step in technological advancement, enabling them to move easily to a more dynamic alternative. Fog computing is a more location-agnostic, modern computing infrastructure than the cloud, better suited to a global technical climate in which over a million new connected "things" are plugged in every hour. By seizing this opportunity to implement fog computing, developing markets can skip the cloud's infrastructural burdens, establish a lower-latency architecture for connectivity, create a passive revenue stream for billions of device owners, and stimulate entire local economies through the new "computing power rental market."

About AetherWorks
ActiveAether is a serverless fog computing platform that leverages untapped processing power on devices across the globe to provide compute on demand for software developers. ActiveAether's patented orchestration service deploys and un-deploys software services automatically, matching service demand with optimal Hosts based on criteria like low latency, performance and cost. ActiveAether is powered by the FogCoin cryptocurrency, which equips computer owners worldwide to rent out idle processing power for profit as Host Providers.
[26] Pew Research Center (2015, April 15). Cell Phones in Africa: Communication Lifeline. Retrieved from http://www.pewglobal.org/2015/04/15/cell-phones-in-africa-communication-lifeline/
Use Cases
Fog Computing Enables Smart Firefighting
Contributed by: Wayne State University

Firefighting is a dangerous job that requires split-second decisions based on immediately available information. Smart firefighting refers to the process of collecting and quickly analyzing onsite information before distributing it to fire responders. Technologies such as communications, sensors and drones are used to gather massive amounts of real-time scene data. That data is then fed to advanced machine-learning-based, large-scale data and video analytics that transform the gathered field data into useful information and insights for fire responders.

For smart firefighting, timeliness and accuracy are the two foremost system requirements. The more real-time, accurate data the fire responders have, such as each firefighter's location and physiological condition, the building floor plan, the locations of hazards, and the number and location of trapped occupants, the higher the probability of saving lives, ensuring firefighter safety and limiting fire damage. To obtain more accurate results, deep learning algorithms are widely used for large-scale data and video analytics. These algorithms require not only powerful computing chips but also large storage capacity, which is impractical to deploy on the incident commander's tablet or on a personal computer on the firetruck. Internet-based cloud computing is an obvious candidate for overcoming the resource limitations at the fire scene. Indeed, many firefighting-related systems and ongoing projects have proposed using the cloud, such as localization and tracking from TRX Systems' NEON Personnel Tracker [27] and the Precision Location and Mapping System [28]. However, the cloud is Internet-based, and the issue is the extra latency in interacting with a remote cloud. These algorithms usually take the field data as their input.
Therefore, the extra latency problem will become much more severe as IoT-based applications proliferate, generating unprecedented volumes and varieties of data each day. The traditional cloud computing model is clearly not the right fit for a time-crucial firefighting system. On the fireground, each second matters and accurate data is highly desirable. To balance the trade-off between these two metrics, we envision applying a new computing model, fog computing, to the time-sensitive firefighting field. Fog computing is an emerging architecture that performs complex data processing and analysis in the proximity of the data source instead of on a remote cloud server. This improves system response time, conserves network bandwidth, and can potentially address security concerns and privacy exposure.

Latency-sensitive applications on the fireground
Below we discuss several latency-sensitive, computationally demanding applications that are well suited to fog computing.
[27] TRX Systems (2016). NEON Personnel Tracker. Retrieved from http://www.trxsystems.com/personneltracker.html
[28] Bob Scannell (n.d.). Sensor Fusion Approach to Precision Location and Tracking for First Responders. Retrieved from http://www.electronicdesign.com/embedded/sensor-fusion-approach-first-responder-precision-locationtracking
Hazard detection and occupant counting: This is an essential part of real-time situational awareness. Fires are unpredictable and can result in severe loss of both life and property if not well controlled. Smart firefighting will rely on video analytics to distill invaluable information from body cameras and surveillance cameras on the fireground. For example, while the rescue team is on a search mission, correctly detecting flashover and toxic gas and then quickly broadcasting the location to all firefighters is extremely important to allow them to avoid a risky area. Hazards such as a fallen ceiling, a wall collapse, or even a chemical gas emission could also be identified. In addition, images and video from surveillance cameras are good sources for discovering how many occupants are trapped and where they are. However, these video and image analytics require large compute power, and extensive video data transmission over the Internet would adversely affect real-time performance.

Intelligent automated safety decision system: The rapid advance of communication and sensing technologies and IoT is giving fire responders access to unprecedented amounts of data. Beyond the fireground data, some of this data can come from the Internet as well, such as demographic reports, building blueprints and public social media postings. As a result, the data can be overwhelming, risking distracting the fire responders and even causing them to make wrong or unsafe decisions. Fog computing can make the IoT data actionable and useful. It uses machine learning and artificial intelligence to analyze the flood of data, turn it into actionable knowledge and core insights, and ultimately form real-time recommendations for the fire responders.
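As a toy illustration of such a recommendation engine, the sketch below fuses a few fireground readings into one prioritized alert. The thresholds and field names are invented assumptions, not real safety guidelines:

```python
# Toy sketch of an automated safety recommendation: fuse a few
# fireground readings into a single prioritized alert, checked in
# order of severity. All thresholds and field names are hypothetical.

def recommend(readings: dict) -> str:
    if readings.get("toxic_gas_ppm", 0) > 50:
        return "EVACUATE: toxic gas above threshold"
    if readings.get("temp_c", 0) > 300:
        return "WARN: flashover risk, temperature rising"
    if readings.get("heart_rate_bpm", 0) > 180:
        return "RECALL: firefighter physiological overload"
    return "OK: continue mission"

print(recommend({"temp_c": 320, "heart_rate_bpm": 150}))
```

A real system would replace these fixed rules with learned models, but the shape is the same: many raw streams in, one actionable recommendation out, computed close enough to the scene to matter.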
Imagine the next generation of firefighters: wearable sensors and devices built into the firefighters' uniforms sense their position, health condition, the presence of dangerous chemical gases, environmental heat and much more; drones capture aerial imagery of the fireground; robots with cameras and sensors enter dangerous areas to see how much debris is in the way and gauge other important environmental parameters such as heat and smoke density. With this data as its input, the automated intelligent safety decision system can help the firefighter find safe exits or a nearby propane tank, warn when surrounding temperatures are rising, and even estimate the probability of an explosion. AI-based research for fire responders is still at an early stage. So far there is only one ongoing project, NASA JPL's AUDREY [29], which is exploring the application of artificial intelligence to help first responders make safe, split-second decisions in dangerous situations. AUDREY is clearly highly time-sensitive, but its AI-based large-scale data analysis and reasoning mean that it can only deploy its core computation on a remote cloud, which extends the system response time.

Fireground 3D modeling: This will enable the incident commander to see situational information in 3D space. For example, coupled with a localization system, it can display a firefighter's position in a more meaningful way (e.g., on which floor and in which room). Unfortunately, 3D structure modeling requires many input parameters, such as the building's height, shape, number of floors and interior floor plan, which are unavailable without prior knowledge of the building. The ongoing project FAST proposes to learn all of these building parameters from aerial images taken by a drone and from the
[29] AUDREY Fact Sheet (2016, August). Assistant for Understanding Data through Reasoning, Extraction and Synthesis (AUDREY). Retrieved from https://www.dhs.gov/sites/default/files/publications/Audrey2-fact-sheet-508.pdf
open data portal for city construction. However, 3D structure modeling is not lightweight, and is best deployed on devices with rich compute power.

Fog computing-enabled smart firefighting
The architecture of fog computing-enabled smart firefighting is illustrated in Figure 12. The system consists of the fireground sensing components, the routing network and a remote cloud server. For the first two elements, many different types of devices have the potential to execute a given workload. The cloud server is used to store large-scale data for future historical analysis and to conduct offline model training and other latency-tolerant applications. Figure 13 shows currently available edge devices and fog nodes for firefighting and their corresponding distances from the field data source. The objective of fog-enabled smart firefighting is to harness all computing resources close to the data sources, including the original data producers, the field edge devices on the fire vehicle and the routing nodes/mobile base stations. Computational tasks requiring immediate reactions are deployed on either the fog nodes or the field edge devices, physically close to the data source.
Figure 12: The architecture of fog computing-enabled smart firefighting.
Figure 13: Involved devices in the smart firefighting system.
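The placement rule just described (run latency-critical tasks near the source and latency-tolerant ones in the cloud) can be sketched as follows; the per-tier latency figures are illustrative assumptions:

```python
# Sketch of deadline-driven task placement across the fog continuum.
# Latency-critical workloads land on field edge devices or fog nodes;
# latency-tolerant ones go to the cloud. Tier latencies are assumed
# round-trip figures for illustration only.

TIER_LATENCY_MS = {"field_edge": 5, "fog_node": 20, "cloud": 175}

def place(task_deadline_ms: float) -> str:
    """Choose the farthest (most capable) tier that still meets the deadline."""
    for tier in ("cloud", "fog_node", "field_edge"):
        if TIER_LATENCY_MS[tier] <= task_deadline_ms:
            return tier
    raise ValueError("deadline tighter than any tier can serve")

print(place(10))     # e.g., hazard detection with a tight deadline
print(place(1000))   # e.g., historical model training, latency-tolerant
```

Checking from the cloud inward means each task runs as far from the scene as its deadline allows, reserving the scarce field devices for the work that truly cannot wait.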
Field edge devices: The edge devices either worn by the firefighter or newly installed around the fire scene are the field data sources. For example, tiny sensors embedded in the protective gear and other physiological sensors (e.g., the Zephyr BioHarness) continuously track each firefighter's health data, such as heart rate and heart rate variability. The location unit [30] is the source of the firefighter's indoor location data. The infrared body cam and toxic gas sensor can be used to detect hazardous events on the fireground. Existing surveillance cameras can be used to estimate the number of trapped occupants and their locations. Additionally, there is a local centralized data center on the fire vehicle providing the user interface for monitoring and tracking, usually a laptop operated by the incident commander. Most fire vehicles also carry a mobile broadband router that converts broadband cellular signals (i.e., 4G/LTE) to Wi-Fi and creates a local Wi-Fi hotspot. Compared with the field sensors, these vehicle devices have much more computation and storage capacity. Together they comprise the field computing center in Figure 13.

Fog nodes: These devices include routers, base stations and switches, as well as their corresponding capacity-added nodes for storage or computation. Besides routing traditional Internet traffic, in the fog computing model they are also responsible for running complex data analysis algorithms for time-sensitive applications.

Cloud server: In fog computing-enabled smart firefighting, field data used for historical analysis is transmitted to the cloud center when an Internet connection is available. Less time-sensitive applications, such as dynamic model training, can be deployed on the cloud server as well.

Wayne State University is a premier university in the heart of Detroit.
[30] FAST: Firefighting AssistantSysTem (n.d.). Retrieved from http://mist.cs.wayne.edu/EdgeCOPS/index.html-FAST
Improving the Convenience of Carsharing
Contributed by: The OpenFog Japan Regional Committee

Note: In Japan, carsharing is legally handled in the same way as rental cars.

Carsharing is a convenient service that matches car usage to lifestyles, enabling usage whenever needed and for only the required amount of time. However, shared cars must be returned to a specified place at a specified time, and the user-friendliness of such services needs improvement. Fog computing can improve the convenience of carsharing.

We currently live in a car society. Problems concerning transportation are many and varied, involving city scope, population and household distribution, the establishment of public transport systems, and regional economics. For example, as Japan's society ages, the number of accidents caused by elderly drivers is increasing. There are also other long-standing issues, including congested roads to destinations such as tourist spots and suburban commercial districts, and insufficient on-site parking. Not all of these problems can be solved by advances in car-related technologies (automatic driving, etc.) alone. In addition to this social backdrop, the cost of ownership (parking fees, fuel, etc.), combined with the availability of temporary-use options such as public transport and rental cars, has given rise to an increase in the number of people, particularly in younger age groups, who lead lives without owning a car. Carsharing is a measure focused on solving social problems concerning transportation and creating a sustainable, environmentally friendly car society through the lifestyle shift from car ownership to car sharing.

Expectations and issues concerning carsharing
Carsharing is a service that was created in Europe as a means of complementing public transportation. It is presently expanding in Europe, the United States and Japan(*1). In Japan, there are broadly two types of carsharing providers.
The first type provides services widely to general members; the second provides members with services restricted by region, apartment complex, and other factors. Carsharing users expect easy access in an extremely broad range of locations, such as apartment complexes, coin-operated parking spaces, convenience stores, and large commercial districts, without being limited to rental outlets. They also expect the cost to ultimately be lower than owning a vehicle (cost of ownership, etc.). Here is a summary of the expectations of carsharing users:
• Savings on the costs of ownership (parking spaces, fuel, breakdowns/repairs, etc.) that come with owning a vehicle.
• Cheaper than rental cars and able to be borrowed where desired, even where there are no rental car outlets.
• Can be booked and rented easily using a PC, smartphone, etc.
However, carsharing still has issues. For example, for users to be able to use the car they want at any time, carsharing providers must keep a large volume of vehicles in one location. Costs make this difficult for providers to achieve, which may make it hard for users to obtain a reservation, or the car that they want may already be loaned out. Many of these issues can be resolved by implementing fog computing.

Characteristics and benefits of fog computing
The current use of cloud computing can present hurdles in sharing and utilizing information and data, and in achieving low latency in the acquisition, execution, and processing of large volumes of data. The following pages explain how fog computing can solve these types of problems. Furthermore, protecting large amounts of information and data is a mandatory condition, and fog computing can be valuable in this respect as well. Fog computing is a system-level horizontal architecture. It is well suited to autonomous and cooperative operation, and it distributes computing, storage, control, and networking services/resources along the continuum of systems from the cloud to the edge (close to the actual data sources). Fog computing is based on open architecture and, broadly speaking, offers the following benefits (SCALE):
• Security: Enhances security and guarantees safe and secure transactions through distributed processing
• Cognition: Operates autonomously by understanding the intentions of customers and surrounding conditions
• Agility: Operates on shared infrastructure, allowing swift provision of innovation and economical scaling
• Latency: Implements local processing in real time and controls cyber-physical systems (CPS)
• Efficiency: Aggregates unused resources from connected end devices locally and dynamically and uses them efficiently
Each of these characteristics of fog computing can benefit carsharing services. For example:
• Appropriate and rigorously processed protection of customer information, security risk diversification, and improved service continuity via fault isolation
• Services tailored to customer circumstances and the surrounding environment (traffic congestion, etc.) are autonomously determined, and the appropriate services are selected
• Expanded service provision area and scope, and swift support for the simultaneous introduction of new services
• Changes in the circumstances of customers using the service and urgent/emergency situations are handled rapidly
• Efficient use of the processing power of in-vehicle computing devices allows economical services (reduced communications costs)
As carsharing systems develop and grow in scope, the implementation of fog computing will allow the provision of better services than cloud computing, as explained below.
Carsharing use case scenarios with fog computing solutions

For the following use case examples, we consider two types of carsharing customers:
• User A: User A is a member of Company A, which provides a carsharing service that can be used at any desired time simply by booking from a smartphone or PC. User A is a special-tier member who is always considerate of other people and of the vehicle.
• User B: User B is a normal member of carsharing service provider Company B, which is affiliated with Company A, and is not fully confident behind the wheel.
Companies A and B both provide a one-way (drop-off) service. We have divided the carsharing experience into the following three use case scenarios:
• Scenario 1: Renting a shared car: User A rents a car from Company A.
• Scenario 2: A shared car is relayed between users: A shared car is relayed from User A to User B.
• Scenario 3: An accident occurs: User B causes an accident.
Figure 14: Scenarios 1, 2 and 3.
Figure 15: Scenario 1.
Scenario 1: Renting a shared car

In Scenario 1, User A uses Company A's smartphone app to book a car and goes to the nearby parking lot, where the car is to be parked, before the booked time. However, User A is then contacted by Company A's support desk and informed that, because the return of the booked car has been delayed, the user will, as a service for paying members, be granted preferential allotment of a Company B vehicle. With no other option, User A accepts the change and finds Company B's car by looking for a banner. After unlocking the door by holding up a membership card, User A checks the engine compartment, fuel, and vehicle body before getting in and driving off. As soon as User A starts to drive, the in-car drive recorder starts recording video in case of rule violations, accidents, etc. In addition, Company A's system starts charging a fee when the user enters the vehicle. User A is scheduled to return the car to a parking lot near a train station at the destination within the time originally booked.

Now assume that you are User A. Because the current user of your booked car is returning it late, you have to take the trouble of going to a different parking lot than the one in the original booking. On top of that, you are forced to rent a model you did not want. Would you be satisfied? You might not use the service a second time. This situation is one of the dissatisfactions users report with the current carsharing market. Furthermore, if User A had not joined the service for paying members, he/she would not have been provided with a replacement vehicle at all and might have had to simply wait for the car in question to be returned. Because cars change hands so quickly, the current vehicle situation (for example: which car is where right now? how long until the car is returned?) is always in a state of flux.
Fog computing enables fog nodes to interact between cars in order to collect the constantly changing data on the current vehicle situation in real time, with no delay. In addition, fog computing can provide a replacement service by autonomously detecting the possibility of a late return in advance and instantly contacting another carsharing service provider when a delay is anticipated. Using fog nodes, it is possible to immediately search for a replacement vehicle that matches the user's desired conditions and provide it at a parking lot that is as close by and easily accessible as possible.

There is no need to build an ideal system that fulfills all of these requirements from the ground up. Individual carsharing service providers can use fog computing solely to manage the cars in their own fleet, or to cooperate with other carsharing service providers. Because fog computing is based on the OpenFog Reference Architecture, its interconnectivity and scalability also allow cooperative relationships with other providers to be adjusted flexibly, even as the number of vehicles owned fluctuates. By updating the connections between autonomously operating fog nodes as appropriate, the system can be kept optimal for the prevailing physical and business environment.
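As a minimal sketch of this kind of replacement search (the class, field names, and selection policy are illustrative assumptions, not from the source; a real fog node would query live fleet data pooled from affiliated providers rather than an in-memory list):

```python
from dataclasses import dataclass

@dataclass
class Car:
    provider: str
    model: str
    lot_distance_km: float  # distance from the user's original parking lot
    available: bool

def find_replacement(cars, desired_model, max_distance_km=2.0):
    """Pick the nearest available car, preferring the user's desired model."""
    candidates = [c for c in cars
                  if c.available and c.lot_distance_km <= max_distance_km]
    if not candidates:
        return None  # nothing within reach; escalate to other providers
    exact = [c for c in candidates if c.model == desired_model]
    pool = exact if exact else candidates  # fall back to any nearby car
    return min(pool, key=lambda c: c.lot_distance_km)
```

Running such a selection on a fog node at the local branch, over data shared by partner branches in the same area, is what avoids the round trips between providers' clouds described above.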
Figure 16: Scenario 2.
Scenario 2: A shared car is relayed between users

User A arrives at the parking lot at the destination within the booked time and carries out the return procedures. Fees are calculated based on the time between entering and exiting the vehicle and are automatically charged to the registered credit card the following month. As soon as the return procedures are completed, Company B sends an email to User B, who is on standby for a vehicle in the desired area, informing the user that a vehicle is available. User B goes to the nearby parking lot where the car is parked and finds the car by looking for a banner. User B unlocks the door by holding up a membership card and begins driving. As soon as User B starts to drive, the in-car drive recorder begins recording video in case of rule violations, accidents, etc. In addition, Company B starts charging a fee when the user enters the vehicle.

In this scenario, the renting and return processes could have been performed more smoothly. If you were User B, on standby for the vehicle, would you have waited if User A had spent a long time on the return process? If a car cannot be provided immediately when needed, and the latest provision schedule cannot be made available either, User B may well select a different means of transport. The inability to obtain a reliable schedule is a major complaint, easily imagined by anyone who has used public transportation. Without a doubt, this is a lost business opportunity.

Fog computing can monitor the usage situation in real time, allowing the latest information to be provided constantly. The fees of both User A and User B could also be recalculated based on the difference from the originally contracted time. Furthermore, ascertaining User A's situation in real time allows User B, who is on standby, to be instantly notified of User A's estimated and completed drop-off times. Constantly monitoring car usage through the interaction between fog nodes, and giving guidance on relay locations, makes it possible to guide users so that the drop-off time of the current user and the pickup time of the next (standby) user are synchronized. This also allows proposals for changing the car or parking lot, and enables users to be accurately guided to the pickup location after a change.

In summary, fog computing gives carsharing services the ability to offer users the most suitable car at the time they want it, by providing real-time data on car availability, relay procedures, and schedule changes. These features will lead to a higher degree of customer satisfaction with the service. This high-precision monitoring need not run constantly: the required monitoring and reporting tasks are dispatched to fog nodes as necessary, allowing network and computing resources to be used efficiently.
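The fee recalculation mentioned above could look something like the following sketch, which bills each user for actual usage measured against the contracted slot (the rates, the late-penalty policy, and the cents-based units are illustrative assumptions):

```python
def recalc_fees(contracted_min, actual_min, rate_cents=30, late_penalty_cents=10):
    """Recalculate a fee from actual entry/exit times rather than the booked slot.

    Minutes within the contracted time are billed at the normal per-minute rate;
    minutes beyond it also incur a late penalty. Unused contracted minutes are
    not charged. Returns the total in cents."""
    billed = min(actual_min, contracted_min) * rate_cents
    overrun = max(0, actual_min - contracted_min)
    billed += overrun * (rate_cents + late_penalty_cents)
    return billed
```

Because fog nodes see entry and exit events as they happen, a recalculation like this can run at drop-off time rather than in a nightly cloud batch.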
Figure 17: Scenario 3.
Scenario 3: An accident occurs

User B causes a traffic accident while driving. User B takes measures to alert those nearby that an accident has occurred and moves the vehicle to a safe location to prevent a secondary accident or traffic congestion. If there are injured parties, User B must assist them and call an ambulance. User B contacts the police, explains the circumstances of the accident, and then follows the instructions of the police. User B also contacts Company B and reports the accident circumstances. User B must contact an insurance company if required and, depending on the severity of the accident, arrange a tow truck. The tow truck takes the car to a repair facility, and the repaired car is returned to the original parking lot. As part of the compensation, User B may be required to pay Company B a non-operation charge.

Driving a car always carries the risk of an accident. Take a moment to consider the situation of a user who has inadvertently caused an accident, and the psychological pressure that he/she is under in such an emergency. A user who causes an accident must act alone, hurriedly contacting the relevant parties for the variety of reports and arrangements required. In this scenario, there are at least six tasks that the user must carry out unaided. In such a situation, fog computing makes it easy to offer a service in which all of these tasks are performed autonomously, without the user having to contact anyone. Autonomous contact and processing of this kind would be far quicker than a person performing the actions, and would be indispensable when racing against time, such as when a life is on the line.

Fog computing offers constant monitoring and real-time response and processing through the interaction between fog nodes. Even in a car moving at high speed, the intelligence of the fog nodes constantly captures information on the surrounding environment and changing situations with high precision. As a result, tasks that were traditionally performed by humans, such as accident analysis, can be minimized, and conditions can be ascertained in real time even during high-speed driving, supporting analysis of the accident scene.

Cyber security mechanisms based on the OpenFog Reference Architecture also offer the appropriate controls and management of the private information required for accident processing. Detailed information on the car, the driver, and the driver's license must be provided to the police. Medical facilities usually need detailed information on the driver's condition, but not on the driver's license. For carsharing providers, it may be appropriate to provide only the details of the vehicle's condition and the minimum amount of personal information required to identify the service user.
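This purpose-based data minimization, where the police receive license details but medical facilities do not, could be sketched as a simple disclosure policy on the fog node (all field names, recipients, and values below are illustrative):

```python
# Illustrative accident record assembled by in-car fog nodes.
ACCIDENT_RECORD = {
    "vehicle_condition": "front bumper damage, drivable",
    "driver_condition": "conscious, minor bruising",
    "driver_license": "D1234567",
    "user_id": "member-0042",
    "location": "35.68N, 139.77E",
}

# Which fields each party's purpose justifies receiving (assumed policy).
DISCLOSURE_POLICY = {
    "police": {"vehicle_condition", "driver_condition", "driver_license", "location"},
    "medical": {"driver_condition", "location"},
    "provider": {"vehicle_condition", "user_id", "location"},
}

def disclose(record, recipient):
    """Return only the fields the recipient is entitled to under the policy."""
    allowed = DISCLOSURE_POLICY[recipient]
    return {k: v for k, v in record.items() if k in allowed}
```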
With fog nodes, it is possible to define the scope of autonomous operation according to the purpose for which information is exchanged, enabling the distribution of information to be controlled appropriately.

Solution summary

To summarize, fog computing resolves the overcomplicated system designs associated with traditional centralized technologies through autonomous distributed operation, security, and scalability, thereby facilitating continuous and adaptive system development that tracks the business environment. This was illustrated in the three use case scenarios. In addition, improving the computing power of individual fog nodes is likely to greatly reduce the complicated and time-consuming tasks that have traditionally been performed by humans. Continuously enhancing user support through the design of new and updated fog nodes, and of the mutual connections between them, is likely to raise satisfaction with services and vastly increase business opportunities.
Figure 18: Scenarios 1, 2 and 3 with fog computing.
Scenario 1: Renting a Shared Car
• With fog (local processing and cooperation between fog nodes along the cloud-to-thing continuum): Negotiating an available rental car between carsharing companies can be performed quickly and locally through links between branches (fog nodes) in the respective areas.
• Cloud only (centralized processing of information and instructions): May be time-consuming, as all exchanges between carsharing companies are executed via each other's cloud.

Scenario 2: A Shared Car Is Relayed Between Users
• With fog: Real-time information linkage between fog nodes dynamically links the users and allows a smooth relay at a location convenient for both parties.
• Cloud only: Information on the latest circumstances of each service contract is rarely updated, and relays at locations other than those indicated in advance may lead to service delays.

Scenario 3: An Accident Occurs
• With fog: Local processing on in-car fog nodes means that only the information required to analyze and ascertain the situation is sent to the cloud, enabling a swift accident response.
• Cloud only: With centralized batch processing, network communications and concentrated loads may delay ascertaining the situation and responding, and the increased bandwidth required for communications to the cloud raises service costs.
Figure 19: Summary of Scenarios 1, 2 and 3, before and after implementing fog.
The OpenFog Reference Architecture provides the basis for the following examples of fog node implementation structures in the carsharing use case:
Figure 20: Hierarchical structure of fog nodes in the carsharing use case.
The fog nodes in the above implementation example have the following characteristics:
• There are two types of fog nodes:
o Edge-layer fog nodes mainly collect data and respond in real time.
o Middle-layer fog nodes mainly create information from data and exchange it with one another.
• Fog nodes are virtual systems with network functions.
• Fog nodes consist of virtual machines, virtual networks, and virtual systems with orchestrators.
• Fog nodes act as network infrastructure and include functions for controlling virtual networks such as SD-WAN networks.
• Applications running on fog nodes communicate over these virtual networks.
• Fog nodes have security functions such as authentication, firewalls, and VPNs.
• To operate fog nodes as systems, each service provider requires orchestrator functions.
• Orchestrators recognize the data-processing and communication methods stipulated in agreements and break them down into operating rules for the fog nodes.
• Orchestrators deploy applications and the above-mentioned rules on fog nodes.
• Orchestrators control the connections between fog nodes on the virtual networks.
These characteristics arise whenever systems are built with fog nodes, and they should be understood in order to utilize fog computing effectively.
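The orchestrator's role of breaking an inter-provider agreement down into operating rules and deploying them to edge-layer nodes can be sketched roughly as follows (class names, the shape of the agreement, and the rule format are all illustrative assumptions, not part of the OpenFog specification):

```python
class EdgeFogNode:
    """Edge-layer fog node: collects raw vehicle data and responds in real time."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.rules = {}

    def apply_rules(self, rules):
        self.rules.update(rules)

    def ingest(self, reading):
        # Forward only readings the deployed rules mark as shareable;
        # everything else stays local to the vehicle.
        if reading["type"] in self.rules.get("report_types", set()):
            return reading
        return None

class Orchestrator:
    """Breaks a provider agreement down into operating rules and deploys them."""
    def __init__(self, agreement):
        self.agreement = agreement

    def deploy(self, nodes):
        rules = {"report_types": set(self.agreement["shared_data"])}
        for node in nodes:
            node.apply_rules(rules)
```

In this sketch, an agreement that permits sharing only location data would cause an in-car edge node to forward location readings upward while suppressing, say, drive-recorder video.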
Authors The authors are contributors to the OpenFog Japanese Regional Committee. Hisanobu Sato, ITOCHU Techno-Solutions Corporation Toshihiro Ima, Cisco Systems G.K. Hiroki Suenaga, Internet Initiative Japan Inc. Ushirokawa Akihisa, Takahashi Kazuhiro, NEC Corporation Shoji Temma, Akiko Murakami, Fujitsu Limited Kenji Nomura, Katsushi Sindo, Taiga Yoshida, NTT Communications Corporation Tetsushi Matsuda, Mitsubishi Electric Corporation Niki Agata, ABBALab inc.
Fog Computing eBook entitled: Bridging the Cloud-to-Things Continuum: A collection of perspectives on fog computing. Published by Tech Idea...
Published on Oct 12, 2018