Fog Computing Explained
Introduction to Fog Computing
Fog computing, also known as fog networking, is a decentralized computing architecture in which business logic and computing power are distributed in the most logical, efficient place between the things producing data and the cloud. Fog computing essentially extends cloud computing and services to the edge of the network, bringing the advantages and power of the cloud closer to where data is created and acted upon.
The main idea behind Fog computing is to improve efficiency and reduce the amount of data transported to the cloud for processing, analysis and storage. It is also used for security, performance and business-logic reasons.
The Fog Computing architecture is used for applications and services within various industries such as industrial IoT, vehicle networks, smart cities, smart buildings and so forth. The architecture can be applied in almost any things-to-cloud scenario.
The metaphor "fog" originates from the idea of a cloud closer to the ground: a cloud service closer to the data sources. In 2015, Microsoft, Cisco, Intel and a handful of other enterprises formed a joint consortium, the OpenFog Consortium, to push the idea of Fog Computing. The consortium merged with the Industrial Internet Consortium in 2018, as there was significant overlap between the two groups.
The Industrial Internet Consortium is today one of the largest communities spreading knowledge about the benefits and use cases of Fog Computing, Edge Computing and Industrial Internet technologies.
How Fog & Edge computing works
Edge devices, sensors and applications generate an enormous amount of data on a daily basis. The data-producing devices are often too simple, or lack the resources, to perform the necessary analytics or machine-learning tasks themselves; they simply send raw data on to the cloud.
The cloud has the power and capacity to handle these computing tasks. But the cloud is often too far away to process the data and respond in time, and connecting every endpoint directly to the cloud is often not an option. Sending raw data over the internet can have privacy, security and legal implications, on top of the obvious cost impact of bandwidth and cloud services.
In the Fog Computing architecture, processing instead takes place in a smart device close to the source, such as an IoT gateway, a router or an on-premise server. There, the software reduces the amount of data sent to the cloud and takes action according to the business logic applied in the fog node.
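To make this concrete, here is a minimal sketch of the kind of logic a fog node might run. The threshold, field names and batch structure are illustrative assumptions, not part of any specific product: the point is that many raw readings are reduced to one summary record for the cloud, while time-critical business rules fire locally.

```python
from statistics import mean

# Assumed business rule for this sketch: alert if any reading
# exceeds an over-temperature limit (degrees Celsius).
TEMP_THRESHOLD = 80.0

def process_batch(readings):
    """Reduce a batch of raw sensor readings to one summary record.

    Returns the compact summary destined for the cloud, plus a flag
    for immediate local action (the business logic in the fog node).
    """
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(mean(readings), 2),
    }
    alert = summary["max"] > TEMP_THRESHOLD
    return summary, alert

# Five raw readings from field sensors become one record to the cloud.
raw = [71.2, 69.8, 70.5, 84.1, 72.3]
summary, alert = process_batch(raw)
print(summary)
print("local action triggered:", alert)
```

In a real deployment the summary would be published upstream (for example over MQTT or HTTPS) on a schedule, while the alert path acts on the machine without waiting for a cloud round trip.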
What is the difference between Fog and Edge Computing?
Both Fog and Edge Computing are frequently used terms for local processing of sensor data. We at Crosser have decided to use only Edge Computing, to simplify communication and limit confusion for our customers and partners. But is there a difference?
If we look at the three deployment scenarios for local processing (software running on hardware controlled by the company, not part of a cloud service) in the picture below, there are some differences. All three scenarios are Edge (= local processing), whereas the term Fog Computing fits best for the middle scenario, where sensor data from multiple edge units is processed in an aggregation layer on dedicated network equipment such as a router or IoT gateway.
Remote Edge
The Edge Analytics software is deployed on an IoT gateway on a remote unit, or embedded, and processes the sensor data from that single unit.
Field Edge Aggregation
Also called Fog Computing. The Edge Analytics software is typically deployed on an IoT gateway and processes the sensor data from multiple field units.
On-premise Edge
Typically a factory shop floor or a building with multiple machines. The Edge Analytics software is installed on a server or virtual machine and processes sensor data from multiple on-premise machines and data sources.
For this discussion, it is important to note that Edge and Fog Computing complement, rather than replace, cloud computing. Edge and Fog Computing analyze and act on data in motion, while the cloud performs resource-intensive, longer-term analytics on data at rest.
About the author
Johan Jonzon | CMO
CMO & Co-founder
Johan has a 15-year background in marketing across all possible types of projects. A true entrepreneurial spirit, he operates between strategy and hands-on detail, leading our marketing efforts as well as the product UI design.
Sales- and market-oriented, with a focus on getting the job done, he has worked with web and communication in Sweden and internationally since 1999. Since 2012, Johan has been focusing on real-time communication, and the business and operational benefits that come with analyzing streaming data close to the data sources.
"I want everything we do to be clean, simple and very, very user-friendly. We strive to be the clear leader in usability among our peers."