Intelligent Edge Computing: Taking Telemetry to the Next Level
Telemetry already offers high value to field-based assets. And with edge computing, you can significantly enhance telemetry capabilities without increasing cost.
If you have connected assets in the field, then the chances are you are benefiting from telemetry.
Whether you are checking on the fuel level of a remote diesel generator or tracking the location of an industrial supply delivery, the process of recording and transmitting the readings of an instrument is an intrinsic part of the Internet of Things (IoT).
And it has the potential to get much better. While telemetry is deployed in a wide range of sectors, from agriculture to oil and gas, in most situations it suffers from limits on the amount of data it can process and transmit.
For this reason, many industrial telemetry applications have historically been relatively unsophisticated. The most obvious examples are tracking devices, pressure sensors and temperature gauges.
Devices such as these generally send only one or two types of data, and if they are unable to connect they largely cease to have value.
Fortunately, deployments in areas such as transportation and industry have widened telemetry’s use cases and improved the versatility of the technology.
Industrial machinery telemetry systems, for example, can track a host of variables, including flow rates, temperature, pressure and so on.
To this day, though, most telemetry devices are limited to carrying out and transmitting measurements, with little onboard data processing capacity. This results in several downsides:
- Telemetry systems are not good at separating useful and useless data, so you generally need to send the whole lot to the cloud for analysis… or risk missing important signals by trying to shut out most of the noise.
- Either way, the amount of data you can collect via telemetry is likely to be limited by the device’s available bandwidth or by budget constraints, given the high cost of data storage and analysis.
- Telemetry typically collects only basic machine usage information, so the full potential of data from all sensors is lost.
- Telemetry devices are designed to send data in real time, but when a connection is lost you can end up losing the very information you need for monitoring and control.
All of these drawbacks can be overcome with one simple addition: Edge Computing.
By implementing an embedded real-time edge processing technology such as the Crosser Edge Node, you can get your telemetry systems to process much greater amounts of information from all sensors.
Edge computing allows remote devices to carry out functions such as data filtering, aggregation and compression, saving on network bandwidth requirements and cloud computing costs.
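As a minimal sketch of what edge-side filtering and aggregation can look like (illustrative Python only, not the Crosser Edge Node's actual API), a node might apply a deadband filter to drop near-duplicate readings and then transmit one compact summary per window instead of every raw sample:

```python
from statistics import mean

def aggregate_window(readings, deadband=0.1):
    """Filter near-duplicate readings and summarize a window.

    Instead of sending every raw sample to the cloud, keep only
    values that changed by more than `deadband` since the last
    kept sample, then transmit one compact summary per window.
    """
    kept = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > deadband:
            kept.append(value)
            last = value
    return {
        "count": len(kept),
        "min": min(kept),
        "max": max(kept),
        "mean": round(mean(kept), 3),
    }

# One small summary dict replaces many raw samples on the uplink.
summary = aggregate_window([20.0, 20.01, 20.02, 20.5, 21.0, 21.02])
print(summary)  # {'count': 3, 'min': 20.0, 'max': 21.0, 'mean': 20.5}
```

Here six raw temperature samples collapse into a single summary message, which is where the bandwidth and cloud-cost savings come from.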
This, in turn, lets you carry out more measurements per machine, so for example you can go from measuring one or two key performance measures to introducing full condition monitoring on remote machines.
The added intelligence could allow you to introduce anomaly detection processes and predictive maintenance for individual parts of the machine, so you get a full picture of faults as or even before they appear. And monitoring and control can continue even if there is a momentary loss of connection.
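A simple rolling-statistics check illustrates the anomaly detection idea (a hypothetical sketch, not Crosser's implementation): the edge node maintains a window of recent readings as its baseline and flags any value that strays far from it, locally and immediately, even if the uplink happens to be down.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flag readings that deviate sharply from the recent baseline.

    A rolling window of recent values defines "normal"; a new reading
    more than `threshold` standard deviations from the window mean is
    flagged so the edge node can raise an alert as, or before, a fault
    develops. (Illustrative sketch, not a production algorithm.)
    """

    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = AnomalyDetector()
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0]  # final value is a spike
flags = [detector.check(v) for v in stream]
```

Because the detection runs on the device itself, only the alert needs to reach the cloud, not the full raw stream that produced it.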
Last but not least, with edge computing you can introduce encryption and certificate handling to safeguard your telemetry data. This security aspect should not be ignored, as malware increasingly targets IoT devices as well as traditional hosts.
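Transport encryption is normally handled by TLS with device certificates, but as a minimal illustration of safeguarding telemetry payloads themselves, an edge node could sign each message with a shared key so the receiver can detect tampering (a hypothetical sketch; the key handling and field names here are assumptions, and a production system would pair this with TLS and proper key management):

```python
import hashlib
import hmac
import json

def sign_payload(payload: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature so the receiver can detect
    tampered or forged telemetry messages."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": payload, "sig": signature}

def verify_payload(message: dict, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])

key = b"device-shared-secret"  # hypothetical key, provisioned per device
msg = sign_payload({"sensor": "pump-1", "temp": 71.3}, key)
ok = verify_payload(msg, key)
```

A message signed with one key fails verification under any other, so a forged or modified reading is rejected at the receiving end.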
Overall, adding edge computing to telemetry gives you the ability to greatly expand the range of remote functions you can carry out, without having to worry about soaring data volumes. Instead of choosing what data you collect, collect it all and process most of it there and then.
Want to know more? Read about the Crosser Edge Computing solution here →
About the author
Goran Appelquist (Ph.D) | CTO
Göran has 20 years’ experience leading technology teams. He is the lead architect of our end-to-end solution and is extremely focused on securing the lowest possible Total Cost of Ownership for our customers.
“Hidden lifecycle (employee) costs can account for 5-10 times the purchase price of software. Our goal is to offer a solution that automates and removes most of the tasks that are costly over the lifecycle.
My career started in the academic world where I got a PhD in physics by researching large scale data acquisition systems for physics experiments, such as the LHC at CERN. After leaving academia I have been working in several tech startups in different management positions over the last 20 years.
In most of these positions I have stood with one foot in the R&D team and the other in the product/business teams. My passion is learning new technologies, using them to develop innovative products and explaining the solutions to end users, technical or non-technical."