OPC UA is one of the most common interfaces between machines, systems and services in today's industries. OPC UA is a series of specifications that define the interface between clients and servers, as well as between servers, and is known for being both a reliable and secure way of creating seamless communication on the factory floor, enabling real-time access to data, monitoring of alarms and events, and access to historical data and applications.
To leverage the value provided by the installed OPC servers in the industry, Crosser offers a set of (connector) modules to radically simplify integrations and data analytics using OPC data.
These OPC modules, combined with the more than 50 analytics modules and the numerous northbound connectors (cloud, ERP, MES, historians etc.) provided in Crosser’s low-code platform, enable users to build an endless number of intelligent integration and advanced analytics projects for their industry.
Crosser’s set of OPC UA modules
In the Crosser Module Library you can find 5 different modules to work with your OPC data.
OPC UA Subscriber Module
Get data when it changes.
Set up subscriptions against an OPC UA server and receive data whenever it changes. With this module, data is pushed from the OPC server into your flow, where you can act on it.
OPC UA Reader Module
Get data when you want it.
Control when you read the data. The module can be triggered by time, if you want to read data at fixed intervals of seconds or minutes, or by other events depending on the use case. Very useful when you want to control when you read the data and how you use it. In this case the data is pulled into the flow from the OPC server.
OPC UA Writer Module
Send data back to the OPC server.
Connect and send the right data back to the OPC server, to control the machines or take other actions.
OPC UA Events Module
Get events from OPC servers.
Listen to events from the OPC server and take actions based on those events.
OPC UA Browser Module
List available data points.
Browse the address space of your OPC server and find the tags you might be interested in getting data for.
With these 5 modules users can build a wide range of use cases. Below, we have summarized a handful of use-case examples that we meet in the industry:
Use Case 1 - Stream OPC data to the cloud
When you want to send live data from your machines to cloud services, so that you can analyze or present the data as part of your cloud service, the OPC UA Subscriber Module is a good choice. This is a very simple but common use case.
The subscriber module is typically used to get a notification when data changes, so that you can react and send the data to the cloud services. In these cases small packages of data are typically sent, one data change per message, and the volumes are small. Usually no more than a few hundred tags are uploaded.
Before sending this data to the cloud, some data transformation is often required: you can remove data that is not valuable, change the structure of the messages, or rename tags to user-friendly names. Once this is done, you decide which cloud service you want to send the data to (e.g. AWS IoT Publisher, Azure IoT Hub D2C, Thingworx Publisher).
When you set up a flow like this you need to tell the subscriber module which tags or node IDs you want to subscribe to. This is done by providing a list of the Node IDs that are used to reference tags in the OPC server. You can also add metadata to each of the tags as part of that list, such as alternative names, the location of the data, which machine it comes from, etc.
Whatever information that you think is needed to use the data in the cloud can be added before sending it to your designated system or service.
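As a sketch of what such a transformation step could look like, the snippet below renames node IDs to friendly names, attaches metadata, and drops tags that are not valuable. The tag names, node IDs, and message structure are illustrative assumptions, not Crosser's actual message format.

```python
# Hypothetical sketch of the transformation described above: rename OPC node
# IDs to friendly names, attach metadata, and drop unknown tags before the
# message is handed to a cloud connector. All names here are made up.

TAG_METADATA = {
    "ns=2;s=Line1.Temp": {"name": "line1_temperature", "machine": "Line 1", "unit": "C"},
    "ns=2;s=Line1.Pressure": {"name": "line1_pressure", "machine": "Line 1", "unit": "bar"},
}

def transform_for_cloud(message):
    """Turn a raw OPC data-change message into a cloud-friendly payload."""
    meta = TAG_METADATA.get(message["nodeId"])
    if meta is None:
        return None  # tag not in our list: drop it instead of forwarding
    return {
        "tag": meta["name"],
        "machine": meta["machine"],
        "unit": meta["unit"],
        "value": message["value"],
        "timestamp": message["sourceTimestamp"],
    }

raw = {"nodeId": "ns=2;s=Line1.Temp", "value": 72.5,
       "sourceTimestamp": "2024-01-01T12:00:00Z"}
payload = transform_for_cloud(raw)
```

In a Crosser flow this kind of mapping is done with low-code modules rather than hand-written Python; the point is simply that the metadata list drives both the renaming and the enrichment.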
Use Case 2 - Batch Upload OPC data to centralized storage
In this use case, you want to get batches of data that you can store in files to be processed and analyzed. It allows you to handle larger volumes of data.
The OPC UA Reader Module is typically used for this kind of use case, so you have full control of when and how often you read the data. These use cases also often include some data transformation before the data is ready to be written to a file, where you choose the desired output format (CSV, JSON, Parquet etc.) by choosing the corresponding output module. The files are first created and stored in the local environment, which can be on the same machine where you run the Crosser node or somewhere accessible from it.
To support continuously receiving data, new files need to be created, either after a time interval has passed or when the file size has reached a limit.
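The rotation rule just described can be sketched as a single decision function. This is an illustrative sketch, not Crosser's implementation, and the default thresholds are assumptions:

```python
# Illustrative file-rotation rule: start a new file when either a time
# interval has passed or the current file has reached a size limit.
# The defaults (5 minutes, 1 MB) are arbitrary example values.

def should_rotate(opened_at, current_size, now,
                  max_age_s=300, max_size_bytes=1_000_000):
    """Return True when the current batch file should be closed
    and a new one started."""
    return (now - opened_at) >= max_age_s or current_size >= max_size_bytes
```

A flow would evaluate this on every incoming batch of readings and open a fresh file whenever it returns True.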
Complement your batch upload
This use case can be complemented with a second flow if you want to upload batches of data to your cloud. This second flow will then regularly check for new files created by the first part above. Any new files will then be uploaded to your cloud storage. Here you are bypassing the IoT endpoints that are suitable for streaming data and instead you connect directly with cloud storage such as AWS S3 or an Azure Data Lake, among others. As a final step when the files have been successfully uploaded you can then delete the files from the local storage so you just use it as a temporary buffer.
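The logic of that second flow can be sketched as follows, with the upload step left as a stand-in for a cloud-storage connector (AWS S3, Azure Data Lake, etc.). The function name and the success-flag convention are assumptions for illustration:

```python
# Minimal sketch of the second flow: find files in a local buffer directory,
# hand each to an upload function, and delete it only after a successful
# upload so local storage acts as a temporary buffer. Illustrative only.
import os

def drain_buffer(directory, upload):
    """Upload every file in `directory`, deleting each one on success."""
    uploaded = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        if upload(path):          # upload() returns True on success
            os.remove(path)       # keep local storage as a temporary buffer
            uploaded.append(name)
    return uploaded
```

Deleting only after a confirmed upload means a failed transfer leaves the file in place to be retried on the next run.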
Use Case 3 - Closed-loop process optimisation using ML
In these use cases the data is collected from the OPC server, run through a Machine Learning model (using Python ML frameworks, for example), and the output is sent back to the server using the OPC UA Writer Module. The machine learning model will typically not work directly with the data structure we get from the OPC server, so some preparation is needed. For example, you may need to provide a list of the values you are interested in, in a specific order. Then you can feed the data into the model and use the result that comes out.
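That preparation step often amounts to turning a dictionary of tag values into a flat feature vector in the order the model was trained on. A small sketch, where the tag names and feature order are invented for illustration:

```python
# Hedged sketch of the preparation described above: the model expects a flat
# feature vector in a fixed order, while OPC data arrives keyed by tag name.
# Tag names and the feature order are illustrative assumptions.

FEATURE_ORDER = ["temperature", "pressure", "vibration", "rpm"]

def to_feature_vector(opc_values, order=FEATURE_ORDER):
    """Pick out the values the model needs, in the order it expects."""
    return [float(opc_values[tag]) for tag in order]

sample = {"temperature": 71.2, "pressure": 3.1, "vibration": 0.02,
          "rpm": 1480, "operator_note": "ok"}   # extra fields are ignored
# to_feature_vector(sample) -> [71.2, 3.1, 0.02, 1480.0]
```

The resulting list is what you would feed into the model; its prediction is then routed to the OPC UA Writer Module.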
In use cases like this, where you for example use a specific Python Machine Learning framework, that framework needs to be installed in the local Python environment where you decide to run the flow. The same applies to the trained ML model you are using: the model file needs to be available in the local environment where the flow runs. This is all handled automatically by the Crosser tools; when you build the flows you configure the Python module by selecting the libraries that need to be installed and the additional files that have to be available.
When it’s time to deploy the flow, the nodes that receive it will first see that libraries need to be installed into their local environment, and that the ML model must be downloaded into local storage, before flow execution can start.
Read more about Crosser for MLOps here →
Principle of flexibility and simplicity
A basic principle of the Crosser tools is that once you have built a flow, you should be able to deploy it on any node in your setup. They should all behave the same, so you don’t need to know any details about specific nodes; you can treat them as equal. There can of course be performance differences depending on the hardware used to run each node, but from a functional perspective they should be identical. Anyone should be able to work with them without knowing the specifics of each flow and node.
Use case 4 - OPC Events & Alarms
By using the OPC UA Events Module you can listen to events from your OPC server. You can either listen to all events from the server, or you can select one or several sub-trees of your address space. When an event is received you can take actions based on the type of event or the data in those events.
An example could be that you notify someone with a text message, or you create a ticket in a support system. Another more drastic action could be to send data back to the machine and stop it to prevent further damage.
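One common way to pick an action is by the event's severity. OPC UA events carry a severity field, and a dispatch rule could look like the sketch below; the thresholds and action names are hypothetical, and in a real flow each branch would route to a connector module (text message, ticketing system, OPC UA Writer):

```python
# Illustrative event dispatch based on severity. Thresholds and action names
# are made-up examples, not a standard mapping.

def choose_action(event):
    """Map an incoming OPC event to an action name."""
    severity = event.get("severity", 0)
    if severity >= 800:
        return "stop_machine"     # drastic: write back and halt the machine
    if severity >= 500:
        return "create_ticket"    # open a case in a support system
    if severity >= 200:
        return "notify_operator"  # e.g. send a text message
    return "log_only"
```

The same pattern extends to dispatching on event type or on fields in the event payload.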
In the Crosser Library there are plenty of modules you can use to automatically take actions based on the events received from your OPC server.
Use Case 5 - Dynamic tag selection
With the OPC UA Browser Module you can create dynamic tag lists. The Subscriber and Reader modules need lists of the tags or node IDs you want to get data for from your OPC server. These lists can either be static, i.e. part of the module configuration, or they can be generated dynamically as part of the flow.
For example, say you want to use the OPC UA Subscriber Module to stream data up to the cloud. You can then use the OPC UA Browser Module to create a dynamic collection of tags. The set of tags in the server might change over time, and you want to capture these changes automatically. The Browser module will then search a selected subsection of the address space in the OPC server at regular intervals and provide a list of the tags of interest. Once you get the updated list, you use it as input to the Subscriber Module.
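The filtering step between browsing and subscribing can be sketched as below. The browse-result structure, node IDs, and name pattern are assumptions for illustration:

```python
# Sketch of turning a browse result into a dynamic tag list for a subscriber:
# keep only the node IDs whose display name matches a pattern. The structure
# of the browsed entries is an illustrative assumption.
import fnmatch

def select_tags(browsed_nodes, pattern="*Temperature*"):
    """Return the node IDs whose display name matches the pattern."""
    return [n["nodeId"] for n in browsed_nodes
            if fnmatch.fnmatch(n["displayName"], pattern)]

browsed = [
    {"nodeId": "ns=2;s=M1.Temperature", "displayName": "M1 Temperature"},
    {"nodeId": "ns=2;s=M1.Speed", "displayName": "M1 Speed"},
    {"nodeId": "ns=2;s=M2.Temperature", "displayName": "M2 Temperature"},
]
# select_tags(browsed) -> ["ns=2;s=M1.Temperature", "ns=2;s=M2.Temperature"]
```

Re-running the browse and filter on a schedule keeps the subscription list in step with changes in the server's address space.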
Industrial IoT is not about one use case, it is about thousands of use cases combined. And OPC is an important piece of that puzzle, often being the interface towards systems and big data analytics in the cloud.
Crosser has many years of experience in connecting and analyzing OPC data, helping IIoT projects reach their goals and deliver value faster thanks to the simplicity and flexibility offered by the Crosser solution. If you want to learn more about OPC use cases, Machine Learning and cloud integrations, feel free to contact us at any time.
Interested in simplifying integrations and data analytics using OPC data?
Watch the webinar video: Intelligent OPC Integrations for the Factory Floor and IIoT
Or sign up for a Free trial or Schedule a Demo to learn more.
About the author
Goran Appelquist (Ph.D) | CTO
Göran has 20 years of experience in leading technology teams. He’s the lead architect of our end-to-end solution and is extremely focused on securing the lowest possible Total Cost of Ownership for our customers.
“Hidden Lifecycle (employee) costs can account for 5-10 times the purchase price of software. Our goal is to offer a solution that automates and removes most of the tasks that are costly over the lifecycle.
My career started in the academic world where I got a PhD in physics by researching large scale data acquisition systems for physics experiments, such as the LHC at CERN. After leaving academia I have been working in several tech startups in different management positions over the last 20 years.
In most of these positions I have stood with one foot in the R&D team and the other in the product/business teams. My passion is learning new technologies, using them to develop innovative products, and explaining the solutions to end users, technical or non-technical."