Digitalization is an omnipresent topic in intralogistics and offers numerous opportunities – from increased productivity to new business models.

Worldwide, unplanned downtime alone is estimated to cost up to 56 billion dollars per year. The savings potential for OEMs and end customers is correspondingly large. Machine builders are therefore faced with the challenge of making the availability and condition of their machines transparent.

There are many opinions and approaches to digitalization. Many expect a particularly high benefit in the area of maintenance – the change from time-based to condition-based maintenance increases the efficiency of a plant because components only have to be replaced when necessary.

We often hear ‘condition monitoring’ and ‘predictive maintenance’ in this context – and with them the expectation of immediately reducing the costs of unplanned shutdowns through better visibility into the condition of the components, and of preventing possible plant faults. This is accompanied by the idea of efficiency gains and error prevention. Thinking one step further, application models such as pay-per-use and the resulting business models become possible, for which full digitalization of a system’s condition and, above all, its performance is a basic prerequisite – only then is usage-based billing possible.

Unfortunately, machine builders are often offered only partial solutions; the desired overall digitalization solution usually remains undefined and is left to the customer.

APPROACHES

Lenze supports its customers holistically in this transformation process. The basis is a phase model that shows all the steps necessary for digitalization. The first step is about visualizing data, obtaining consolidated transparency about the installed base and system performance, and highlighting system downtimes or failures. The focus is on the machine or the entire system. Visualizing system performance and balancing system utilization allow significant conclusions to be drawn about the processes and operations of the networked system sections.

In the next step, Lenze supports its customers with digital services and cloud services for all aspects of the machine. By reporting the OEE (Overall Equipment Effectiveness), for example, the availability, throughput and production yield of the machine or plant can be optimized. The data can be compared across machines and plants. Based on this data and on existing domain knowledge, initial models are derived that reduce downtimes via condition monitoring. Based on the installed components, they also allow a precise statement about the general condition of the machine. If a fault occurs frequently in one system but not in another identical networked system, its cause can be identified and eliminated. It is clear that a high level of transparency and a sufficiently high degree of domain knowledge are essential for networking the system. The final step is to generate predictive models, which independently point out abnormalities that would lead to a possible plant shutdown. The automotive industry is at the forefront of these innovations.
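As a minimal sketch of the OEE idea described above, the three classical factors – availability, performance and quality – multiply into a single score. All figures here are illustrative assumptions, not Lenze data:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    availability = run_time / planned_time                    # share of planned time actually running
    performance = ideal_cycle_time * total_count / run_time   # speed relative to the ideal cycle
    quality = good_count / total_count                        # share of good parts
    return availability * performance * quality

# Illustrative shift: 480 min planned, 432 min run, 1 min ideal cycle,
# 410 parts produced, 405 of them good
score = oee(480, 432, 1.0, 410, 405)  # 0.84375
```

Note that the three factors telescope: the score reduces to good parts times ideal cycle time over planned time, which is why OEE is robust against how the intermediate losses are attributed.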

DESCRIPTION OR PREDICTION?

Predictive maintenance is the prediction of events or of their probability. Condition monitoring is a preliminary stage that enables a more in-depth description of the current condition from the interpretation of existing data. This requires a deep understanding of machines and processes in order to generate meaningful information from raw data. Analyses based on machine learning (ML) and AI can help to detect anomalies quickly.
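A very simple illustration of such anomaly detection – here a plain z-score test rather than a trained ML model, purely as a sketch – flags readings that deviate strongly from the mean:

```python
import statistics

def detect_anomalies(values, threshold=3.0):
    """Return indices of values deviating more than `threshold` standard
    deviations from the mean (a crude stand-in for ML-based detection)."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * stdev]

# 20 normal readings and one outlier at the end
readings = [1.0] * 20 + [10.0]
anomalies = detect_anomalies(readings)  # [20]
```

Real condition data is rarely this well-behaved, which is exactly why the domain knowledge mentioned above is needed to separate genuine anomalies from normal process variation.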

Lenze is working intensively on both topics and has demonstrated a model-based and a data-based approach. The model-based approach compares the acquired data with a mathematical model of the application and interprets the detected deviations from this previously defined model. The data-based approach uses a neural network – colloquially, artificial intelligence – and learns the machine behaviour independently. The recorded values are then interpreted against the self-learned properties, so that several factors can be combined into a single behavioural picture.
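To make the distinction concrete, a model-based check might look like the following sketch. The simple motor-current model I = T / k_t, the torque constant k_t and the tolerance are hypothetical values chosen for illustration, not Lenze parameters:

```python
def model_based_check(measured_current, load_torque, k_t=0.5, tolerance=0.15):
    """Flag a deviation when the measured current differs from the current
    predicted by a simple torque model (I = T / k_t) by more than `tolerance`."""
    expected_current = load_torque / k_t
    deviation = abs(measured_current - expected_current) / expected_current
    return deviation > tolerance

# Nominal behaviour: a 1.0 Nm load should draw about 2.0 A
ok = model_based_check(2.05, 1.0)    # False – within tolerance
worn = model_based_check(2.6, 1.0)   # True – 30 % above the model prediction
```

The data-based approach would replace the fixed formula with properties learned from the machine's own history, at the price of needing training data and more computing power.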

Increased motor speed, reduced current consumption and a longer conveying time between the light barriers together indicate slippage or wear of the conveyor belt on the drive drum. However, this approach requires considerably more computing power, which means that processing currently still has to take place in a higher-level controller or in the cloud; the edge controller is used for local data compression. Moore’s Law will drastically increase the technological possibilities within a very short time and lead to more sophisticated models and processing capabilities in the control system and the frequency inverter.
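The three indicators mentioned – higher speed, lower current, longer conveying time – could be combined into a simple rule. The baseline values and tolerance below are made up for illustration:

```python
def slippage_suspected(speed_rpm, current_a, transit_s, baseline, rel_tol=0.1):
    """Suspect belt slippage when, relative to a known-good baseline
    (speed, current, transit time), speed and transit time are up while
    current draw is down by more than `rel_tol`."""
    base_speed, base_current, base_transit = baseline
    return (speed_rpm > base_speed * (1 + rel_tol)
            and current_a < base_current * (1 - rel_tol)
            and transit_s > base_transit * (1 + rel_tol))

baseline = (1500.0, 4.0, 2.0)  # rpm, amps, seconds (illustrative)
slipping = slippage_suspected(1700.0, 3.2, 2.4, baseline)  # True
```

Requiring all three indicators to trip at once is what keeps such a rule robust: any single signal could drift for harmless reasons.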

AUTOMOTIVE INDUSTRY

In the first expansion stage, the projects already being implemented each use data from around 1,000 drive packages distributed across several plants. Pure data access already makes it possible to compare data between the plants and detect deviations. The data is stored and evaluated locally within the company’s own network. In this way, initial experience can be gained with data handling, data volume, and the processing and analysis of the information.

The system has an open design and distributes the load across several edge controllers, which communicate upwards with the higher-level data servers or data lake via MQTT and downwards via OPC UA. In addition to the actual Ethernet-based fieldbus connecting its own components, this also enables external third-party components to be connected to the system, which remains scalable upwards via the number of edge controllers. This makes the knowledge gained interesting for large installations such as baggage conveyor systems at airports or fully automated warehouses, where it is not uncommon to find more than 10,000 drive packages and components from a wide range of suppliers in use. The edge controllers make it possible to gain experience with data pre-processing, compression, real-time behaviour, integration of third-party components and connection to the data lake.
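Local data compression on the edge controller could, in the simplest case, mean reducing a window of raw samples to summary statistics before publishing them upwards as a small JSON payload. The field names and sample values here are made up for illustration:

```python
import json
import statistics

def compress_window(samples):
    """Reduce a window of raw drive samples (e.g. current draw) to summary
    statistics, shrinking the payload sent upwards via MQTT."""
    return {
        "n": len(samples),
        "mean": round(statistics.fmean(samples), 3),
        "min": min(samples),
        "max": max(samples),
    }

# One window of current readings, condensed to a few bytes
payload = json.dumps(compress_window([4.1, 4.0, 4.3, 4.2]))
```

At 10,000 drive packages, sending summaries instead of raw samples is what keeps the load on the data lake and the network manageable.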

The second step is then about predictive maintenance in practice: applying the algorithms already verified in the laboratory for detecting system anomalies, and verifying the data required for the domain. In the laboratory environment, the processed data already produced good results. It will be interesting to find out how the massive data pre-processing with Fast Fourier Transformation, Kalman filtering or envelope analysis affects the real data volume and the workload in the production environment. An immediate transfer of theoretical and laboratory findings to the practical environment cannot always be guaranteed.
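As an example of the pre-processing mentioned, a Fast Fourier Transformation can reduce a raw vibration signal to its dominant frequency component – a minimal NumPy sketch on a synthetic signal:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a signal via FFT,
    ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # drop the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic vibration: strong 50 Hz component plus a weaker 120 Hz one
fs = 1000  # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
vibration = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
peak = dominant_frequency(vibration, fs)  # 50.0
```

A single peak frequency is a far smaller payload than the raw waveform – which is precisely the trade-off between pre-processing effort and data volume that the field trials are meant to quantify.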

For example, the components installed around the sensors and drive packages can, through their own behaviour, superimpose error patterns that negatively affect the algorithms; how well the data-based model adapts to such influences, and how robust it is against them, will be an interesting finding of this investigation. We know this problem from everyday life, when glasses rattle in the wall cupboard because the refrigerator’s compressor next to it has just increased its cooling capacity.

So are we already in the ‘day after tomorrow’ and are the developed processes already sufficient to ensure efficiency gains and avoid unplanned plant shutdowns? The future remains exciting…