Predictive Maintenance System

11/04/2024 – In recent developments within the Dig_IT project, Task 5.5, led by CORE, has made significant strides in implementing a predictive maintenance system. The task aims to accurately assess the health of assets and predict their future states in near-real time, enabling early detection of potential failures through anomaly detection.

By implementing anomaly detection models tailored to specific asset types and data sources (to ensure input consistency and model adaptability), and by installing edge devices where applicable to guarantee adequate data collection, the task aims to significantly enhance operational efficiency, reduce downtime, and preemptively address maintenance needs, ensuring the longevity, reliability, and availability of critical machinery in the field.

CORE, in collaboration with Marini Marmi, Tampere University, Titania, and Kemi, and with Marini (milling machine), Titania (heavy trucks), and Kemi (heavy trucks) as end users, has obtained strong results that are visualized in the Decision Support System.

In Task 5.5, partners focused on three distinct Use Cases involving the Marini, Kemi, and Titania mines, each presenting its own set of assets and requiring an approach that, while uniform in methodology, was customized to fit each scenario.

A key achievement of this task was the data collection phase, which involved gathering data from diverse sources. This included sensor-monitored operational parameters from trucks in the Kemi mine, such as engine speed, throttle position, and diesel fuel consumption, as well as digitally simulated data from Titania's truck digital twin developed within the project.

For the Marini Use Case, we proposed and installed edge devices, included in the task's scope, to monitor the asset's operation. Placed on critical parts of the Gangsaw milling machine, the edge devices' sensors measure the ambient temperature and the vibrations along the three axes (x, y, z) of the different components. The collected data was subjected to an extensive exploratory data analysis (EDA) to identify trends, patterns, outliers, and anomalies, offering insights into the intrinsic qualities of the data.
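An EDA pass of this kind can be sketched as follows; the column names and synthetic readings below are illustrative assumptions, not the project's actual schema, and interquartile-range (IQR) fences stand in for whichever outlier criterion the task actually used:

```python
import numpy as np
import pandas as pd

# Hypothetical sample of edge-device readings: ambient temperature and
# tri-axial vibration. Column names and distributions are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temp_c": rng.normal(22.0, 1.5, 500),
    "vib_x": rng.normal(0.0, 0.2, 500),
    "vib_y": rng.normal(0.0, 0.2, 500),
    "vib_z": rng.normal(0.0, 0.2, 500),
})

def iqr_outliers(series: pd.Series, k: float = 1.5) -> pd.Series:
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (series < q1 - k * iqr) | (series > q3 + k * iqr)

summary = df.describe()                 # central tendency and spread per signal
outlier_mask = df.apply(iqr_outliers)   # per-column boolean outlier flags
print(summary.loc[["mean", "std"]])
print("outlier counts:", outlier_mask.sum().to_dict())
```

In practice this summary would be complemented by time-series plots and spectral views of the vibration channels to surface trends the tabular statistics cannot show.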

After this preliminary examination, we proceeded to a data preprocessing stage designed to rectify issues such as missing values and outliers, as well as to normalize the data. Using domain knowledge from field operations, we cleansed the datasets of anomalies. Since faults and damages are not always known or labeled in advance, we constructed unsupervised datasets containing no known anomalies. This process ensured a structured data form, free of anomalies that could bias the anomaly detection models, and transformed the data into an interpretable format.
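A minimal sketch of such a preprocessing stage, under the assumption of interpolation for gaps, IQR fences for outlier removal, and z-score scaling (the project's exact choices may differ):

```python
import numpy as np
import pandas as pd

# Illustrative cleaning of a single vibration channel: fill gaps, drop
# outliers, then z-score normalise so the model trains on consistent,
# anomaly-free data. Names and injected faults are assumptions.
rng = np.random.default_rng(1)
raw = pd.DataFrame({"vib_x": rng.normal(0.0, 0.2, 300)})
raw.iloc[10, 0] = np.nan          # simulated sensor dropout
raw.iloc[20, 0] = 5.0             # simulated spike (a known anomaly)

clean = raw.interpolate(limit_direction="both")        # fill missing values
q1, q3 = clean["vib_x"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = clean["vib_x"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = clean[mask]                                    # remove outliers
normalised = (clean - clean.mean()) / clean.std()      # z-score scaling
print(len(raw), "->", len(clean), "rows after cleaning")
```

The resulting normalised, anomaly-free frame is what an unsupervised model would then treat as "normal" behaviour.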

We developed models based on architectures renowned for their effectiveness in anomaly detection. These models are adept at identifying deviations from normal operational conditions, thereby generating early warnings for potential equipment failures. Following an unsupervised anomaly detection approach, the models are trained on normal operating data with the objective of minimizing reconstruction error: input data is encoded to capture its essential features and then decoded to reconstruct the input.
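The encode-and-reconstruct idea can be illustrated with a toy autoencoder; this is a sketch of the technique, not the project's production model, and it uses scikit-learn's `MLPRegressor` trained to reproduce its input through a narrow hidden layer:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy autoencoder sketch (assumptions throughout, not the project's model).
# Training data lies near a 2-D subspace of a 4-D space, so a 2-unit
# bottleneck can reconstruct normal samples well but not far-off anomalies.
rng = np.random.default_rng(42)
latent = rng.normal(size=(1000, 2))
projection = rng.normal(size=(2, 4))
X_train = latent @ projection                  # "normal" operating data

ae = MLPRegressor(hidden_layer_sizes=(2,),     # bottleneck layer
                  activation="tanh", max_iter=2000, random_state=0)
ae.fit(X_train, X_train)                       # encode-decode: target == input

def reconstruction_error(model, X):
    """Per-sample mean squared error between input and reconstruction."""
    return np.mean((X - model.predict(X)) ** 2, axis=1)

normal_err = reconstruction_error(ae, X_train)
anomaly = np.full((1, 4), 8.0)                 # far outside the training regime
anomalous_err = reconstruction_error(ae, anomaly)
```

Because the anomalous sample does not fit the structure the bottleneck learned, its reconstruction error is much larger than the errors seen on normal data, which is exactly the signal the detection stage thresholds.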

Anomalies are flagged when the reconstruction error between input and output exceeds a predetermined threshold, established through statistical methods to differentiate between normal, warning, and critical states. In the concluding stage, the models were integrated into the project's existing infrastructure, enabling their operation as microservices.
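A simple statistical thresholding scheme of this kind might look as follows; the mean-plus-k-sigma cut-offs and the multipliers are illustrative assumptions, not the values used in the task:

```python
import numpy as np

# Hypothetical three-state thresholding of reconstruction errors. The
# warning/critical cut-offs are derived from errors observed on normal data;
# the 2-sigma and 4-sigma multipliers are illustrative only.
rng = np.random.default_rng(7)
normal_errors = rng.gamma(shape=2.0, scale=0.05, size=5000)  # stand-in data

mu, sigma = normal_errors.mean(), normal_errors.std()
warning_threshold = mu + 2 * sigma
critical_threshold = mu + 4 * sigma

def classify(error: float) -> str:
    """Map a reconstruction error to an operational state."""
    if error >= critical_threshold:
        return "critical"
    if error >= warning_threshold:
        return "warning"
    return "normal"
```

Keeping the thresholds as data-derived statistics rather than hard-coded constants lets each asset-specific model recalibrate as more normal-operation data accumulates.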

This integration involves monitoring the input data and relaying anomaly detection results to the project's data warehouse. This configuration allows for seamless, near-real-time operation, making the anomaly detection input data and the models' results available to all interested parties.
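As a sketch of the kind of message such a microservice might relay to the data warehouse, the snippet below builds a JSON payload; the field names, asset identifier, and message shape are hypothetical, not the project's actual interface:

```python
import json
from datetime import datetime, timezone

# Illustrative detection-result message for the data warehouse.
# All field names and the asset id are assumptions for this sketch.
def build_result_message(asset_id: str, error: float, state: str) -> str:
    payload = {
        "asset_id": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reconstruction_error": round(error, 6),
        "state": state,                 # "normal" | "warning" | "critical"
    }
    return json.dumps(payload)

message = build_result_message("gangsaw-01", 0.0123, "warning")
```

Serialising each result as a small self-describing record is what makes the near-real-time hand-off to downstream consumers, such as the Decision Support System, straightforward.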