

PREDICT FAILURES WITH MACHINE LEARNING: REAL CASE STUDIES

The following case study describes the methods used and the results achieved by MIPU in a project whose objective was to avoid faults through the application of Machine Learning. The project was developed for a client company in the manufacturing industry.

PREVENT FAILURES WITH MACHINE LEARNING: THE PATH

Machine Learning applications for Predictive Maintenance are used to identify the onset of a failure before it happens. Those familiar with the P-F Curve know that the earlier you identify a potential defect, the sooner you can avoid machine downtime.

  • The first step of a Machine Learning analysis requires creating a mathematical model of the asset. This model includes all the process parameters associated with that specific asset. These parameters are normally stored in a database that acquires data from the plant DCS, associated PLCs, electronic registers, etc. For instance, if you are designing a pump model, suction and discharge pressure, control valve position, bearing temperature and vibration are good examples of parameters to include. Most models have between 10 and 30 parameters, but some have almost 100.
  • In the second step, historical data for these parameters are imported into the model. This dataset is generally known as the "training" dataset, and it normally covers one year of operation, which allows the model to capture seasonal variations in operating conditions. An expert in the asset's operation knows which data to include in or exclude from the training set, because he or she has strong domain knowledge.
  • In the third step, the training dataset is used to develop the asset's operational matrix. This matrix defines how the machine should behave at any given moment, based on the training data used to create it.
  • In the last step, the software constantly monitors the machine and predicts the values of its parameters according to the matrix it has received as input. If a parameter deviates from the model's prediction by a significant percentage, the system creates an alert for that specific parameter. A technical analysis is then carried out on the asset to evaluate the change of condition and the reasons that might have caused it. (Can your software do this? If not, you may want to upgrade it.)
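
The monitoring and alerting logic described in the last step can be sketched in a few lines of Python. The 15% threshold and the parameter values below are illustrative assumptions, not figures from the project:

```python
import numpy as np

def check_deviation(predicted, actual, threshold_pct=15.0):
    """Flag parameters whose measured value deviates from the model
    prediction by more than threshold_pct percent."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    deviation_pct = np.abs(actual - predicted) / np.abs(predicted) * 100.0
    return deviation_pct > threshold_pct

# Hypothetical pump model: bearing vibration, discharge pressure, temperature
predicted = [3.5, 12.0, 65.0]
actual = [4.7, 12.1, 66.0]
alerts = check_deviation(predicted, actual)
# Only the first parameter (vibration: 3.5 -> 4.7, ~34% deviation) is flagged
```

In a real system the threshold would typically be tuned per parameter, since some signals (e.g. vibration) are far noisier than others.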

PREVENT FAILURES WITH MACHINE LEARNING: APPLICATIONS

Picture number 1 shows an increase in bearing vibration on a ventilator fan, caused by an oil leak. This condition generated an alarm. Given the operating conditions, the Machine Learning model predicted a bearing vibration of about 3.5 mm. The measured vibration slowly deviated from the predicted value, triggering an alarm once it reached 4.7 mm. The plant's technical managers were alerted and, through a visual inspection of the fan, identified an oil leak. The fan's suction was drawing in the oil spilled from the leak in the fan housing, which is why there were no leak marks on the ground. The oil on the fan blades accumulated dirt and debris, causing a rotation imbalance and, consequently, the increase in vibration. The technical managers were able to take corrective action to stop the leak before the bearing was damaged.

Picture number 2 concerns the lubrication system of a large pulverizer, which supplies oil to the gearbox and to all the bearings. The asset model predicted a temperature of 90 °F, but the actual temperature reached 110 °F. The software therefore generated an alarm for the plant technicians, who discovered that the control valve for the heat exchanger's cooling water was not working. The valve was replaced and the system returned to normal operation.

Picture number 3 concerns an electro-hydraulic control (EHC) system that governs valve position, turbine speed and safety valves. In this case, the differential pressure across the filter of EHC pump "A" began to increase. Technicians were alerted in time and were able to switch from pump "A" to pump "B". In this way, an emergency shutdown of the turbine, and the associated damage, was avoided.

To learn more about this case study, or to learn how to create machine learning models for your assets, contact us!


predictive-maintenance

Here are 157 public repositories matching this topic.

h1st-ai / h1st

Power Tools for AI Engineers With Deadlines

  • Updated Jul 31, 2023
  • Jupyter Notebook

umbertogriffo / Predictive-Maintenance-using-LSTM

Example of Multiple Multivariate Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras.

  • Updated Feb 12, 2024

firmai / datagene

DataGene - Identify How Similar TS Datasets Are to One Another (by @firmai )

  • Updated Feb 8, 2022

jiaxiang-cheng / PyTorch-Transformer-for-RUL-Prediction

Transformer implementation with PyTorch for remaining useful life prediction on turbofan engine with NASA CMAPSS data set. Inspired by Mo, Y., Wu, Q., Li, X., & Huang, B. (2021). Remaining useful life estimation via transformer encoder enhanced by a gated convolutional unit. Journal of Intelligent Manufacturing, 1-10.

  • Updated Oct 26, 2021

archd3sai / Predictive-Maintenance-of-Aircraft-Engine

In this project I aim to apply Various Predictive Maintenance Techniques to accurately predict the impending failure of an aircraft turbofan engine.

  • Updated Aug 3, 2022

kpeters / exploring-nasas-turbofan-dataset

collection of predictive maintenance solutions for NASAs turbofan (CMAPSS) dataset

  • Updated Jan 24, 2021

lestercardoz11 / fault-detection-for-predictive-maintenance-in-industry-4.0

This research project will illustrate the use of machine learning and deep learning for predictive analysis in industry 4.0.

  • Updated Jul 11, 2021

ashishpatel26 / Predictive_Maintenance_using_Machine-Learning_Microsoft_Casestudy

Predictive_Maintenance_using_Machine-Learning_Microsoft_Casestudy

  • Updated Apr 5, 2018

Charlie5DH / PredictiveMaintenance-and-Vibration-Resources

Papers and datasets for Vibration Analysis

  • Updated Apr 5, 2024

kokikwbt / predictive-maintenance

Datasets for Predictive Maintenance

  • Updated Dec 2, 2023

mohyunho / N-CMAPSS_DL

N-CMAPSS data preparation for Machine Learning and Deep Learning models. (Python source code for new CMAPSS dataset)

  • Updated Apr 13, 2023

awslabs / aws-fleet-predictive-maintenance

Predictive Maintenance for Vehicle Fleets

  • Updated Dec 22, 2022

Western-OC2-Lab / Vibration-Based-Fault-Diagnosis-with-Low-Delay

Python codes “Jupyter notebooks” for the paper entitled "A Hybrid Method for Condition Monitoring and Fault Diagnosis of Rolling Bearings With Low System Delay, IEEE Trans. on Instrumentation and Measurement, Aug. 2022. Techniques used: Wavelet Packet Transform (WPT) & Fast Fourier Transform (FFT). Application: vibration-based fault diagnosis.

  • Updated May 16, 2024

imrahulr / Pred-Maintenance-Siemens

Predictive Maintenance System for Digital Factory Automation

  • Updated Jun 5, 2019

SAP-samples / btp-ai-sustainability-bootcamp

This github repository contains the sample code and exercises of btp-ai-sustainability-bootcamp, which showcases how to build Intelligence and Sustainability into Your Solutions on SAP Business Technology Platform with SAP AI Core and SAP Analytics Cloud for Planning.

  • Updated Dec 6, 2023

limingwu8 / Predictive-Maintenance

time-series prediction for predictive maintenance

  • Updated Feb 19, 2019

dependable-cps / FDIA-PdM

False Data Injection Attacks in Internet of Things and Deep Learning enabled Predictive Analytics

  • Updated Sep 9, 2020

Yi-Chen-Lin2019 / Predictive-maintenance-with-machine-learning

This project is about predictive maintenance with machine learning. It's a final project of my Computer Science AP degree.

  • Updated Sep 29, 2022

mohyunho / NAS_transformer

Evolutionary Neural Architecture Search on Transformers for RUL Prediction

  • Updated Apr 18, 2023

baggepinnen / MatrixProfile.jl

Time-series analysis using the Matrix profile in Julia

  • Updated Oct 29, 2023


Scaling Up Deep Learning Based Predictive Maintenance for Commercial Machine Fleets: a Case Study


Sensors and Machine Learning for Predictive Maintenance

Policy, Finance

Policy approach(es) used to catalyse investment: Development of a national, regional, or sectoral InfraTech strategy

Finance approach(es) used to catalyse investment: De-risking mechanisms or blended finance

Predictive maintenance utilises monitoring and advanced machine learning methods to develop predictive models of the failure of physical and mechanical assets such as pipes, pumps, and motors. These aim to prevent failure and optimise maintenance of critical infrastructure by providing early warnings and enabling action before issues occur. Key components include sensors installed in the machines, a communication system that transmits data in real time between the sensors and a centralised data platform, and machine-learning predictive analytics to identify patterns and generate actionable insights. Predictive maintenance tools enable asset management workforces to automatically diagnose breakdowns and inefficiencies in industrial assets, optimise maintenance scheduling ahead of asset failure, and extend the life of the asset.
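
As a minimal sketch of the kind of continuous condition monitoring such a platform performs, the example below keeps a rolling window of sensor readings and flags any value far outside the recent norm. The window size, threshold, and readings are illustrative assumptions:

```python
import statistics
from collections import deque

class ConditionMonitor:
    """Flag readings more than k standard deviations from the mean
    of a rolling window of recent sensor values."""
    def __init__(self, window=50, k=3.0, warmup=10):
        self.readings = deque(maxlen=window)
        self.k = k
        self.warmup = warmup

    def update(self, value):
        anomaly = False
        if len(self.readings) >= self.warmup:
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(value - mean) > self.k * stdev:
                anomaly = True  # raise an early-warning alert here
        self.readings.append(value)
        return anomaly

# Hypothetical pump flow readings: stable, then a sudden excursion
monitor = ConditionMonitor()
flags = [monitor.update(v) for v in
         [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0]]
# Only the final reading (5.0) is flagged
```

Production systems typically replace this simple z-score rule with trained models, but the monitor-compare-alert loop is the same.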

Traditional asset maintenance activities are beset by limited visibility of asset condition, infrequent monitoring, labour-intensive periodic maintenance, and manual data analysis processes. This leads to slow responses to asset deterioration, driving productivity losses and unoptimised infrastructure capital and operational expenditure. The development of more sensitive and intelligent monitoring and modelling technologies has created opportunities to minimise labour needs and plan investments better.

Mechanical asset owners face inherent challenges with ageing infrastructure and assets reaching their end of life. For example, a 2018 survey showed that USD 472.6 billion will be required over the next 20 years to maintain and improve drinking water infrastructure in the USA. The majority of this (USD 312.6 billion) is for the replacement or refurbishment of ageing or deteriorating distribution assets [1]. The increasing need for water asset maintenance and renewal optimisation is evident in the USD 90 billion increase in 20-year investment required to repair, replace and renew existing infrastructure, alongside a USD 30 billion decrease in investment requirements for new infrastructure [2].

Knowing which assets to maintain, renew or refurbish can defer substantial amounts of capital expenditure. Increased use of predictive failure models will be an essential planning tool for shifting towards more proactive maintenance and optimising maintenance budgets. Proactive programs will prevent catastrophic failures of water distribution networks, such as pipe bursts and leaks that damage property and public infrastructure. This will help water utilities maintain critical water services to communities, eliminate unplanned downtime, reduce maintenance costs, improve asset reliability, and enhance operational efficiency.

As more water utilities embrace digitalisation and generate large amounts of data, the technology can draw on better quality and quantity of data from various inputs to train the models and improve the precision of failure detection. The technology can also be further developed and applied to other sectors with critical physical infrastructure, such as energy generation, transport, and manufacturing. As infrastructure continues to age and renewal investment needs continue to grow, demand for more accurate and robust failure prediction models will grow across all industries. Predictive models from different industries can be combined to optimise maintenance of assets in close proximity, i.e. pipes, electricity, communications, gas, roads, etc.

VALUE CREATED

Improving efficiency and reducing costs:

  • Optimises capital investment through deferment of current premature rehabilitation and replacement tasks, rerouting the resources to the assets that are most likely to fail.
  • Reduces operational expenditure and overhead cost investment by keeping assets at optimal conditions reducing power waste, reducing downtime and maintenance costs.

Enhancing economic, social and environmental value:

  • Minimises the break rates of pipes that can cause water damage to surrounding infrastructure.
  • Decreases traffic disruption and water service interruption by minimising unnecessary maintenance activities.
  • Extends the useful life of assets and reduces material wastage.
  • Minimises the health and safety risks to operators carrying out rehabilitation work, and reduces risks during operation and inspection through remote, real-time visibility of asset condition.

POLICY TOOLS AND LEVERS

Legislation and regulation:   Governments can develop strategies to drive operators to invest in more efficient and sustainable operations of critical assets. Regulatory driven asset management plans can be implemented to maintain the efficiency of water infrastructure.

Funding and financing:   Greater focus on committing funding to optimise and extend the life of existing assets rather than building new infrastructure is needed.

Transition of workforce capabilities:   Training and upskilling workforce to have the skills to effectively interpret and action the insights from AI technologies.

RISKS AND MITIGATIONS

Implementation risk

Risk: Machine learning can only operate on good-quality input data. Incorrect or missing data can limit functionality or lead to incorrect actions, which can increase project costs and lead to poor infrastructure planning and investment.

Mitigation: Investing in sensors and monitoring solutions before investing in machine learning software.

Social risk

Risk: The shift from scheduled and reactive maintenance to predictive and proactive maintenance can create the need to re-train workers to interpret and appropriately action results from predictive models.

Mitigation: Industry can assist through training and up-skilling programs to help mitigate these issues.

Safety and (Cyber)security risk

Risk: Control systems, especially those located in the cloud, are at risk of cyber-attacks. Exposure of sensitive information about the location and condition of critical infrastructure, and potential attacks, can pose high risks to public health.

Mitigation: Organisations need to ensure a strong level of cyber security in their networks and data storage, for both local servers and cloud services. The focus should be on strict data ownership models and the level of data security appropriate to the application. Any implementation of data transfer and storage should be undertaken by suitably qualified and experienced professionals.

Example:  Data61

Implementation:   Sydney Water and Data61 are collaboratively researching advanced analytics approaches to solving water industry challenges, including water pipe failure prediction, predicting sewer chokes and prioritising active leakage detection areas.

Cost:   Sydney Water found the potential to reduce maintenance and renewal costs by several million dollars over a four-year period and minimise inconvenience to customers from pipe breaks.

Timeframe: Projects are undertaken on a case-by-case basis and can be completed within a few months.

Example: Voda

Implementation:   Voda AI software have assessed more than 1200 pipes for a Florida water utility, prioritising pipe monitoring, maintenance, and replacements.

Cost:   Voda predicted 18 avoidable breaks saving the water utility more than $100,000 in reactive maintenance and preventing negative coverage of bursts.

Timeframe: Projects are undertaken on a case-by-case basis and can be completed within 12 months.

Example: Movus

Implementation: The University of Queensland has installed FitMachine devices on 22 chiller units delivering 24/7 air-conditioning on campus since March 2016, to detect early warnings of failure using machine-learning algorithms.

Cost: The University of Queensland realised a 135% return on their FitMachine investment, saving up to $100,000 in repair costs by discovering and preventing machine failure ahead of time.

Timeframe: The Movus solution was implemented in a short time frame (within six months).


Predictive maintenance in the automotive industry

UNLOCKED GREATER EFFICIENCY WITH MACHINE LEARNING


Predictive Maintenance: Improved Efficiency & Reduced Costs with Machine Learning

Predictive maintenance is a maintenance strategy that utilizes machine learning algorithms to analyze data from sensors, equipment logs, and other sources to predict when a machine is likely to fail. Machine learning algorithms can also help identify patterns and relationships in the data, enabling organizations to make more informed decisions about maintenance schedules and spare parts inventory management.

In the automotive industry, machine learning can be used to improve predictive maintenance by analyzing vast amounts of data from various sources, such as sensors, telematics systems, and maintenance logs. This data can be used to develop predictive models that identify patterns and relationships between various factors and equipment failures. For example, machine learning algorithms can analyze engine vibration and temperature data to predict when a component is likely to fail. Moreover, predictive models can analyze data on driving patterns, road conditions, and fuel consumption to determine the optimal time for maintenance activities such as oil changes and tire rotations.
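
As a toy illustration of such a predictive model, the sketch below fits a linear model relating vibration and temperature to remaining component life. All data here are synthetic, and the variables, units and coefficients are invented for the example; real automotive models are typically far richer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
vibration = rng.uniform(1.0, 5.0, n)        # mm/s (hypothetical)
temperature = rng.uniform(70.0, 110.0, n)   # degrees C (hypothetical)
# Synthetic ground truth: life shrinks as vibration and temperature rise
life_hours = 2000 - 150 * vibration - 5 * temperature + rng.normal(0, 20, n)

# Ordinary least squares fit: life ~ b0 + b1*vibration + b2*temperature
X = np.column_stack([np.ones(n), vibration, temperature])
coef, *_ = np.linalg.lstsq(X, life_hours, rcond=None)

def predict_life(vib, temp):
    """Predicted remaining life (hours) for given operating conditions."""
    return coef[0] + coef[1] * vib + coef[2] * temp

# A hot, high-vibration engine is predicted to fail sooner
```

In practice such linear baselines are often the starting point before moving to tree ensembles or neural networks.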

"Machine learning is a powerful tool that can help organizations in the automotive industry optimize their predictive maintenance programs and improve equipment efficiency, reducing costs and increasing overall operational efficiency." (Yixin Zhang, Data Scientist at Sigma Technology Insight Solutions)

In this case study, we uncover how Sigma Technology Insight Solutions supported the Swedish vehicle manufacturer and contributed to the development of the ML model to understand and predict the lifetime of brake pads that are used in their trucks.

About the client

The client is a Swedish manufacturer of heavy-duty commercial vehicles. The company is known for its commitment to safety, sustainability, and efficiency and offers a wide range of trucks for various applications, including long-haul, construction, and distribution. In addition, the company provides a range of services and solutions to its customers, including maintenance, financing, and telematics solutions.

The challenge: goals of predictive maintenance in the automotive industry

Brake pads are a critical component of a vehicle's braking system and need to be maintained regularly to ensure proper functioning. Predicting when brake pads are likely to fail can help prevent unexpected failures and reduce the risk of accidents. The client required a predictive maintenance solution to calculate the lifetime of their brake pads. The resulting data-driven decisions would ensure the safety and reliability of vehicles, as well as reduce maintenance costs and increase the efficiency of fleet operations.

Our involvement  

The client approached the team with a need for an expert data scientist to assist in the development of a machine-learning model. Yixin Zhang, a highly skilled Data Scientist at Sigma Technology Insight Solutions , joined the effort and provided crucial consulting services. The objective was to create a model that could predict the durability of brake pads based on historical data that included factors such as road quality, vehicle speed, temperature, and others.

However, the team faced a challenge: some of the data was missing and had to be restored and refilled to ensure the accuracy of the results. To overcome this, the team used advanced data-restoration techniques to fill in the missing data, then ran the data through the model to identify relationships in the data.
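
The article does not specify which restoration techniques were used. As one common, simple stand-in, missing values in a sensor series can be filled by linear interpolation between the nearest known readings:

```python
import numpy as np

def fill_missing(series):
    """Linearly interpolate NaN gaps in a 1-D sensor series."""
    series = np.asarray(series, dtype=float).copy()
    idx = np.arange(series.size)
    missing = np.isnan(series)
    series[missing] = np.interp(idx[missing], idx[~missing], series[~missing])
    return series

# Hypothetical vehicle-speed samples with two missing readings
speeds = [62.0, 63.5, np.nan, np.nan, 66.0, 65.0]
filled = fill_missing(speeds)
# The gaps are filled with values between 63.5 and 66.0
```

More sophisticated approaches (model-based imputation, Kalman smoothing) follow the same pattern: estimate the missing points from the surrounding signal before training.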

The end result of their efforts was an ML model written in Python that predicts the impact of various factors on brake pad durability. This tool will serve as a valuable asset to the client, allowing them to make data-driven decisions and continuously enhance their product offerings.

Further steps  

The machine learning solution can be adapted and expanded to improve the wear resistance and durability of a wide range of spare parts beyond its current application. The technology could also be leveraged in other industries, such as manufacturing and consumer electronics, to enhance product performance and prolong product lifespan. The scalability of the ML solution makes it a versatile tool that can be applied in multiple contexts, and it can be customized to meet the specific requirements of each industry, delivering a significant competitive advantage to the businesses that adopt it.

Discover our Automotive IT Solutions

ROBERT ÅBERG

President at Sigma Technology Insight Solutions

Contact: [email protected]

This is the era of AI in predictive maintenance

Maintenance has been evolving with new technologies and strategies since World War II, when C.H. Waddington questioned why the Royal Air Force (RAF) was performing maintenance the way it was: grounding about half the planes at a time for maintenance following a mission. His theory was that the regular (preventive or planned) maintenance was actually increasing breakdowns. He and a handful of other scientists recommended performing maintenance based on the condition of the equipment, and after five months of the new procedure, the number of planes available at any given time had increased by 61 percent.

Since then, manufacturers have used preventive maintenance strategies, including sensors placed in equipment to determine when it might fail. But the results weren't consistent, because the data was difficult to access. Now, with today's IIoT, machine learning and artificial intelligence, predictive maintenance is a reality.

What is Predictive Maintenance and What are the Benefits?

Predictive maintenance is based on detecting small changes and aberrations in normal operations that usually indicate a larger problem. From digital preventive maintenance came predictive maintenance (PdM), which uses data-driven maintenance strategies to analyze operations and to predict and prepare for potential failures. With 24/7 remote monitoring, data-driven insights from machine learning, and predictive analytics technology to alert about potential equipment failures, manufacturers can benefit in many ways. The cost savings and ROI of predictive maintenance include:

  • Reduced downtime
  • More targeted maintenance
  • Higher productivity
  • Efficient inventory management
  • Enhanced data analysis
  • Reduced labor and material costs
  • Increased plant safety
  • Optimized maintenance activities
  • Increased overall equipment effectiveness (OEE)

Predictive Maintenance via Condition-Based Monitoring

Another transformative step in evolving maintenance strategies and capabilities came with the advent of condition-based monitoring (CBM), which monitors key performance indicators (KPIs) to identify anomalies. Companies can gather these KPIs through measurements, visual equipment inspections, reviews of performance data or scheduled tests, as well as through IoT and historical data. The KPIs are collected at set intervals, or continuously, as when a machine has internal sensors. CBM can be applied to all assets.

CBM, like all predictive maintenance, also operates on the principle that maintenance should only be performed when there are signs of decreasing equipment performance or an upcoming critical failure. Compared to traditional preventive maintenance, CBM only requires equipment to be shut down for maintenance on an as-needed basis, increasing the time between maintenance repairs.

CBM can reduce machine downtime by 30 to 60 percent and increase machine life by an average of 30 percent. Predictive maintenance plays a key role in detecting and addressing machine issues before they escalate into complete failure. According to a PwC study, predictive maintenance improves uptime by 51%. Using predictive maintenance, companies can avoid accidents and achieve increased safety for their employees and customers.

Implementing a Successful Condition-Based Maintenance Program

FactoryTalk® Analytics™ GuardianAI™ is new software from Rockwell Automation that provides predictive maintenance insights via continuous condition-based monitoring. It helps maintenance engineers get the right information at the right time to optimize maintenance activities and reduce unplanned downtime. Armed with this information, maintenance engineers can understand the current condition of the assets on the plant floor, and they receive early notice as soon as an asset begins deviating from normal.

Use Your Existing Variable Frequency Drives as Sensors

When using FactoryTalk Analytics GuardianAI, there’s no need to purchase additional sensors or monitoring equipment. The software provides early warning of potential asset failures based on data that’s already available from variable frequency drives (VFDs). FactoryTalk Analytics GuardianAI software uses the VFD’s electrical signal to monitor the condition of a plant asset. When it detects a deviation in the electrical signal, it alerts the user to the anomaly so that manufacturers can investigate and plan the correct response. FactoryTalk Analytics GuardianAI provides premier integration with PowerFlex® 755, 755T and 6000T drives for key process applications like pumps, fans, and blowers.
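
Conceptually, detecting a deviation in an electrical signal can be done by comparing its frequency spectrum against a healthy baseline. The sketch below is a generic illustration of that idea only; it is not a description of how GuardianAI works internally, and the signals and fault frequency are invented:

```python
import numpy as np

def spectral_deviation(baseline_signal, current_signal):
    """Relative change of the magnitude spectrum versus a healthy baseline."""
    base = np.abs(np.fft.rfft(baseline_signal))
    curr = np.abs(np.fft.rfft(current_signal))
    return np.abs(curr - base).sum() / base.sum()

# One second of a 50 Hz drive current sampled at 5 kHz
t = np.linspace(0.0, 1.0, 5000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
# A developing fault adds a component at another frequency
faulty = healthy + 0.3 * np.sin(2 * np.pi * 120 * t)

score = spectral_deviation(healthy, faulty)
# score is ~0.3 for the faulty signal and 0 for an unchanged one
```

Thresholding such a score against a trained baseline is one simple way to turn raw drive data into an anomaly alert.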

No Data Science Required

When deploying innovative solutions in an operations environment, time to value is key. FactoryTalk Analytics GuardianAI software saves time with intuitive and streamlined workflows via a self-service, browser-based experience. Just deploy the application on an edge PC, specify your drive and asset information, and train the predictive maintenance model on live plant data with no impact to operations. When the training is complete, the software will automatically switch to monitoring mode and you can oversee the condition of your plant assets.

Starting from an overview of all assets, you can select any at-risk asset to learn more about its condition. You'll discover key information such as the root cause of the deviation, how far it peaked above the baseline, and the duration of the deviation. You can also add context about the severity of the failure risk and the estimated time to resolve the issue. These details support your maintenance team with the prioritization and planning required for repair.

Advance from anomaly detection to anomaly identification

FactoryTalk Analytics GuardianAI software comes out-of-the-box with embedded expertise about the most probable cause of failure for common plant asset types. If you’re monitoring a pump, fan or blower application, FactoryTalk Analytics GuardianAI understands and recognizes the electrical signature of the associated first principle faults and will provide this context when it alerts you of a deviation. By providing maintenance engineers with information about what type of failure is about to occur, you can reduce investigation time and minimize any downtime required.

The embedded expertise provides a great start for anomaly identification. But you’re not limited to the out-of-the-box functionality. You also have the flexibility to train FactoryTalk Analytics GuardianAI software on process specific faults. After you investigate and identify the source of the issue, you can label the anomaly. When the same issue occurs again, the software will recognize it and notify you.

Analyze at the edge

FactoryTalk Analytics GuardianAI software is deployed, learns and runs right at the edge for near real-time predictions.

Since C.H. Waddington and his mission to keep RAF planes in the sky, manufacturers have been seeking to drive more efficient maintenance decision-making and derive more value from equipment. Evolving from reactive and proactive to preventive and predictive maintenance, maintenance engineers are now empowered with easy-to-use machine learning through an intuitive user experience that doesn't require data science knowledge. Find out more at FactoryTalk Analytics GuardianAI | FactoryTalk (rockwellautomation.com)


Machine Learning and image analysis towards improved energy management in Industry 4.0: a practical case study on quality control

  • Original Article
  • Open access
  • Published: 13 May 2024
  • Volume 17, article number 48 (2024)


  • Mattia Casini 1 ,
  • Paolo De Angelis 1 ,
  • Marco Porrati 2 ,
  • Paolo Vigo 1 ,
  • Matteo Fasano 1 ,
  • Eliodoro Chiavazzo 1 &
  • Luca Bergamasco   ORCID: orcid.org/0000-0001-6130-9544 1  


With the advent of Industry 4.0, Artificial Intelligence (AI) has created a favorable environment for the digitalization of manufacturing and processing, helping industries to automate and optimize operations. In this work, we focus on a practical case study of a brake caliper quality control operation, which is usually accomplished by human inspection and requires a dedicated handling system, with a slow production rate and thus inefficient energy usage. We report on a Machine Learning (ML) methodology, based on Deep Convolutional Neural Networks (D-CNNs), that automatically extracts information from images in order to automate the process. A complete workflow has been developed on the target industrial test case. To find the best compromise between accuracy and computational demand, several D-CNN architectures have been tested. The results show that a judicious choice of the ML model, with proper training, allows fast and accurate quality control; thus, the proposed workflow could be implemented as an ML-powered version of the considered process. This would eventually enable better management of the available resources, in terms of time consumption and energy usage.


Introduction

An efficient use of energy resources in industry is key for a sustainable future (Bilgen, 2014; Ocampo-Martinez et al., 2019). The advent of Industry 4.0, and of Artificial Intelligence, has created a favorable context for the digitalisation of manufacturing processes. In this view, Machine Learning (ML) techniques have the potential to assist industries in a better and smarter usage of the available data, helping to automate and improve operations (Narciso & Martins, 2020; Mazzei & Ramjattan, 2022). For example, ML tools can be used to analyze sensor data from industrial equipment for predictive maintenance (Carvalho et al., 2019; Dalzochio et al., 2020), which allows identification of potential failures in advance, and thus better planning of maintenance operations with reduced downtime. Similarly, energy consumption optimization (Shen et al., 2020; Qin et al., 2020) can be achieved via ML-enabled analysis of available consumption data, with consequent adjustments of the operating parameters, schedules, or configurations to minimize energy consumption while maintaining optimal production efficiency. Energy consumption forecasts (Liu et al., 2019; Zhang et al., 2018) can also be improved, especially in industrial plants relying on renewable energy sources (Bologna et al., 2020; Ismail et al., 2021), by analysis of historical data on weather patterns and forecasts, to optimize the usage of energy resources, avoid energy peaks, and leverage alternative energy sources or storage systems (Li & Zheng, 2016; Ribezzo et al., 2022; Fasano et al., 2019; Trezza et al., 2022; Mishra et al., 2023). Finally, ML tools can also serve for fault or anomaly detection (Angelopoulos et al., 2019; Md et al., 2022), which allows prompt corrective actions to optimize energy usage and prevent energy inefficiencies.
Within this context, ML techniques for image analysis (Casini et al., 2024 ) are also gaining increasing interest (Chen et al., 2023 ), for their application to e.g. materials design and optimization (Choudhury, 2021 ), quality control (Badmos et al., 2020 ), process monitoring (Ho et al., 2021 ), or detection of machine failures by converting time series data from sensors to 2D images (Wen et al., 2017 ).

Incorporating digitalisation and ML techniques into Industry 4.0 has led to significant energy savings (Maggiore et al., 2021 ; Nota et al., 2020 ). Projects adopting these technologies can achieve an average of 15% to 25% improvement in energy efficiency in the processes where they were implemented (Arana-Landín et al., 2023 ). For instance, in predictive maintenance, ML can reduce energy consumption by optimizing the operation of machinery (Agrawal et al., 2023 ; Pan et al., 2024 ). In process optimization, ML algorithms can improve energy efficiency by 10-20% by analyzing and adjusting machine operations for optimal performance, thereby reducing unnecessary energy usage (Leong et al., 2020 ). Furthermore, the implementation of ML algorithms for optimal control can lead to energy savings of 30%, because these systems can make real-time adjustments to production lines, ensuring that machines operate at peak energy efficiency (Rahul & Chiddarwar, 2023 ).

In automotive manufacturing, ML-driven quality control can lead to energy savings by reducing the need for redoing parts or running inefficient production cycles (Vater et al., 2019 ). In high-volume production environments such as consumer electronics, novel computer-based vision models for automated detection and classification of damaged packages from intact packages can speed up operations and reduce waste (Shahin et al., 2023 ). In heavy industries like steel or chemical manufacturing, ML can optimize the energy consumption of large machinery. By predicting the optimal operating conditions and maintenance schedules, these systems can save energy costs (Mypati et al., 2023 ). Compressed air is one of the most energy-intensive processes in manufacturing. ML can optimize the performance of these systems, potentially leading to energy savings by continuously monitoring and adjusting the air compressors for peak efficiency, avoiding energy losses due to leaks or inefficient operation (Benedetti et al., 2019 ). ML can also contribute to reducing energy consumption and minimizing incorrectly produced parts in polymer processing enterprises (Willenbacher et al., 2021 ).

Here we focus on a practical industrial case study of brake caliper processing. In detail, we focus on the quality control operation, which is typically accomplished by human visual inspection and requires a dedicated handling system. This eventually implies a slower production rate and inefficient energy usage. We thus propose the integration of an ML-based system to automatically perform the quality control operation, without the need for a dedicated handling system and thus with reduced operation time. To this end, we rely on ML tools able to analyze and extract information from images, that is, deep convolutional neural networks, D-CNNs (Alzubaidi et al., 2021; Chai et al., 2021).

Figure 1: Sample 3D model (GrabCAD) of the considered brake caliper: (a) part without defects, and (b) part with three sample defects, namely a scratch, a partially missing letter in the logo, and a circular painting defect (shown by the yellow squares, from left to right respectively)

A complete workflow for the purpose has been developed and tested on a real industrial test case. This includes: a dedicated pre-processing of the brake caliper images; their labelling and analysis using two dedicated D-CNN architectures (one for background removal, and one for defect identification); and post-processing and analysis of the neural network output. Several different D-CNN architectures have been tested, in order to find the best model in terms of accuracy and computational demand. The results show that a judicious choice of the ML model, with proper training, allows fast and accurate recognition of possible defects. The best-performing models, indeed, reach over 98% accuracy on the target criteria for quality control, and take only a few seconds to analyze each image. These results make the proposed workflow compliant with typical industrial expectations; therefore, in perspective, it could be implemented as an ML-powered version of the considered industrial problem. This would eventually allow better performance of the manufacturing process and, ultimately, better management of the available resources in terms of time consumption and energy expense.

Figure 2: Different neural network architectures: convolutional encoder (a) and encoder-decoder (b)

The industrial quality control process that we target is the visual inspection of manufactured components, to verify the absence of possible defects. Due to industrial confidentiality reasons, a representative open-source 3D geometry (GrabCAD), similar to the original parts, is shown in Fig. 1. For illustrative purposes, the clean geometry without defects (Fig. 1(a)) is compared to the geometry with three possible sample defects, namely: a scratch on the surface of the brake caliper, a partially missing letter in the logo, and a circular painting defect (highlighted by the yellow squares, from left to right respectively, in Fig. 1(b)). Note that one or multiple defects may be present on the geometry, and that other types of defects may also be considered.

Within the industrial production line, this quality control is typically time consuming, and requires a dedicated handling system with the associated slow production rate and energy inefficiencies. Thus, we developed a methodology to achieve an ML-powered version of the control process. The method relies on data analysis and, in particular, on information extraction from images of the brake calipers via Deep Convolutional Neural Networks, D-CNNs (Alzubaidi et al., 2021 ). The designed workflow for defect recognition is implemented in the following two steps: 1) removal of the background from the image of the caliper, in order to reduce noise and irrelevant features in the image, ultimately rendering the algorithms more flexible with respect to the background environment; 2) analysis of the geometry of the caliper to identify the different possible defects. These two serial steps are accomplished via two different and dedicated neural networks, whose architecture is discussed in the next section.

Convolutional Neural Networks (CNNs) pertain to a particular class of deep neural networks for information extraction from images. The feature extraction is accomplished via convolution operations; thus, the algorithms receive an image as an input, analyze it across several (deep) neural layers to identify target features, and provide the obtained information as an output (Casini et al., 2024 ). Regarding this latter output, different formats can be retrieved based on the considered architecture of the neural network. For a numerical data output, such as that required to obtain a classification of the content of an image (Bhatt et al., 2021 ), e.g. correct or defective caliper in our case, a typical layout of the network involving a convolutional backbone, and a fully-connected network can be adopted (see Fig. 2 (a)). On the other hand, if the required output is still an image, a more complex architecture with a convolutional backbone (encoder) and a deconvolutional head (decoder) can be used (see Fig. 2 (b)).
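As a minimal, didactic sketch of the convolution operation described above (a single channel and a hand-made kernel, not the actual networks used in this work):

```python
# A small kernel slides over the image; at each position, the element-wise
# product of kernel and image patch is summed, producing a feature map.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
# A vertical-edge kernel responds strongly at the 0 -> 1 transition.
edge_kernel = np.array([[-1.0, 1.0]])
print(convolve2d(image, edge_kernel))
```

In a deep CNN, many such kernels are learned from data and stacked across layers, which is what allows the network to extract progressively more abstract features.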

As previously introduced, our workflow targets the analysis of the brake calipers in a two-step procedure: first, the removal of the background from the input image (e.g. Fig. 1 ); second, the geometry of the caliper is analyzed and the part is classified as acceptable or not depending on the absence or presence of any defect, respectively. Thus, in the first step of the procedure, a dedicated encoder-decoder network (Minaee et al., 2021 ) is adopted to classify the pixels in the input image as brake or background. The output of this model will then be a new version of the input image, where the background pixels are blacked. This helps the algorithms in the subsequent analysis to achieve a better performance, and to avoid bias due to possible different environments in the input image. In the second step of the workflow, a dedicated encoder architecture is adopted. Here, the previous background-filtered image is fed to the convolutional network, and the geometry of the caliper is analyzed to spot possible defects and thus classify the part as acceptable or not. In this work, both deep learning models are supervised , that is, the algorithms are trained with the help of human-labeled data (LeCun et al., 2015 ). Particularly, the first algorithm for background removal is fed with the original image as well as with a ground truth (i.e. a binary image, also called mask , consisting of black and white pixels) which instructs the algorithm to learn which pixels pertain to the brake and which to the background. This latter task is usually called semantic segmentation in Machine Learning and Deep Learning (Géron, 2022 ). Analogously, the second algorithm is fed with the original image (without the background) along with an associated mask, which serves the neural networks with proper instructions to identify possible defects on the target geometry. 
The required pre-processing of the input images, as well as their use for training and validation of the developed algorithms, are explained in the next sections.
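The background-removal output described above amounts to zeroing the background pixels with the predicted binary mask; a minimal sketch, with a toy grayscale image and a hand-made mask standing in for the network prediction:

```python
import numpy as np

image = np.array([[200, 180, 40],
                  [190, 170, 30],
                  [ 50,  60, 20]], dtype=np.uint8)   # toy grayscale image
mask = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=np.uint8)         # 1 = caliper, 0 = background

# Element-wise product blacks out (zeroes) the background pixels.
filtered = image * mask
print(filtered)
```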

Image pre-processing

Machine Learning approaches rely on data analysis; thus, the quality of the final results is well known to depend strongly on the amount and quality of the data available for training of the algorithms (Banko & Brill, 2001; Chen et al., 2021). In our case, the input images should be well representative of the target analysis and include adequate variability of the possible features, to allow the neural networks to produce the correct output. In this view, the original images should include, e.g., different possible backgrounds, different viewing angles of the considered geometry, and different light exposures (as local light reflections may affect the color of the geometry and thus the analysis). The creation of such a proper dataset for specific cases is not always straightforward; in our case, for example, it would imply a systematic acquisition of a large set of images in many different conditions. This would require, in turn, having all the possible target defects available on the real parts, and an automatic acquisition system, e.g., a robotic arm with an integrated camera. Given that, in our case, the initial dataset could not be generated on real parts, we chose to generate a well-balanced dataset of images in silico, that is, based on image renderings of the real geometry. The key idea was that, if the rendered geometry is sufficiently close to a real photograph, the algorithms may be instructed on artificially-generated images and then tested on a few real ones. This approach, if properly automatized, makes it easy to produce a large number of images in all the different conditions required for the analysis.

In a first step, starting from the CAD file of the brake calipers, we worked manually with the open-source software Blender (Blender) to modify the material properties and achieve a realistic rendering. After that, defects were generated by means of Boolean (subtraction) operations between the geometry of the brake caliper and ad-hoc geometries for each defect. Fine tuning of the generated defects allowed for a realistic representation of the different defects. Once the results were satisfactory, we developed an automated Python code for these procedures, to generate the renderings in different conditions. The Python code can: load a given CAD geometry, change the material properties, set different viewing angles for the geometry, add different types of defects (with given size, rotation and location on the geometry of the brake caliper), add a custom background, change the lighting conditions, render the scene, and save it as an image.

In order to make the dataset as varied as possible, we introduced three light sources into the rendering environment: a diffuse natural light to simulate daylight conditions, and two additional artificial lights. The intensity of each light source and the viewing angle were then varied randomly, to mimic different daylight conditions and illuminations of the object. This procedure was designed to provide different situations akin to real use, and to make the model invariant to lighting conditions and camera position. Moreover, to provide additional flexibility to the model, the training dataset of images was virtually expanded using data augmentation (Mumuni & Mumuni, 2022), where saturation, brightness and contrast were varied randomly during training operations. This procedure allowed us to consistently increase the number and variety of the images in the training dataset.
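A minimal sketch of the brightness/contrast jitter used in such data augmentation, on a grayscale image for simplicity; the perturbation ranges below are illustrative assumptions, not the values used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Randomly perturb brightness and contrast of a float image in [0, 1]."""
    brightness = rng.uniform(-0.1, 0.1)   # additive shift (illustrative range)
    contrast = rng.uniform(0.8, 1.2)      # multiplicative scale around mid-gray
    out = (image - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)         # keep values in the valid range

image = np.full((4, 4), 0.5)
augmented = augment(image)
print(augmented.min(), augmented.max())
```

Applying such random perturbations anew at every training epoch effectively multiplies the variety of the dataset without storing extra images.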

The developed automated pre-processing steps easily allow for batch generation of thousands of different images to be used for training of the neural networks. This possibility is key for proper training of the neural networks, as the variability of the input images allows the models to learn all the possible features and details that may change during real operating conditions.

Figure 3: Examples of the ground truth for the two target tasks: background removal (a) and defects recognition (b)

The first tests using this virtual database showed that, although the generated images were very similar to real photographs, the models were not able to properly recognize the target features in the real images. Thus, in an attempt to get closer to a proper set of real images, we decided to adopt a hybrid dataset, where the virtually generated images were mixed with the few available real ones. However, given that some possible defects were missing in the real images, we also decided to manipulate the images to introduce virtual defects on real images. The obtained dataset finally included more than 4,000 images, of which 90% were rendered and 10% were obtained from real images. To avoid possible bias in the training dataset, defects were present in 50% of the cases in both the rendered and real image sets. Thus, in the overall dataset, the real original images with no defects were 5% of the total.

Along with the code for the rendering and manipulation of the images, dedicated Python routines were developed to generate the corresponding data labelling for the supervised training of the networks, namely the image masks. Particularly, two masks were generated for each input image: one for the background removal operation, and one for the defect identification. In both cases, the masks consist of a binary (i.e. black and white) image where all the pixels of a target feature (i.e. the geometry or defect) are assigned unitary values (white); whereas, all the remaining pixels are blacked (zero values). An example of these masks in relation to the geometry in Fig. 1 is shown in Fig. 3 .

All the generated images were then down-sampled, that is, their resolution was reduced to avoid unnecessary large computational times and (RAM) memory usage while maintaining the required level of detail for training of the neural networks. Finally, the input images and the related masks were split into a mosaic of smaller tiles, to achieve a suitable size for feeding the images to the neural networks with even more reduced requirements on the RAM memory. All the tiles were processed, and the whole image reconstructed at the end of the process to visualize the overall final results.
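The tiling and reconstruction steps described above can be sketched as follows; the tile size is an illustrative assumption, and the image is assumed to divide evenly into tiles:

```python
import numpy as np

def split_tiles(image, tile):
    """Split a 2D image into a row-major list of square tiles."""
    h, w = image.shape
    return [image[i:i+tile, j:j+tile]
            for i in range(0, h, tile)
            for j in range(0, w, tile)]

def merge_tiles(tiles, shape, tile):
    """Reassemble tiles (in row-major order) into the full image."""
    h, w = shape
    out = np.zeros(shape)
    cols = w // tile
    for k, t in enumerate(tiles):
        i, j = (k // cols) * tile, (k % cols) * tile
        out[i:i+tile, j:j+tile] = t
    return out

image = np.arange(16.0).reshape(4, 4)
tiles = split_tiles(image, 2)           # four 2x2 tiles
restored = merge_tiles(tiles, image.shape, 2)
print(np.array_equal(image, restored))
```

In the actual workflow, each tile would be processed by the network before the mosaic is reassembled for visualization.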

Figure 4: Confusion matrix for accuracy assessment of the neural network models

Choice of the model

Within the scope of the present application, a wide range of possibly suitable models is available (Chen et al., 2021). In general, the choice of the best model for a given problem should be made on a case-by-case basis, considering an acceptable compromise between the achievable accuracy and the computational complexity/cost. Models that are too simple can indeed respond very fast, yet have reduced accuracy. On the other hand, more complex models can generally provide more accurate results, although they typically require larger amounts of data for training, and thus longer computational times and energy expense. Hence, testing has the crucial role of allowing identification of the best trade-off between these two extreme cases. A benchmark for model accuracy can generally be defined in terms of a confusion matrix, where the model response is summarized into the following possibilities: True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). This concept is summarized in Fig. 4. For the background removal, Positive (P) stands for pixels belonging to the brake caliper, while Negative (N) stands for background pixels. For the defect identification model, Positive (P) stands for non-defective geometries, whereas Negative (N) stands for defective geometries. With respect to these two cases, the True/False statements stand for correct or incorrect identification, respectively. The model accuracy can therefore be assessed as (Géron, 2022): \(A = \frac{TP + TN}{TP + TN + FP + FN}\)
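The accuracy metric defined by the confusion matrix above can be computed directly; the counts in the example call are hypothetical:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for a validation set of 100 parts.
print(accuracy(tp=48, tn=47, fp=2, fn=3))  # 0.95
```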

Based on these metrics, the accuracy of different models can then be evaluated on a given dataset, where typically 80% of the data is used for training and the remaining 20% for validation. For the defect recognition stage, the following models were tested: VGG-16 (Simonyan & Zisserman, 2014), ResNet50, ResNet101, ResNet152 (He et al., 2016), Inception V1 (Szegedy et al., 2015), Inception V4 and InceptionResNet V2 (Szegedy et al., 2017). Details on the assessment procedure for the different models are provided in the Supplementary Information file. For the background removal stage, the DeepLabV3\(+\) (Chen et al., 2018) model was chosen as the first option, and no additional models were tested as it directly provided satisfactory results in terms of accuracy and processing time. This gives a preliminary indication that, in terms of task complexity, the defect identification stage can be more demanding than the background removal operation for the case study at hand. Besides the assessment of the accuracy according to, e.g., the metrics discussed above, additional information can generally be collected, such as too low an accuracy (indicating an insufficient amount of training data), possible bias of the models on the data (indicating a non-well-balanced training dataset), or other specific issues related to missing representative data in the training dataset (Géron, 2022). This information helps both to correctly shape the training dataset and to gather useful indications for the fine tuning of the model after its choice has been made.

Background removal

An initial bias of the model for background removal arose from the color of the original target geometry (red). The model was incorrectly identifying possible red spots in the background as part of the target geometry. To improve the model flexibility, and thus its accuracy in the identification of the background, the training dataset was expanded using data augmentation (Géron, 2022). This technique artificially increases the size of the training dataset by applying various transformations to the available images, with the goal of improving the performance and generalization ability of the models. This approach typically involves applying geometric and/or color transformations to the original images; in our case, to account for different viewing angles of the geometry, different light exposures, and different color reflections and shadowing effects. These improvements of the training dataset proved effective on the performance of the background removal operation, with a validation accuracy finally ranging above 99% and a model response time of around 1-2 seconds. An example of the output of this operation for the geometry in Fig. 1 is shown in Fig. 5.

While the results obtained were satisfactory for the original (red) color of the calipers, we decided to test the model's ability to handle brake calipers of other colors as well. To this end, the model was trained and tested on a grayscale version of the images of the calipers, which completely removes any possible bias of the model toward a specific color. In this case, the validation accuracy of the model still ranged above 99%; thus, this approach was found to be particularly useful for making the model suitable for the background removal operation even on images including calipers of different colors.

Figure 5: Target geometry after background removal

Defect recognition

An overview of the performance of the tested models for the defect recognition operation on the original geometry of the caliper is reported in Table 1 (see also the Supplementary Information file for more details on the assessment of the different models). The results report the achieved validation accuracy (\(A_v\)) and the number of parameters (\(N_p\)), the latter being the total number of trainable parameters of each model (Géron, 2022). Here, this quantity is adopted as an indicator of the complexity of each model.

Figure 6: Accuracy (a) and loss function (b) curves for the ResNet101 model during training

As the results in Table 1 show, the VGG-16 model was quite imprecise for our dataset, eventually showing underfitting (Géron, 2022). Thus, we decided to opt for the ResNet and Inception families of models. Both families proved suitable for handling our dataset, with slightly less accurate results provided by ResNet50 and Inception V1. The best results were obtained using ResNet101 and Inception V4, with very high final accuracy and fast processing times (on the order of \(\sim\)1 second). Finally, the ResNet152 and InceptionResNet V2 models proved slightly too complex, or slower, for our case; they indeed provided excellent results, but with longer response times (on the order of \(\sim\)3-5 seconds). The response time is affected both by the complexity (\(N_p\)) of the model itself and by the hardware used. In our work, GPUs were used for training and testing all the models, and the hardware conditions were kept the same for all models.

Based on the results obtained, the ResNet101 model was chosen as the best solution for our application, in terms of accuracy and reduced complexity. After fine-tuning operations, the accuracy obtained with this model reached nearly 99%, both on the validation and test datasets. The latter includes real target images that the models have never seen before; thus, it can be used to test the ability of the models to generalize the information learnt during the training/validation phase.

The trends of the accuracy increase and of the loss function decrease during training of the ResNet101 model on the original geometry are shown in Fig. 6(a) and (b), respectively. Particularly, the loss function quantifies the error between the predicted output of the model during training and the actual target values in the dataset. In our case, the loss function is computed using the cross-entropy function, and the model is trained with the Adam optimiser (Géron, 2022). The error is expected to decrease during training, which eventually leads to more accurate predictions of the model on previously unseen data. The combination of accuracy and loss function trends, along with other control parameters, is typically monitored to evaluate the training process and to avoid, e.g., under- or over-fitting problems (Géron, 2022). As Fig. 6(a) shows, the accuracy experiences a sudden step increase during the very first training epochs (an epoch being one complete pass of the model over the training database (Géron, 2022)). The accuracy then increases smoothly with the epochs, until an asymptotic value is reached for both the training and validation accuracy. These trends in the two accuracy curves can generally be associated with proper training; indeed, the closeness of the two curves may be interpreted as an absence of under-fitting problems. On the other hand, Fig. 6(b) shows that the loss function curves are also close to each other, with a monotonically decreasing trend. This can be interpreted as an absence of over-fitting problems, and thus as proper training of the model.

Figure 7: Final results of the analysis on the defect identification: (a) considered input geometry, (b), (c) and (d) identification of a scratch on the surface, partially missing logo, and painting defect respectively (highlighted in the red frames)

Finally, an example output of the overall analysis is shown in Fig. 7, where the considered input geometry is shown in (a), along with the identification of the defects in (b), (c) and (d) obtained from the developed protocol. Note that here the different defects have been separated into several figures for illustrative purposes; however, the analysis yields the identification of all defects on one single image. In this work, a binary classification was performed on the considered brake calipers, where the output of the models discriminates between defective and non-defective components based on the presence or absence of any of the considered defects. Note that the fine tuning of this discrimination ultimately depends on the user's requirements. Indeed, the model output is the probability (from 0 to 100%) of the presence of defects; thus, the discrimination between a defective and a non-defective part ultimately rests on the user's choice of the acceptance threshold for the considered part (50% in our case). Therefore, stricter or looser criteria can be readily adopted. Eventually, for particularly complex cases, multiple models may also be used concurrently for the same task, with the final output defined based on a cross-comparison of the results from the different models. As a last remark on the proposed procedure, note that here we adopted a binary classification based on the presence or absence of any defect; however, a further classification of the different defects could also be implemented, to distinguish among different types of defects (multi-class classification) on the brake calipers.
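The acceptance-threshold logic described above can be sketched as follows; the 50% default follows the text, while the other calls illustrate stricter or looser criteria:

```python
def classify(defect_probability, threshold=0.5):
    """Map the model's defect probability to a binary accept/reject decision."""
    return "defective" if defect_probability >= threshold else "acceptable"

print(classify(0.03))                  # low probability -> acceptable
print(classify(0.97))                  # high probability -> defective
print(classify(0.40, threshold=0.3))   # stricter criterion -> defective
```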

Energy saving

Illustrative scenarios.

Given that the proposed tools have not yet been implemented and tested within a real industrial production line, we analyze here three prospective scenarios to provide a practical example of the potential for energy savings in an industrial context. Specifically, we consider a generic brake caliper assembly line formed by 14 stations, as outlined in Table 1 in the work by Burduk and Górnicka ( 2017 ). This assembly line features a critical inspection station dedicated to defect detection, around which we construct three distinct scenarios to compare traditional human-based control operations with a quality control system augmented by the proposed Machine Learning (ML) tools, namely:

First Scenario (S1): Human-Based Inspection. The traditional approach involves a human operator responsible for the inspection tasks.

Second Scenario (S2): Hybrid Inspection. This scenario introduces a hybrid inspection system where our proposed ML-based automatic detection tool assists the human inspector. The ML tool analyzes the brake calipers and alerts the human inspector only when it encounters difficulties in identifying defects, specifically when the probability of a defect being present or absent falls below a certain threshold. This collaborative approach aims to combine the precision of ML algorithms with the experience of human inspectors, and can be seen as a possible transition scenario between the human-based and a fully-automated quality control operation.

Third Scenario (S3): Fully Automated Inspection. In the final scenario, we conceive a completely automated defect inspection station powered exclusively by our ML-based detection system. This setup eliminates the need for human intervention, relying entirely on the capabilities of the ML tools to identify defects.

For simplicity, we assume that all the stations are aligned in series without buffers, which avoids unnecessary complications in our estimations. To quantify the beneficial effects of implementing ML-based quality control, we adopt the Overall Equipment Effectiveness (OEE) as the primary metric for the analysis. OEE is a comprehensive measure derived from the product of three factors, as outlined by Nota et al. ( 2020 ): Availability (the ratio of operating time to planned production time); Performance (the ratio of actual output to the theoretical maximum output); and Quality (the ratio of good units to the total units produced). In this section, we detail how each of these factors is calculated for the various scenarios.

To calculate the Availability ( A ), we consider an 8-hour work shift ( \(t_{shift}\) ) with 30 minutes of breaks ( \(t_{break}\) ), during which we assume that production stops (except in the fully automated scenario), and 30 minutes of scheduled downtime ( \(t_{sched}\) ) required for machine cleaning and startup procedures. For the unscheduled downtime ( \(t_{unsched}\) ), primarily due to machine breakdowns, we assume an average breakdown probability ( \(\rho _{down}\) ) of 5% for each machine, with an average repair time of one hour per incident ( \(t_{down}\) ). Based on these assumptions, and since the Availability represents the ratio of run time ( \(t_{run}\) ) to production time ( \(t_{pt}\) ), it can be calculated using the following formula:
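From the definitions given in the text, the Availability can be written as (our reconstruction; the expression for the production time is our assumption):

```latex
A = \frac{t_{run}}{t_{pt}} = \frac{t_{pt} - t_{unsched}}{t_{pt}},
\qquad t_{pt} = t_{shift} - t_{break} - t_{sched}
```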

with the unscheduled downtime being computed as follows:
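Per the definitions in the text (the probability that at least one of the N machines breaks during the shift, times the average repair time), the unscheduled downtime reads:

```latex
t_{unsched} = \left[ 1 - \left( 1 - \rho_{down} \right)^{N} \right] t_{down}
```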

where N is the number of machines in the production line and \(1-\left( 1-\rho _{down}\right) ^{N}\) represents the probability that at least one machine breaks during the work shift. For the sake of simplicity, \(t_{down}\) is assumed constant regardless of the number of failures.
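With the numerical assumptions stated above, the Availability can be estimated with a short Python check (the expression for the production time \(t_{pt}\) is our assumption; the exact figures in Table 2 may differ slightly):

```python
# Assumptions from the text: 8-hour shift, 30 min breaks, 30 min scheduled
# downtime, 14 machines, 5% breakdown probability, 1 hour average repair time.
t_shift, t_break, t_sched = 480.0, 30.0, 30.0   # minutes
rho_down, n_machines, t_down = 0.05, 14, 60.0

t_pt = t_shift - t_break - t_sched                       # planned production time
t_unsched = (1 - (1 - rho_down) ** n_machines) * t_down  # expected unscheduled downtime
availability = (t_pt - t_unsched) / t_pt

print(f"Availability = {availability:.3f}")  # prints: Availability = 0.927
```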

Table 2 presents the numerical values used to calculate the Availability in the three scenarios. In the second scenario, integrating the automated station leads to a decrease in this first factor of the OEE analysis, which can be attributed to the additional station for automated quality control (and its related potential failures); this ultimately increases the estimated unscheduled downtime. In the third scenario, the detrimental effect of the additional station compensates for the beneficial effect of the automated quality control in removing the production pauses during operator breaks; the Availability for the third scenario is thus substantially equivalent to that of the first one (baseline).

The second factor of OEE, Performance ( P ), assesses the operational efficiency of the production equipment relative to its maximum designed speed ( \(t_{line}\) ). This evaluation accounts for reductions in cycle speed and minor stoppages, collectively termed speed losses . These losses are challenging to estimate in advance, as Performance is typically measured from historical data of the production line. For this analysis, we must therefore hypothesize a reasonable estimate of 60 seconds lost to speed losses ( \(t_{losses}\) ) in each work cycle. Although this assumption may appear strong, it will become evident later that, within the context of this analysis – particularly regarding the impact of automated inspection on energy savings – the Performance (like the Availability) is only marginally influenced by introducing an automated inspection station. To account for the effect of automated inspection on the assembly line speed, we keep the time required by the other 13 stations ( \(t^*_{line}\) ) constant while varying the time allocated for visual inspection ( \(t_{inspect}\) ). According to Burduk and Górnicka ( 2017 ), the total operation time of the production line, excluding inspection, is 1263 seconds, with manual visual inspection taking 38 seconds. For the fully automated third scenario, we assume an inspection time of 5 seconds, which encompasses the photo collection, pre-processing, ML analysis, and post-processing steps. In the second scenario, we instead add to the purely automatic time an extra allowance for the cases in which the confidence of the ML model falls below 90%. We assume this happens once in every 10 inspections – a conservative estimate, higher than what we observed during model testing – which amounts to adding 10% of the human inspection time to the fully automated time. Thus, when \(t_{losses}\) is known, the Performance can be expressed as follows:
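A plausible form of this expression, consistent with Performance as the ratio of the ideal cycle time to the actual cycle time including speed losses (our reconstruction), is:

```latex
P = \frac{t_{line}}{t_{line} + t_{losses}},
\qquad t_{line} = t^{*}_{line} + t_{inspect}
```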

The calculated values for the Performance are presented in Table 3. The modification in inspection time has a negligible impact on this factor, since it does not affect the speed losses – or, at least to our knowledge, there is no clear evidence to suggest that introducing a new inspection station would alter these losses. Moreover, given the specific linear layout of the considered production line, the change in inspection time has only a marginal effect on the production speed. However, this approach could potentially bias our scenarios towards always favouring automation. To evaluate this hypothesis, a sensitivity analysis exploring scenarios where the production line operates at a faster pace is discussed in the next subsection.
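Under the assumed form of the Performance factor (ideal cycle time over actual cycle time including speed losses; this functional form is our reconstruction), the negligible effect of the inspection time can be verified numerically:

```python
T_LINE_STAR = 1263.0   # assembly time excluding inspection (s), Burduk & Gornicka (2017)
T_LOSSES = 60.0        # assumed speed losses per work cycle (s)

def performance(t_inspect: float) -> float:
    # Assumed form: ideal cycle time over actual cycle time (incl. speed losses).
    t_line = T_LINE_STAR + t_inspect
    return t_line / (t_line + T_LOSSES)

inspection_times = {
    "S1 human": 38.0,
    "S2 hybrid": 5.0 + 0.10 * 38.0,  # automated time + 10% of the human time
    "S3 automated": 5.0,
}
results = {name: performance(t) for name, t in inspection_times.items()}
# All three values lie between ~0.954 and ~0.956, i.e. the inspection time
# barely affects the Performance factor, consistent with Table 3.
```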

The last factor, Quality ( Q ), quantifies the ratio of compliant products to the total products manufactured, effectively filtering out items that fail to meet the quality standards due to defects. Given the objective of our automated algorithm, we anticipate this factor of the OEE to be significantly enhanced by implementing the ML-based automated inspection station. To estimate it, we assume a constant defect probability ( \(\rho _{def}\) ) of 5% for the production line. Consequently, the number of defective products ( \(N_{def}\) ) during the work shift is calculated as \(N_{unit} \cdot \rho _{def}\) , where \(N_{unit}\) represents the average number of units (brake calipers) assembled on the production line, defined as:
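A plausible definition, consistent with the quantities introduced above (our reconstruction: the units assembled during the run time at the actual cycle time), is:

```latex
N_{unit} = \frac{t_{run}}{t_{line} + t_{losses}}
```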

To quantify the defective units identified, we consider the inspection accuracy ( \(\rho _{acc}\) ): for human visual inspection, the typical accuracy is 80% (Sundaram & Zeid, 2023 ), while for the ML-based station we use the accuracy of our best model, i.e., 99%. Additionally, we account for the probability of the station mistakenly flagging a defect-free caliper as defective, i.e., the false negative rate ( \(\rho _{FN}\) ), defined as

In the absence of any reasonable evidence to justify a bias toward one type of mistake over the other, we assume a uniform error distribution for both human and automated inspections, i.e. we set \(\rho ^{H}_{FN} = \rho ^{ML}_{FN} = \rho _{FN} = 50\%\) . Thus, the number of final compliant goods ( \(N_{goods}\) ), i.e., the calipers identified as quality-compliant, can be calculated as:

where \(N_{detect}\) is the total number of detected defective units, comprising TN (true negatives, i.e. correctly identified defective calipers) and FN (false negatives, i.e. defect-free calipers mistakenly flagged as defective). The Quality factor can then be computed as:

Table  4 summarizes the Quality factor calculation, showcasing the substantial improvement brought by the ML-based inspection station due to its higher accuracy compared to human operators.

Fig. 8: Overall Equipment Effectiveness (OEE) analysis for the three scenarios (S1: Human-Based Inspection, S2: Hybrid Inspection, S3: Fully Automated Inspection). The height of the bars represents the percentage of the three factors A (Availability), P (Performance) and Q (Quality), read on the left axis. The green bars indicate the OEE value, derived from the product of these three factors. The red line shows the recall rate, i.e. the probability that a defective product is rejected by the client, with values displayed on the right (red) axis.

Finally, we determine the Overall Equipment Effectiveness by multiplying the three factors computed above. Additionally, we estimate the recall rate ( \(\rho _{R}\) ), which reflects the rate at which a customer might reject products. This is derived from the difference between the total number of defective units, \(N_{def}\) , and the number of units correctly identified as defective, TN , and indicates the potential for defective brake calipers to bypass the inspection process. Figure 8 summarizes the outcomes of the three scenarios. It is crucial to note that the scenarios incorporating the automated defect detector, S2 and S3, significantly enhance the Overall Equipment Effectiveness, primarily through substantial improvements in the Quality factor. Among these, the fully automated inspection scenario, S3, emerges as a slightly superior option, thanks to the additional benefit of removing the production pauses during breaks and increasing the speed of the line. However, given the several assumptions required for this OEE study, these results should be interpreted as illustrative, and considered primarily as a comparison against the baseline scenario. To analyze the sensitivity of the outlined scenarios to the adopted assumptions, we investigate the influence of the line speed and of the human accuracy on the results in the next subsection.
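The final combination of the three factors is a simple product; the sketch below uses illustrative values of our own (the exact figures of Tables 2-4 are not reproduced in the text):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# Illustrative values only (our assumption), mimicking a baseline line with a
# human inspector versus one with a more accurate automated inspection station.
baseline = oee(0.93, 0.96, 0.96)
automated = oee(0.93, 0.95, 0.999)
assert automated > baseline  # the Quality gain dominates the OEE improvement
```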

Sensitivity analysis

The scenarios described previously are illustrative and based on several simplifying hypotheses. One such hypothesis is that the production chain operates entirely in series, with each station awaiting the arrival of the workpiece from the preceding station, resulting in a relatively slow production rate (1263 seconds). This setup can be quite different from reality, where slower operations can be accelerated by installing additional machines in parallel to balance the workload and enhance productivity. Moreover, we used a literature value of 80% for the accuracy of the human visual inspector, as reported by Sundaram and Zeid ( 2023 ). However, this accuracy can vary significantly with factors such as the experience of the inspector and the defect type.

Fig. 9: Effect of the assembly time of the stations (excluding visual inspection), \(t^*_{line}\) , and of the human inspection accuracy, \(\rho _{acc}\) , on the OEE analysis. Subplot (a) shows the difference between scenario S2 (Hybrid Inspection) and the baseline scenario S1 (Human Inspection), while subplot (b) shows the difference between scenario S3 (Fully Automated Inspection) and the baseline. Red areas of the maps indicate values of \(t^*_{line}\) and \(\rho _{acc}\) where the integration of automated inspection stations can significantly improve the OEE; blue areas indicate where it may lower the score. The dashed lines denote the break-even points, and the circled points mark the values used in the “Illustrative scenarios” subsection.

A sensitivity analysis on these two factors was conducted to address these variations. The assembly time of the stations (excluding visual inspection), \(t^*_{line}\) , was varied from 60 s to 1500 s, and the human inspection accuracy, \(\rho _{acc}\) , from 50% (akin to a random guesser) to 100% (an ideal visual inspector), while the other variables were kept fixed.
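The structure of such a sweep can be sketched with a deliberately simplified quality model (a toy of our own, not the paper's full OEE computation), sweeping the human accuracy to locate the break-even point against an ML station assumed 99% accurate:

```python
RHO_DEF = 0.05  # assumed defect probability

def quality(rho_acc: float) -> float:
    # Toy model: undetected defects degrade the quality factor linearly.
    return 1 - RHO_DEF * (1 - rho_acc)

ml_quality = quality(0.99)  # ML-based station, 99% accuracy

# Sweep human accuracy from 50% to 100% and find where it matches the ML station.
breakeven = next(acc / 100 for acc in range(50, 101) if quality(acc / 100) >= ml_quality)
print(breakeven)  # -> 0.99: in this toy model the human breaks even only at 99%
```

The paper's actual maps (Fig. 9) additionally fold in the Availability and Performance effects, which shift the break-even lines with the line speed.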

The enhancement of the OEE for the two scenarios employing ML-based inspection, compared against the baseline scenario, is displayed in the two maps in Fig. 9. As the figure shows, thanks to the high accuracy and rapid response of the proposed automated inspection station, the regions where the process may benefit from automation, and thus from energy savings in the assembly line (red shades), are significantly larger than those where its introduction could degrade performance (blue shades). However, automated inspection could be superfluous or even detrimental in scenarios where human accuracy and assembly speed are both very high, indicating an already highly accurate workflow. In these cases, and particularly for very fast production lines, short quality-control times can be expected to be key (beyond accuracy) for the optimization.

Finally, it is important to remark that the blue region (areas below the dashed break-even lines) might expand if the accuracy of the neural networks for defect detection turns out lower when implemented in a real production line. This indicates the need for further rounds of active learning and for an increase in the ratio of real images in the database, to eventually enhance the performance of the ML model.

Conclusions

Industrial quality control of manufactured parts is typically performed by human visual inspection. This usually requires a dedicated handling system and generally results in a slower production rate, with the associated non-optimal use of energy resources. Based on a practical test case for quality control in brake caliper manufacturing, in this work we have reported on a workflow for the integration of Machine Learning methods to automate the process. The proposed approach relies on image analysis via Deep Convolutional Neural Networks. These models can efficiently extract information from images, and thus possibly represent a valuable alternative to human inspection.

The proposed workflow relies on a two-step procedure on the images of the brake calipers: first, the background is removed from the image; second, the geometry is inspected to identify possible defects. These two steps are accomplished by two dedicated neural network models, an encoder-decoder and an encoder network, respectively. Training of these neural networks typically requires a large number of representative images of the problem. Given that such a database is not always readily available, we have presented and discussed an alternative methodology for generating the input database using 3D renderings. While integration of the database with real photographs was required for optimal results, this approach has allowed fast and flexible generation of a large base of representative images. The pre-processing steps required to feed the data to the neural networks, and the training of the networks, have also been discussed.

Several models have been tested and evaluated, and the best one for the considered case identified. The obtained accuracy for defect identification reaches \(\sim \) 99% on the tested cases. Moreover, the response of the models on each image is fast (on the order of a few seconds), which makes them compliant with the most typical industrial expectations.

In order to provide a practical example of possible energy savings from implementing the proposed ML-based methodology for quality control, we have analyzed three prospective industrial scenarios: a baseline scenario, where the quality control tasks are performed by a human inspector; a hybrid scenario, where the proposed ML automatic detection tool assists the human inspector; and a fully-automated scenario, where we envision a completely automated defect inspection. The results show that the proposed tools may help increase the Overall Equipment Effectiveness by up to \(\sim \) 10% with respect to the considered baseline scenario. However, a sensitivity analysis on the speed of the production line and on the accuracy of the human inspector has also shown that automated inspection could be superfluous or even detrimental in those cases where human accuracy and assembly speed are very high. In such cases, reducing the time required for quality control can be expected to be the major controlling parameter (beyond accuracy) for optimization.

Overall, the results show that, with proper tuning, these models may represent a valuable resource for integration into production lines, with positive outcomes on the overall effectiveness, ultimately leading to a better use of energy resources. While the practical implementation of the proposed tools can be expected to require limited investments (e.g. a portable camera, a dedicated workstation, and an operator with proper training), in-field tests on a real industrial line would be required to confirm the potential of the proposed technology.

Agrawal, R., Majumdar, A., Kumar, A., & Luthra, S. (2023). Integration of artificial intelligence in sustainable manufacturing: Current status and future opportunities. Operations Management Research, 1–22.

Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: Concepts, cnn architectures, challenges, applications, future directions. Journal of big Data, 8 , 1–74.


Angelopoulos, A., Michailidis, E. T., Nomikos, N., Trakadas, P., Hatziefremidis, A., Voliotis, S., & Zahariadis, T. (2019). Tackling faults in the industry 4.0 era-a survey of machine—learning solutions and key aspects. Sensors, 20 (1), 109.

Arana-Landín, G., Uriarte-Gallastegi, N., Landeta-Manzano, B., & Laskurain-Iturbe, I. (2023). The contribution of lean management—industry 4.0 technologies to improving energy efficiency. Energies, 16 (5), 2124.

Badmos, O., Kopp, A., Bernthaler, T., & Schneider, G. (2020). Image-based defect detection in lithium-ion battery electrode using convolutional neural networks. Journal of Intelligent Manufacturing, 31 , 885–897. https://doi.org/10.1007/s10845-019-01484-x

Banko, M., & Brill, E. (2001). Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th annual meeting of the association for computational linguistics (pp. 26–33).

Benedetti, M., Bonfà, F., Introna, V., Santolamazza, A., & Ubertini, S. (2019). Real time energy performance control for industrial compressed air systems: Methodology and applications. Energies, 12 (20), 3935.

Bhatt, D., Patel, C., Talsania, H., Patel, J., Vaghela, R., Pandya, S., Modi, K., & Ghayvat, H. (2021). Cnn variants for computer vision: History, architecture, application, challenges and future scope. Electronics, 10 (20), 2470.

Bilgen, S. (2014). Structure and environmental impact of global energy consumption. Renewable and Sustainable Energy Reviews, 38 , 890–902.

Blender. (2023). Open-source software. https://www.blender.org/ . Accessed 18 Apr 2023.

Bologna, A., Fasano, M., Bergamasco, L., Morciano, M., Bersani, F., Asinari, P., Meucci, L., & Chiavazzo, E. (2020). Techno-economic analysis of a solar thermal plant for large-scale water pasteurization. Applied Sciences, 10 (14), 4771.

Burduk, A., & Górnicka, D. (2017). Reduction of waste through reorganization of the component shipment logistics. Research in Logistics & Production, 7 (2), 77–90. https://doi.org/10.21008/j.2083-4950.2017.7.2.2

Carvalho, T. P., Soares, F. A., Vita, R., Francisco, R. d. P., Basto, J. P., & Alcalá, S. G. (2019). A systematic literature review of machine learning methods applied to predictive maintenance. Computers & Industrial Engineering, 137, 106024.

Casini, M., De Angelis, P., Chiavazzo, E., & Bergamasco, L. (2024). Current trends on the use of deep learning methods for image analysis in energy applications. Energy and AI, 15 , 100330. https://doi.org/10.1016/j.egyai.2023.100330

Chai, J., Zeng, H., Li, A., & Ngai, E. W. (2021). Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Machine Learning with Applications, 6 , 100134.

Chen, L. C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV) (pp. 801–818).

Chen, L., Li, S., Bai, Q., Yang, J., Jiang, S., & Miao, Y. (2021). Review of image classification algorithms based on convolutional neural networks. Remote Sensing, 13 (22), 4712.

Chen, T., Sampath, V., May, M. C., Shan, S., Jorg, O. J., Aguilar Martín, J. J., Stamer, F., Fantoni, G., Tosello, G., & Calaon, M. (2023). Machine learning in manufacturing towards industry 4.0: From ‘for now’ to ‘four-know’. Applied Sciences, 13 (3), 1903. https://doi.org/10.3390/app13031903

Choudhury, A. (2021). The role of machine learning algorithms in materials science: A state of art review on industry 4.0. Archives of Computational Methods in Engineering, 28 (5), 3361–3381. https://doi.org/10.1007/s11831-020-09503-4

Dalzochio, J., Kunst, R., Pignaton, E., Binotto, A., Sanyal, S., Favilla, J., & Barbosa, J. (2020). Machine learning and reasoning for predictive maintenance in industry 4.0: Current status and challenges. Computers in Industry, 123 , 103298.

Fasano, M., Bergamasco, L., Lombardo, A., Zanini, M., Chiavazzo, E., & Asinari, P. (2019). Water/ethanol and 13x zeolite pairs for long-term thermal energy storage at ambient pressure. Frontiers in Energy Research, 7 , 148.

Géron, A. (2022). Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow . O’Reilly Media, Inc.

GrabCAD. (2023). Brake caliper 3D model by Mitulkumar Sakariya from the GrabCAD free library (non-commercial public use). https://grabcad.com/library/brake-caliper-19 . Accessed 18 Apr 2023.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778).

Ho, S., Zhang, W., Young, W., Buchholz, M., Al Jufout, S., Dajani, K., Bian, L., & Mozumdar, M. (2021). Dlam: Deep learning based real-time porosity prediction for additive manufacturing using thermal images of the melt pool. IEEE Access, 9 , 115100–115114. https://doi.org/10.1109/ACCESS.2021.3105362

Ismail, M. I., Yunus, N. A., & Hashim, H. (2021). Integration of solar heating systems for low-temperature heat demand in food processing industry-a review. Renewable and Sustainable Energy Reviews, 147 , 111192.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521 (7553), 436–444.

Leong, W. D., Teng, S. Y., How, B. S., Ngan, S. L., Abd Rahman, A., Tan, C. P., Ponnambalam, S., & Lam, H. L. (2020). Enhancing the adaptability: Lean and green strategy towards the industry revolution 4.0. Journal of cleaner production, 273 , 122870.

Liu, Z., Wang, X., Zhang, Q., & Huang, C. (2019). Empirical mode decomposition based hybrid ensemble model for electrical energy consumption forecasting of the cement grinding process. Measurement, 138 , 314–324.

Li, G., & Zheng, X. (2016). Thermal energy storage system integration forms for a sustainable future. Renewable and Sustainable Energy Reviews, 62 , 736–757.

Maggiore, S., Realini, A., Zagano, C., & Bazzocchi, F. (2021). Energy efficiency in industry 4.0: Assessing the potential of industry 4.0 to achieve 2030 decarbonisation targets. International Journal of Energy Production and Management, 6 (4), 371–381.

Mazzei, D., & Ramjattan, R. (2022). Machine learning for industry 4.0: A systematic review using deep learning-based topic modelling. Sensors, 22 (22), 8641.

Md, A. Q., Jha, K., Haneef, S., Sivaraman, A. K., & Tee, K. F. (2022). A review on data-driven quality prediction in the production process with machine learning for industry 4.0. Processes, 10 (10), 1966. https://doi.org/10.3390/pr10101966

Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., & Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE transactions on pattern analysis and machine intelligence, 44 (7), 3523–3542.


Mishra, S., Srivastava, R., Muhammad, A., Amit, A., Chiavazzo, E., Fasano, M., & Asinari, P. (2023). The impact of physicochemical features of carbon electrodes on the capacitive performance of supercapacitors: a machine learning approach. Scientific Reports, 13 (1), 6494. https://doi.org/10.1038/s41598-023-33524-1

Mumuni, A., & Mumuni, F. (2022). Data augmentation: A comprehensive survey of modern approaches. Array, 16 , 100258. https://doi.org/10.1016/j.array.2022.100258

Mypati, O., Mukherjee, A., Mishra, D., Pal, S. K., Chakrabarti, P. P., & Pal, A. (2023). A critical review on applications of artificial intelligence in manufacturing. Artificial Intelligence Review, 56 (Suppl 1), 661–768.

Narciso, D. A., & Martins, F. (2020). Application of machine learning tools for energy efficiency in industry: A review. Energy Reports, 6 , 1181–1199.

Nota, G., Nota, F. D., Peluso, D., & Toro Lazo, A. (2020). Energy efficiency in industry 4.0: The case of batch production processes. Sustainability, 12 (16), 6631. https://doi.org/10.3390/su12166631

Ocampo-Martinez, C., et al. (2019). Energy efficiency in discrete-manufacturing systems: Insights, trends, and control strategies. Journal of Manufacturing Systems, 52 , 131–145.

Pan, Y., Hao, L., He, J., Ding, K., Yu, Q., & Wang, Y. (2024). Deep convolutional neural network based on self-distillation for tool wear recognition. Engineering Applications of Artificial Intelligence, 132 , 107851.

Qin, J., Liu, Y., Grosvenor, R., Lacan, F., & Jiang, Z. (2020). Deep learning-driven particle swarm optimisation for additive manufacturing energy optimisation. Journal of Cleaner Production, 245 , 118702.

Rahul, M., & Chiddarwar, S. S. (2023). Integrating virtual twin and deep neural networks for efficient and energy-aware robotic deburring in industry 4.0. International Journal of Precision Engineering and Manufacturing, 24 (9), 1517–1534.

Ribezzo, A., Falciani, G., Bergamasco, L., Fasano, M., & Chiavazzo, E. (2022). An overview on the use of additives and preparation procedure in phase change materials for thermal energy storage with a focus on long term applications. Journal of Energy Storage, 53 , 105140.

Shahin, M., Chen, F. F., Hosseinzadeh, A., Bouzary, H., & Shahin, A. (2023). Waste reduction via image classification algorithms: Beyond the human eye with an ai-based vision. International Journal of Production Research, 1–19.

Shen, F., Zhao, L., Du, W., Zhong, W., & Qian, F. (2020). Large-scale industrial energy systems optimization under uncertainty: A data-driven robust optimization approach. Applied Energy, 259 , 114199.

Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 .

Sundaram, S., & Zeid, A. (2023). Artificial Intelligence-Based Smart Quality Inspection for Manufacturing. Micromachines, 14 (3), 570. https://doi.org/10.3390/mi14030570

Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. (2017). Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI conference on artificial intelligence (vol. 31).

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1–9).

Trezza, G., Bergamasco, L., Fasano, M., & Chiavazzo, E. (2022). Minimal crystallographic descriptors of sorption properties in hypothetical mofs and role in sequential learning optimization. npj Computational Materials, 8 (1), 123. https://doi.org/10.1038/s41524-022-00806-7

Vater, J., Schamberger, P., Knoll, A., & Winkle, D. (2019). Fault classification and correction based on convolutional neural networks exemplified by laser welding of hairpin windings. In 2019 9th International Electric Drives Production Conference (EDPC) (pp. 1–8). IEEE.

Wen, L., Li, X., Gao, L., & Zhang, Y. (2017). A new convolutional neural network-based data-driven fault diagnosis method. IEEE Transactions on Industrial Electronics, 65 (7), 5990–5998. https://doi.org/10.1109/TIE.2017.2774777

Willenbacher, M., Scholten, J., & Wohlgemuth, V. (2021). Machine learning for optimization of energy and plastic consumption in the production of thermoplastic parts in sme. Sustainability, 13 (12), 6800.

Zhang, X. H., Zhu, Q. X., He, Y. L., & Xu, Y. (2018). Energy modeling using an effective latent variable based functional link learning machine. Energy, 162 , 883–891.


Acknowledgements

This work has been supported by GEFIT S.p.a.

Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement.

Author information

Authors and affiliations.

Department of Energy, Politecnico di Torino, Turin, Italy

Mattia Casini, Paolo De Angelis, Paolo Vigo, Matteo Fasano, Eliodoro Chiavazzo & Luca Bergamasco

R &D Department, GEFIT S.p.a., Alessandria, Italy

Marco Porrati


Corresponding author

Correspondence to Luca Bergamasco .

Ethics declarations

Conflict of interest statement.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 354 KB)

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casini, M., De Angelis, P., Porrati, M. et al. Machine Learning and image analysis towards improved energy management in Industry 4.0: a practical case study on quality control. Energy Efficiency, 17, 48 (2024). https://doi.org/10.1007/s12053-024-10228-7

Received : 22 July 2023

Accepted : 28 April 2024

Published : 13 May 2024

DOI : https://doi.org/10.1007/s12053-024-10228-7

  • Industry 4.0
  • Energy management
  • Artificial intelligence
  • Machine learning
  • Deep learning
  • Convolutional neural networks
  • Computer vision

Unifying manufacturing data with Fivetran and Databricks

Samir Patel

Manufacturing is not just evolving; it’s undergoing a revolutionary change, fueled by data and AI. 

Manufacturers are re-imagining their businesses to go beyond efficiently providing a unit of production (the next machine, vehicle or airplane) to also focus on creating technology-enabled businesses that are more scalable and resilient. The goal is to achieve higher growth through better end user experiences and stickier revenue streams.  

AI-driven technologies leveraging machine learning (ML) such as predictive maintenance, forecasting and inventory optimization — and more recently generative AI — are emerging as pivotal investments.

The challenge, however, lies in unifying the mountains of data collected from supply chain, production, product performance, sensors and customer feedback. 

One of those key data sources is SAP, a critical system of record at the heart of many manufacturing operations.

In order to achieve their AI ambitions, data leaders in manufacturing must first achieve data readiness by centralizing their many disparate sources, including SAP data, in a manner that is secure, efficient and governed.

Unifying data unlocks AI and advanced analytics

In their current state, many organizations lack the data movement and access tooling to transition from measuring manufacturing reactively to predicting and prescribing action. 

Ensuring data is unified, accessible and governed is the foundation for these proactive projects — enabling a holistic view for supply chain optimization, quality control, customer support and more. 

For manufacturing organizations, that requires tackling SAP as a data source. Navigating their own SAP application environment is not trivial: complexity across cloud and on-prem environments, licensing requirements and an expansive data model with over 100,000 tables all directly complicate data accessibility.

The ripple effect means that manufacturing data professionals, as well as their stakeholders, often can't easily leverage critical ERP data to proactively predict equipment failures to prevent downtime, optimize inventory levels or personalize product offerings. 

How to move data from disparate data sources

The first step to unifying data involves getting data from all of your different sources, including SAP, into a unified data platform that brings the power of AI to your data and people.

Even if you have the resources and time to build these pipelines initially, your engineers will spend outsized time on maintenance, such as adapting to API changes and fixing broken pipelines.

Fivetran offers a robust and efficient solution with an extensive library of 500+ pre-built, fully-managed connectors allowing seamless and automated data replication. This also includes high volume, low-latency and low impact data replication from SAP. 
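
For intuition, the cursor-based incremental replication that managed connectors automate can be sketched in a few lines of Python. This is an illustrative simplification, not Fivetran's actual implementation; the function and field names are hypothetical:

```python
def sync_increment(source_rows, last_cursor):
    """Cursor-based incremental replication sketch (illustrative).

    Only rows whose `updated_at` is later than the previous sync's
    high-water mark (`last_cursor`) are copied; the new cursor becomes
    the latest timestamp seen, so the next run resumes where this one
    left off instead of re-reading the whole table.
    """
    changed = [row for row in source_rows if row["updated_at"] > last_cursor]
    new_cursor = max((row["updated_at"] for row in changed), default=last_cursor)
    return changed, new_cursor
```

Log-based CDC takes this further by reading changes from the database's transaction log rather than scanning timestamps, which lowers source impact and also captures deletes.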

With features like NetWeaver support, Table Explorer and low-latency, log-based CDC replication, Fivetran can help you adhere to SAP licensing restrictions while facilitating secure and efficient data movement.

This empowers manufacturers to unlock the full potential of their operational data in SAP, unifying it with other existing data sources to:

  • Enable a holistic view of supply chain optimization, quality control, customer support and more
  • Predict equipment failures to prevent downtime
  • Optimize inventory levels to personalize product offerings and best meet demand
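
To make the failure-prediction use case concrete, here is a minimal residual-style alarm in plain Python: flag any sensor reading that deviates sharply from its recent baseline. This is a deliberately simple stand-in for the ML models a production system would use, and all names are illustrative:

```python
import statistics

def failure_alerts(readings, window=10, threshold=3.0):
    """Flag sensor readings that drift from recent normal behaviour.

    Each reading is compared against the mean and standard deviation of
    the preceding `window` readings; a deviation beyond `threshold`
    standard deviations raises an alert. This is the simplest form of
    the residual-based alarms used in predictive maintenance.
    """
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu = statistics.fmean(recent)
        sigma = statistics.stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts
```

A real deployment would replace the rolling baseline with a model trained on historical sensor and failure data, but the shape of the output, an early-warning alert tied to a specific asset and timestamp, is the same.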

Choosing a next-generation data platform to unify SAP and non-SAP data  

Companies need a data platform that is tailor-made to address manufacturing’s most pressing needs while allowing the entire organization to use data and AI. 

The Databricks Data Intelligence Platform for Manufacturing is built on a lakehouse architecture and combines the industry's best data management, governance and sharing with the industry's first built-in intelligence engine, putting data and AI in the hands of every person and process. It is supported by an ecosystem of manufacturing-specific Solution Accelerators and partners.

The solution accelerators are developed for the most common and high-impact manufacturing use cases such as customer entity resolution, overall equipment effectiveness and predictive maintenance (IoT). They come with fully functional code and best practices that take you from idea to proof of concept (PoC) in as little as two weeks. 

This simplifies and unifies all of the data workloads key to manufacturing operations, including data processing, streaming analytics, business intelligence with SQL, machine learning and generative AI.

This empowers all of your data users, from data engineers and data scientists to business analysts. By integrating generative AI, Databricks opens data access to non-technical users through natural language interfaces, enabling data-driven decision making that enhances supply chain, production operations, field service and customer experiences.

Manufacturers can take advantage of the full power of all their data and deliver powerful real-time decisions.

Leverage a modern data stack built for manufacturing 

The combination of Fivetran and Databricks provides the critical pieces of a modern data stack. A modern data stack allows manufacturers to seamlessly move and harness all their data through a unified platform, reducing total cost of ownership and enabling AI at scale.

This combination has helped global enterprises seamlessly integrate their SAP systems, enabling them to gain insights across their supply chain and manufacturing, resulting in improved efficiency, reduced downtime and increased productivity.  

Dive deeper into Databricks and Fivetran to start your journey towards becoming a data-forward, AI-empowered manufacturer today.


Related blog posts

How to win at GenAI: Advice from Fivetran & Databricks CEOs

Fivetran + Databricks: Level up your manufacturing operations

How to replicate and analyze SAP ERP data with Fivetran

Speedrun your analytics with Fivetran and Databricks Serverless

Artificial intelligence in strategy

Can machines automate strategy development? The short answer is no. However, there are numerous aspects of strategists’ work where AI and advanced analytics tools can already bring enormous value. Yuval Atsmon is a senior partner who leads the new McKinsey Center for Strategy Innovation, which studies ways new technologies can augment the timeless principles of strategy. In this episode of the Inside the Strategy Room podcast, he explains how artificial intelligence is already transforming strategy and what’s on the horizon. This is an edited transcript of the discussion. For more conversations on the strategy issues that matter, follow the series on your preferred podcast platform .

Joanna Pachner: What does artificial intelligence mean in the context of strategy?

Yuval Atsmon: When people talk about artificial intelligence, they include everything to do with analytics, automation, and data analysis. Marvin Minsky, the pioneer of artificial intelligence research in the 1960s, talked about AI as a “suitcase word”—a term into which you can stuff whatever you want—and that still seems to be the case. We are comfortable with that because we think companies should use all the capabilities of more traditional analysis while increasing automation in strategy that can free up management or analyst time and, gradually, introducing tools that can augment human thinking.

Joanna Pachner: AI has been embraced by many business functions, but strategy seems to be largely immune to its charms. Why do you think that is?

Yuval Atsmon: You’re right about the limited adoption. Only 7 percent of respondents to our survey about the use of AI say they use it in strategy or even financial planning, whereas in areas like marketing, supply chain, and service operations, it’s 25 or 30 percent. One reason adoption is lagging is that strategy is one of the most integrative conceptual practices. When executives think about strategy automation, many are looking too far ahead—at AI capabilities that would decide, in place of the business leader, what the right strategy is. They are missing opportunities to use AI in the building blocks of strategy that could significantly improve outcomes.

I like to use the analogy to virtual assistants. Many of us use Alexa or Siri but very few people use these tools to do more than dictate a text message or shut off the lights. We don’t feel comfortable with the technology’s ability to understand the context in more sophisticated applications. AI in strategy is similar: it’s hard for AI to know everything an executive knows, but it can help executives with certain tasks.

Joanna Pachner: What kind of tasks can AI help strategists execute today?

Yuval Atsmon: We talk about six stages of AI development. The earliest is simple analytics, which we refer to as descriptive intelligence. Companies use dashboards for competitive analysis or to study performance in different parts of the business that are automatically updated. Some have interactive capabilities for refinement and testing.

The second level is diagnostic intelligence, which is the ability to look backward at the business and understand root causes and drivers of performance. The level after that is predictive intelligence: being able to anticipate certain scenarios or options and the value of things in the future based on momentum from the past as well as signals picked up in the market. Both diagnostics and prediction are areas that AI can greatly improve today. The tools can augment executives’ analysis and become areas where you develop capabilities. For example, on diagnostic intelligence, you can organize your portfolio into segments to understand granularly where performance is coming from and do it in a much more continuous way than analysts could. You can try 20 different ways in an hour versus deploying one hundred analysts to tackle the problem.
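
The cheap, repeatable re-slicing Atsmon describes can be sketched in a few lines. This is illustrative only (a real system would sit on a BI or analytics stack, and the record fields here are made up), but it shows why trying twenty different cuts in an hour is feasible:

```python
from collections import defaultdict

def segment_performance(records, dims):
    """Aggregate revenue by an arbitrary combination of dimensions.

    Re-running with a different `dims` list re-slices the portfolio
    instantly (by region, by product, by both, ...), which is the kind
    of cheap, repeatable cut that makes granular diagnostic analysis
    practical without deploying a team of analysts.
    """
    totals = defaultdict(float)
    for rec in records:
        key = tuple(rec[d] for d in dims)
        totals[key] += rec["revenue"]
    return dict(totals)
```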

Predictive AI is both more difficult and more risky. Executives shouldn’t fully rely on predictive AI, but it provides another systematic viewpoint in the room. Because strategic decisions have significant consequences, a key consideration is to use AI transparently in the sense of understanding why it is making a certain prediction and what extrapolations it is making from which information. You can then assess if you trust the prediction or not. You can even use AI to track the evolution of the assumptions for that prediction.
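
One simple form of the transparency described above: for a linear model, a prediction decomposes exactly into per-feature contributions, so an executive can see which inputs pushed it up or down. A minimal sketch, with hypothetical weights and feature names:

```python
def explain_prediction(weights, features, bias=0.0):
    """Decompose a linear model's prediction into per-feature contributions.

    Each contribution is weight * feature value; their sum (plus the
    bias) is the prediction itself, so the explanation is exact. More
    complex models need attribution methods, but the goal is the same:
    understand why the model predicts what it predicts before trusting it.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    return bias + sum(contributions.values()), contributions
```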

Those are the levels available today. The next three levels will take time to develop. There are some early examples of AI advising actions for executives’ consideration that would be value-creating based on the analysis. From there, you go to delegating certain decision authority to AI, with constraints and supervision. Eventually, there is the point where fully autonomous AI analyzes and decides with no human interaction.

Joanna Pachner: What kind of businesses or industries could gain the greatest benefits from embracing AI at its current level of sophistication?

Yuval Atsmon: Every business probably has some opportunity to use AI more than it does today. The first thing to look at is the availability of data. Do you have performance data that can be organized in a systematic way? Companies that have deep data on their portfolios down to business line, SKU, inventory, and raw ingredients have the biggest opportunities to use machines to gain granular insights that humans could not.

Companies whose strategies rely on a few big decisions with limited data would get less from AI. Likewise, those facing a lot of volatility and vulnerability to external events would benefit less than companies with controlled and systematic portfolios, although they could deploy AI to better predict those external events and identify what they can and cannot control.

Third, the velocity of decisions matters. Most companies develop strategies every three to five years, which then become annual budgets. If you think about strategy in that way, the role of AI is relatively limited other than potentially accelerating analyses that are inputs into the strategy. However, some companies regularly revisit big decisions they made based on assumptions about the world that may have since changed, affecting the projected ROI of initiatives. Such shifts would affect how you deploy talent and executive time, how you spend money and focus sales efforts, and AI can be valuable in guiding that. The value of AI is even bigger when you can make decisions close to the time of deploying resources, because AI can signal that your previous assumptions have changed from when you made your plan.

Joanna Pachner: Can you provide any examples of companies employing AI to address specific strategic challenges?

Yuval Atsmon: Some of the most innovative users of AI, not coincidentally, are AI- and digital-native companies. Some of these companies have seen massive benefits from AI and have increased its usage in other areas of the business. One mobility player adjusts its financial planning based on pricing patterns it observes in the market. Its business has relatively high flexibility to demand but less so to supply, so the company uses AI to continuously signal back when pricing dynamics are trending in a way that would affect profitability or where demand is rising. This allows the company to quickly react to create more capacity because its profitability is highly sensitive to keeping demand and supply in equilibrium.

Joanna Pachner: Given how quickly things change today, doesn’t AI seem to be more a tactical than a strategic tool, providing time-sensitive input on isolated elements of strategy?

Yuval Atsmon: It’s interesting that you make the distinction between strategic and tactical. Of course, every decision can be broken down into smaller ones, and where AI can be affordably used in strategy today is for building blocks of the strategy. It might feel tactical, but it can make a massive difference. One of the world’s leading investment firms, for example, has started to use AI to scan for certain patterns rather than scanning individual companies directly. AI looks for consumer mobile usage that suggests a company’s technology is catching on quickly, giving the firm an opportunity to invest in that company before others do. That created a significant strategic edge for them, even though the tool itself may be relatively tactical.

Joanna Pachner: McKinsey has written a lot about cognitive biases  and social dynamics that can skew decision making. Can AI help with these challenges?

Yuval Atsmon: When we talk to executives about using AI in strategy development, the first reaction we get is, “Those are really big decisions; what if AI gets them wrong?” The first answer is that humans also get them wrong—a lot. [Amos] Tversky, [Daniel] Kahneman, and others have proven that some of those errors are systemic, observable, and predictable. The first thing AI can do is spot situations likely to give rise to biases. For example, imagine that AI is listening in on a strategy session where the CEO proposes something and everyone says “Aye” without debate and discussion. AI could inform the room, “We might have a sunflower bias here,” which could trigger more conversation and remind the CEO that it’s in their own interest to encourage some devil’s advocacy.

We also often see confirmation bias, where people focus their analysis on proving the wisdom of what they already want to do, as opposed to looking for a fact-based reality. Just having AI perform a default analysis that doesn’t aim to satisfy the boss is useful, and the team can then try to understand why that is different than the management hypothesis, triggering a much richer debate.

In terms of social dynamics, agency problems can create conflicts of interest. Every business unit [BU] leader thinks that their BU should get the most resources and will deliver the most value, or at least they feel they should advocate for their business. AI provides a neutral way based on systematic data to manage those debates. It’s also useful for executives with decision authority, since we all know that short-term pressures and the need to make the quarterly and annual numbers lead people to make different decisions on the 31st of December than they do on January 1st or October 1st. Like the story of Ulysses and the sirens, you can use AI to remind you that you wanted something different three months earlier. The CEO still decides; AI can just provide that extra nudge.

Joanna Pachner: It’s like you have Spock next to you, who is dispassionate and purely analytical.

Yuval Atsmon: That is not a bad analogy—for Star Trek fans anyway.

Joanna Pachner: Do you have a favorite application of AI in strategy?

Yuval Atsmon: I have worked a lot on resource allocation, and one of the challenges, which we call the hockey stick phenomenon, is that executives are always overly optimistic about what will happen. They know that resource allocation will inevitably be defined by what you believe about the future, not necessarily by past performance. AI can provide an objective prediction of performance starting from a default momentum case: based on everything that happened in the past and some indicators about the future, what is the forecast of performance if we do nothing? This is before we say, “But I will hire these people and develop this new product and improve my marketing”— things that every executive thinks will help them overdeliver relative to the past. The neutral momentum case, which AI can calculate in a cold, Spock-like manner, can change the dynamics of the resource allocation discussion. It’s a form of predictive intelligence accessible today and while it’s not meant to be definitive, it provides a basis for better decisions.
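
The neutral momentum case can be illustrated with a least-squares trend extrapolation: no initiatives, just the continuation of past performance. This is a deliberately simple sketch; a real momentum model would blend many indicators rather than a single series:

```python
def momentum_forecast(history, periods=1):
    """Project a 'do nothing' baseline by extrapolating the linear trend.

    Fits slope and intercept by ordinary least squares over the
    historical series, then extends the line `periods` steps ahead.
    The result is the neutral momentum case, before any management
    initiatives are layered on top.
    """
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    ss_xy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
    ss_xx = sum((x - x_mean) ** 2 for x in range(n))
    slope = ss_xy / ss_xx
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n - 1 + p) for p in range(1, periods + 1)]
```

The gap between this baseline and the plan an executive presents is exactly the "hockey stick" the discussion can then interrogate.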

Joanna Pachner: Do you see access to technology talent as one of the obstacles to the adoption of AI in strategy, especially at large companies?

Yuval Atsmon: I would make a distinction. If you mean machine-learning and data science talent or software engineers who build the digital tools, they are definitely not easy to get. However, companies can increasingly use platforms that provide access to AI tools and require less from individual companies. Also, this domain of strategy is exciting—it’s cutting-edge, so it’s probably easier to get technology talent for that than it might be for manufacturing work.

The bigger challenge, ironically, is finding strategists or people with business expertise to contribute to the effort. You will not solve strategy problems with AI without the involvement of people who understand the customer experience and what you are trying to achieve. Those who know best, like senior executives, don’t have time to be product managers for the AI team. An even bigger constraint is that, in some cases, you are asking people to get involved in an initiative that may make their jobs less important. There could be plenty of opportunities for incorporating AI into existing jobs, but it’s something companies need to reflect on. The best approach may be to create a digital factory where a different team tests and builds AI applications, with oversight from senior stakeholders.

Joanna Pachner: Do you think this worry about job security and the potential that AI will automate strategy is realistic?

Yuval Atsmon: The question of whether AI will replace human judgment and put humanity out of its job is a big one that I would leave for other experts.

The pertinent question is shorter-term automation. Because of its complexity, strategy would be one of the later domains to be affected by automation, but we are seeing it in many other domains. However, the trend for more than two hundred years has been that automation creates new jobs, although ones requiring different skills. That doesn’t take away the fear some people have of a machine exposing their mistakes or doing their job better than they do it.

Joanna Pachner: We recently published an article about strategic courage in an age of volatility  that talked about three types of edge business leaders need to develop. One of them is an edge in insights. Do you think AI has a role to play in furnishing a proprietary insight edge?

Yuval Atsmon: One of the challenges most strategists face is the overwhelming complexity of the world we operate in—the number of unknowns, the information overload. At one level, it may seem that AI will provide another layer of complexity. In reality, it can be a sharp knife that cuts through some of the clutter. The question to ask is, Can AI simplify my life by giving me sharper, more timely insights more easily?

Joanna Pachner: You have been working in strategy for a long time. What sparked your interest in exploring this intersection of strategy and new technology?

Yuval Atsmon: I have always been intrigued by things at the boundaries of what seems possible. Science fiction writer Arthur C. Clarke’s second law is that to discover the limits of the possible, you have to venture a little past them into the impossible, and I find that particularly alluring in this arena.

AI in strategy is in very nascent stages but could be very consequential for companies and for the profession. For a top executive, strategic decisions are the biggest way to influence the business, other than maybe building the top team, and it is amazing how little technology is leveraged in that process today. It’s conceivable that competitive advantage will increasingly rest in having executives who know how to apply AI well. In some domains, like investment, that is already happening, and the difference in returns can be staggering. I find helping companies be part of that evolution very exciting.

Related articles

Strategic courage in an age of volatility

Bias Busters Collection


  24. Machine Learning and image analysis towards improved energy ...

    With the advent of Industry 4.0, Artificial Intelligence (AI) has created a favorable environment for the digitalization of manufacturing and processing, helping industries to automate and optimize operations. In this work, we focus on a practical case study of a brake caliper quality control operation, which is usually accomplished by human inspection and requires a dedicated handling system ...

  25. Unifying manufacturing data with Fivetran and Databricks

    Case studies. Resource center. Documentation. connect. Events. News. Support. ... AI-driven technologies leveraging machine learning (ML) such as predictive maintenance, ... key to manufacturing operations, including data processing, streaming analytics, business intelligence with SQL, machine learning and generative AI. This will empower all ...

  26. Smart Cities

    The machine learning algorithms used are from the scikit-learn package , which contains the functions used in the "Training ML model" and "Testing/predict alarms" processes on the flowchart; these are discussed shortly. How each machine learning algorithm works is presented separately in Section 3.1.1.

  27. A predictive model for oil well maintenance: a case study in Kazakhstan

    This paper proposes a predictive model to help oil workers build a reliable model for identifying oilwell failures. It can help geologists experienced with Machine Learning to improve the accuracy of failure identification and a more accurate approach to well-maintenance planning. This study is based on output data statistics such as per-well daily oil flowmeter readings.

  28. AI strategy in business: A guide for executives

    Predictive AI is both more difficult and more risky. Executives shouldn't fully rely on predictive AI, but it provides another systematic viewpoint in the room. Because strategic decisions have significant consequences, a key consideration is to use AI transparently in the sense of understanding why it is making a certain prediction and what ...

  29. Dataset size versus homogeneity: A machine learning study on pooling

    ObjectiveThis study proposes a way of increasing dataset sizes for machine learning tasks in Internet-based Cognitive Behavioral Therapy through pooling interventions. ... which have previously been found to be predictive of both dropout and health outcomes. 46 To ... Scharfenberger J, Boß L, et al. Finding the best match—a case study on the ...

  30. Predicting the Spread of a Pandemic Using Machine Learning: A Case

    Pandemics can result in large morbidity and mortality rates that can cause significant adverse effects on the social and economic situations of communities. Monitoring and predicting the spread of pandemics helps the concerned authorities manage the required resources, formulate preventive measures, and control the spread effectively. In the specific case of COVID-19, the UAE (United Arab ...