
Are You Prepared to Manage Your AI Models?

The countless benefits of artificial intelligence in the manufacturing landscape are well documented. From reducing equipment downtime and scrap to improving efficiency and production yield, AI models will continue to shape how manufacturing businesses operate. With the seemingly endless possibilities AI offers, however, it is easy for businesses to lose sight of what it takes to build, implement and sustain such advanced models. And given that 70 percent of AI projects fail, these requirements are very real.


Consider the process for just one AI model:

First, a data scientist performs data preprocessing, cleaning, feature selection and model training. At each stage there are countless tools, frameworks and libraries to choose from, depending on the individual data scientist's preference. Once these stages are complete, the model can be exported as code or binary files, or wrapped in a Docker container.
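
To make this concrete, here is a minimal sketch of those stages using scikit-learn and joblib, one of many possible toolchains. The CSV path, feature names and target column are illustrative placeholders, not part of any specific deployment:

```python
# Minimal sketch of the data-science stages described above, using
# scikit-learn and joblib (one of many possible toolchains). The CSV
# path, feature names and target column are illustrative placeholders.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Data preprocessing: load raw sensor data and drop incomplete rows.
df = pd.read_csv("sensor_data.csv").dropna()

# Feature selection: pick the columns believed to drive the target.
features = ["temperature", "vibration", "pressure"]  # placeholders
X, y = df[features], df["yield"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Model training: scaling and regressor bundled in a single pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("forest", RandomForestRegressor(n_estimators=100)),
])
model.fit(X_train, y_train)
print("holdout R^2:", model.score(X_test, y_test))

# Export: serialize the fitted pipeline as a binary artifact. The same
# artifact could instead be wrapped in a Docker image for deployment.
joblib.dump(model, "yield_model.joblib")
```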


Next, a system administrator collaborates with the data scientist to define connections to the data sources that deliver the required inputs to the AI model. Once the model is trained and tested, a DevOps engineer deploys it to production and, in a perfect scenario, prepares the tools the data scientist needs to monitor model performance.
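
As a rough illustration of that handoff, the sketch below treats a SQLite table as the defined data source and feeds its latest rows to the exported model. The database path, table and column names are assumptions made for the example; a real plant might connect to a historian, message bus or REST feed instead:

```python
# A minimal sketch of wiring a trained model to a defined data source.
# Here the "data source" is a SQLite table of recent sensor readings;
# the table and column names are illustrative placeholders.
import sqlite3

import joblib
import pandas as pd

model = joblib.load("yield_model.joblib")  # artifact from the previous step

def fetch_latest_inputs(db_path: str) -> pd.DataFrame:
    """Pull the most recent sensor rows the model expects as input."""
    with sqlite3.connect(db_path) as conn:
        return pd.read_sql_query(
            "SELECT temperature, vibration, pressure "
            "FROM sensor_readings ORDER BY ts DESC LIMIT 100",
            conn,
        )

inputs = fetch_latest_inputs("plant.db")
predictions = model.predict(inputs)
```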


This process becomes complicated when a large number of business, IT and engineering stakeholders are involved. In these scenarios, roles and responsibilities become hazy, and tracking a model in deployment becomes even more difficult. Further complicating matters are the scalability and real-time data streaming requirements of production environments. Simply put, organizations invest a lot of money to get a model into production, and they should be prepared to protect that investment.


The good news is that there has been a recent surge in the development of machine learning tools and libraries, including AutoML frameworks and dedicated data science environments. These developments have begun to streamline the model development process and will support an increase in machine learning model deployments. But new challenges will arise as companies move from model proof of concept to production.


When it comes to IoT, models do not change, but the physical world does: equipment degrades, which in turn changes the characteristics of the data. When a model receives data it does not recognize, or data that is "out of training range," it will continue to predict an outcome as it was trained to do, regardless of the poor-quality input. When this happens, output accuracy drops, eventually creating production and business risk. AI models deployed in production therefore require continuous evaluation to find the exact moment a model needs retraining and redeployment.
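
One simple way to catch out-of-training-range data is to record each feature's observed range at training time and flag live inputs that fall outside it. The sketch below illustrates that general idea; the per-feature bounds and the drift fraction are assumptions made for the example, not any particular product's method:

```python
# A minimal sketch of "out of training range" detection: per-feature
# bounds are recorded at training time, and live inputs are flagged
# when they fall outside those bounds, even though the model would
# still happily produce a prediction for them.
import pandas as pd

def training_bounds(X_train: pd.DataFrame) -> dict:
    """Record the observed range of each feature during training."""
    return {c: (X_train[c].min(), X_train[c].max()) for c in X_train.columns}

def out_of_range(X_live: pd.DataFrame, bounds: dict) -> pd.Series:
    """True for rows where any feature lies outside its training range."""
    mask = pd.Series(False, index=X_live.index)
    for col, (lo, hi) in bounds.items():
        mask |= (X_live[col] < lo) | (X_live[col] > hi)
    return mask

# If too many recent rows are out of range, the input distribution has
# drifted and the model becomes a candidate for retraining:
# drift_fraction = out_of_range(X_live, bounds).mean()
```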


To help organizations protect their AI investments, Cerebre built an AI Model Manager.

Our Model Manager lets users manage all deployed models from one screen, regardless of which internal group or external vendor built each model, or which cloud runs it. The Model Manager connects to defined data sources, deploys AI models automatically, monitors model performance and schedules model retraining when needed. With our cortex library, models already deployed in production can also be connected to the Model Manager.


Deployed models are monitored not only for prediction accuracy but also for input-data quality. The results are compared against thresholds that define the acceptance point. Once the lowest acceptable accuracy level is reached, the Model Manager can initiate the retraining process and/or alert the administrator. All information, including data inputs, outputs and model versions, is stored and easily accessible.
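
As a rough sketch of how such threshold-based monitoring can work in general, the example below checks an evaluation window's accuracy and input quality against acceptance thresholds and decides whether to retrain or alert. The thresholds, metric names and actions are illustrative assumptions, not Cerebre's actual implementation:

```python
# A minimal sketch of threshold-based model monitoring: rolling
# accuracy and input quality are checked against acceptance
# thresholds, and retraining is triggered when accuracy falls below
# the lowest acceptable level. All values here are illustrative.
from dataclasses import dataclass

@dataclass
class MonitorConfig:
    min_accuracy: float = 0.85      # lowest acceptable accuracy
    max_missing_rate: float = 0.05  # input-data quality threshold

def evaluate(accuracy: float, missing_rate: float, cfg: MonitorConfig) -> str:
    """Decide what the model manager should do for this evaluation window."""
    if missing_rate > cfg.max_missing_rate:
        return "alert: input-data quality below acceptance threshold"
    if accuracy < cfg.min_accuracy:
        return "retrain: accuracy below lowest acceptable level"
    return "ok"

# Each decision, along with the inputs, outputs and model version that
# produced it, would be logged so the history stays easily accessible.
print(evaluate(accuracy=0.81, missing_rate=0.01, cfg=MonitorConfig()))
```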


To learn more about Cerebre's Model Manager, please contact support@cerebre.io.
