Connecting the Business to Data Science with ModelOps

The inner workings of data science, much like those of opaque, multi-parameter machine learning models, have traditionally been an enigma to the average business user. Business users simply want accurate predictions to do their jobs better, yet how exactly cognitive computing aids that objective, and whether it actually is doing so, has rarely been clear to them.

With all the media fervor and enterprise spending on statistical applications of Artificial Intelligence, it’s no longer acceptable for organizations to continue investing in data science without some assurance of the impact, positive or otherwise, their initiatives are producing. According to Datatron CEO Harish Doddi, “Over the last few years, so many organizations have invested in AI talent. ‘What is the ROI of these models?’ is the question that’s coming from the business.”

The concept of ModelOps (aspects of which have been retooled from model management) was designed to answer that longstanding question and others. When properly implemented, it provides a number of metrics that link data science concepts to tangible business outputs to ascertain whether or not machine learning deployments are achieving their objectives.

This practice clarifies some of the murkier points of data science, establishes measurable standards by which its progress, or lack thereof, is gauged, and delivers one final boon with the potential to supersede almost all others: connecting how models are developed with how they run in production. “If you want to really apply AI for business value, you need to have the proper connection between the development side of the models and the production side of the models,” Doddi noted.

ModelOps is the link between producing cognitive computing models with data science and operationalizing them so they keep functioning properly, letting organizations truly profit from what’s otherwise an easily misunderstood business domain.

Dashboard Intelligence

One of the most accessible means of practicing ModelOps is monitoring machine learning models with interactive dashboards attuned to the impact of data science concepts on business outcomes. The goal is to avoid the fairly common situation in which “models make decisions from years back and never get updated for quite some time,” Doddi remarked. Critical ModelOps metrics therefore include things like bias and model drift that can potentially compromise the business value of data science. Solutions in this space track these and other metrics for governing models in production and surface them “as part of a dashboard and actually compute something called a health score,” Doddi revealed. “The health score is a representation of how effective that model is to the organization. This is a score from 0 to 100.”
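
Exactly how that score is computed isn’t spelled out, but the idea of rolling several risk metrics into a single 0-to-100 KPI can be illustrated with a minimal sketch. The metric names, weights, and scaling below are assumptions made for illustration, not Datatron’s actual formula.

```python
# A minimal sketch of how a composite model "health score" might be derived.
# Datatron's actual formula is not public; the metric names, weights, and
# scaling here are illustrative assumptions only.

def health_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Roll per-category risk metrics (0.0 = healthy, 1.0 = worst case)
    into a single 0-100 score, where 100 means fully healthy."""
    total_weight = sum(weights.values())
    weighted_risk = sum(weights[name] * metrics.get(name, 0.0) for name in weights)
    return round(100 * (1 - weighted_risk / total_weight), 1)

# Example: a model showing moderate drift, little bias, and mild accuracy decay.
score = health_score(
    metrics={"drift": 0.40, "bias": 0.05, "accuracy_decay": 0.10},
    weights={"drift": 0.4, "bias": 0.3, "accuracy_decay": 0.3},
)
print(score)  # 79.5 -- a dashboard could flag anything below a customer-defined threshold
```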

Dashboards are customizable, use charts and graphs, and provide quick, at-a-glance intelligence into how models are performing against business objectives, such as optimizing store resources across statewide or nationwide retail locations. “It’s extremely difficult for a business person to understand all these metrics because some are relating to data science, some are relating to IT, some are relating to risk and governance factors,” Doddi commented. “The score is like a KPI metric for the model so you can go and say what are the models that are at risk? And the risk is something that is defined by the customer.”

Drift

The notion of model drift is central to data science and ModelOps. It occurs between the time models are initially calibrated (or recalibrated) and the time they’re running in production. Drift is the process by which machine learning models lose their effectiveness as their confidence diminishes and their accuracy declines. It often happens subtly, so organizations may not even know it’s taking place. With accomplished ModelOps approaches, however, the aforementioned health score serves as an indicator of drift alongside other metrics.

Once users “observe the score they can actually drill down into individual categories, whether that’s drift or a bias issue,” Doddi disclosed. Drift happens for several reasons. In some cases, models might be constructed for one purpose or geographic region and deployed in another. Other times, the data a model encounters in production is at variance with the data it was trained on. Intelligent dashboards can detect all of these instances.
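
Doddi doesn’t detail how drift is measured under the hood, but one widely used approach is to compare the distribution a feature had at training time with the distribution the model is seeing in production. The sketch below does this with the Population Stability Index on a single hypothetical feature; the data and the rough 0.2 alert threshold are illustrative assumptions rather than any vendor’s methodology.

```python
# One common way to quantify drift (not specific to any vendor): compare a
# feature's training-time distribution with what the model sees in production
# using the Population Stability Index (PSI). Values above roughly 0.2 are
# often treated as a signal to review or retrain the model.
import numpy as np

def population_stability_index(train: np.ndarray, prod: np.ndarray, bins: int = 10) -> float:
    # Bin both samples using cut points taken from the training distribution.
    cuts = np.quantile(train, np.linspace(0, 1, bins + 1))[1:-1]
    train_pct = np.bincount(np.searchsorted(cuts, train), minlength=bins) / len(train)
    prod_pct = np.bincount(np.searchsorted(cuts, prod), minlength=bins) / len(prod)
    # Floor the proportions so empty bins don't produce log(0) or division by zero.
    train_pct = np.clip(train_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - train_pct) * np.log(prod_pct / train_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, 10_000)    # population the model was trained on
production_income = rng.normal(48_000, 15_000, 10_000)  # shifted population seen in production
print(population_stability_index(training_income, production_income))  # well above 0.2
```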

Bias and Beyond

Bias is induced in cognitive computing models when limitations in the initial training data cause similar limits, or bias, in the predictions those models make. ModelOps dashboards can recognize instances in which, for example, models for approving loans are only selecting men. They also offer anomaly detection and additional metrics that expose the intricacies of a model’s performance in production, providing the impetus for recalibrating models, retraining them, or altering how or where they’re deployed. Thus, these dashboards are an effective intermediary between data science and the business.
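
To make the loan-approval example concrete, the sketch below runs the kind of group-level check such a dashboard might apply, comparing approval rates across groups with the common four-fifths heuristic. The data, group labels, and threshold are hypothetical and aren’t drawn from any particular ModelOps product.

```python
# Illustrative only: the kind of group-level fairness check a monitoring
# dashboard might run over a loan-approval model's recent decisions. The
# four-fifths (80%) rule used here is a common heuristic, not a claim about
# how any particular ModelOps product measures bias.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, approved) pairs pulled from production logs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += was_approved
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact_alert(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag the model when any group's approval rate is below 80% of the best group's."""
    best = max(rates.values())
    return any(rate / best < threshold for rate in rates.values())

rates = approval_rates([("men", True)] * 90 + [("men", False)] * 10 +
                       [("women", True)] * 55 + [("women", False)] * 45)
print(rates)                          # {'men': 0.9, 'women': 0.55}
print(disparate_impact_alert(rates))  # True: 0.55 / 0.9 is about 0.61, below the 0.8 threshold
```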

“Using these type of dashboards actually really helps someone who is a business person, because for these models that were actually developed in the last few years, it shows the amount of business that they’re generating: whether it is revenue, or new users, or all of these things,” Doddi commented. “And that is driven by the score, because the models are subject to change depending on what data they’re receiving from their customers. That’s why the score capability is a good interface for them.”   
