How to Make AI/ML Initiatives Count: Optimizing MLOps for Production

Artificial intelligence (AI) and machine learning (ML) are evolving at a rapid pace. At the same time, more and more organizations are striving to become data-driven, and quickly discovering that AI/ML models play a key role in getting there. As companies look to answer critical business questions in more powerful ways, these models give them the ability to uncover competitive patterns in their data that are not immediately apparent to human beings.

One of the most prevalent use cases for AI/ML is personalization, in which organizations need to ascertain subtleties such as customer intent, something inherently difficult to quantify with numerical data alone. Personalization is everywhere, even in something as simple as entering a phrase such as “sports car” into a search engine. Behind the scenes, the engine uses AI/ML to determine whether the images in the search results match the query and include the necessary related attributes.

Challenges Facing AI/ML Operationalization

Today, it is relatively straightforward for a data scientist to build an AI/ML model, especially with the proliferation of open-source tools. However, challenges arise when organizations need to deploy numerous models, many of which must be organized into groups that work in concert with one another.

Consider the person searching for a “sports car” as an example. To display relevant ads based on the content of that search, an organization would need to deploy multiple models and then monitor their results, running A/B tests to generate statistics showing which models are working best, and under which specific circumstances. It then needs to swap them out based on minute-by-minute performance data and new learning. Ideally, the models would also be “explainable,” meaning they are provisioned so that their internal mechanics can be made clear to the user in human terms. Without this layer of explainability, data scientists, let alone business users, cannot measure the success of any individual model, and the overall AI/ML initiative cannot be considered successful.
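The A/B comparison described above can be sketched with a simple two-proportion z-test on click counts from two competing models. This is a minimal illustration, not the method of any particular platform; the function name, counts, and the 5% significance threshold are all assumptions for the sake of the example.

```python
import math

def ab_compare(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's click-through rate
    differ from variant A's? Returns both rates and the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both models perform alike
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical click counts from two ad-ranking models
p_a, p_b, z = ab_compare(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(f"model A rate={p_a:.3f}, model B rate={p_b:.3f}, z={z:.2f}")
# |z| > 1.96 would indicate a significant difference at the 5% level
```

In practice a platform would run many such comparisons continuously, segmented by circumstance (device, geography, time of day), which is exactly why automation matters.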


In addition, deploying multiple models opens up the risk of poor data quality, as it multiplies the number of data-use instances, as well as the risk of models becoming unstable over time. Finally, it becomes difficult to ensure that each individual model is able to consistently produce reliable results.
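One common way to catch the instability described above is to monitor input drift, for instance with the Population Stability Index (PSI), which compares live traffic against the training baseline. The implementation below is a minimal sketch assuming a single numeric feature; the thresholds cited in the docstring are widely used rules of thumb, not a standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate."""
    lo, hi = min(expected), max(expected)

    def frac(data):
        # Histogram over the baseline's range; out-of-range values clamp
        # to the edge bins, with light smoothing to avoid log(0).
        counts = [0] * bins
        for x in data:
            idx = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        return [(c + 1e-6) / (len(data) + 1e-6 * bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A platform would compute this per feature on a schedule and raise an alert when the index crosses a threshold, prompting retraining before results degrade.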

To surmount these challenges, organizations need to take a step back and think beyond the deployment of multiple one-off models. They need to treat model operations as a core function and implement an operations platform for their models, one that provides machine learning operations (MLOps) as well as model governance.

Model Operations and Governance as a Core Foundation

As with any major undertaking, it is best to begin a holistic approach to AI/ML initiatives with business leaders, because without their support, the effort is bound to fail. Fortunately, many business leaders are realizing the importance of AI/ML and the role it plays, but it is critical to start from the perspective of the business. It is also important to maintain steady collaboration between business and technology teams throughout the entire effort.

On the technology side, it is prudent to demonstrate a few quick wins for the business teams. This means first bringing all of the necessary data together and then deploying an MLOps/governance platform to run a suite of models for solving a few discrete problems. Using the monitoring and testing capabilities found in the platform, the next step is to demonstrate consistent results over a relatively short period of time. Once that is accomplished, the business should begin trusting the AI/ML capabilities, and the initiative can be expanded. Finally, it is important to communicate that because the models are well trained, they continue to perform well even as data changes due to seasonality and other factors, as proven through A/B tests.

The MLOps/governance platform should also provide effective model explainability, which helps business users understand and appreciate model results, since further AI/ML adoption and acceptance depend on trust, transparency, and validation. This improves business collaboration and encourages business users to communicate their ideas to the data science teams so that they can be incorporated into the models in a virtuous feedback loop.

Achieving seamless collaboration between business and technology stakeholders, facilitated by an MLOps/governance platform, is extremely powerful for maximizing the potential of AI/ML initiatives to solve real-world business problems. To achieve this, business stakeholders must educate their technology counterparts on the ultimate goals of AI/ML initiatives. In turn, it is equally important that technology stakeholders educate their business colleagues on the proper use of models, because a model deployed for a purpose other than its intended one is bound to fail. Technology stakeholders should also showcase different capabilities to business users, such as monitoring, dashboards, analytic reports, or any other features that enhance their experience using the models.

The governance capabilities of MLOps/governance platforms should provide users with a bird’s eye view into all models in production, so they can be immediately apprised of any model behavior that could raise a red flag. It is important to mention that many organizations build tools that perform several of the same functions as full-featured MLOps/governance platforms. However, organizations cannot always maintain a homegrown tool and keep it up to date. Companies that use homegrown solutions to operationalize models may be losing both time and money.

Developers of MLOps/governance platforms often have separate innovation teams dedicated to evolving every aspect of the platform, including all algorithms and ways of solving problems. MLOps/governance platforms also provide advanced features like “shadow AI” in which companies can monitor all aspects of model behavior, both good and bad, in a completely safe staging environment that is nonetheless realistic.
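Shadow deployment of the kind described can be sketched as a wrapper that always serves the production model's answer while exercising the candidate model on the same live input and logging any disagreement. The `serve` function and the model callables here are illustrative assumptions, not the API of any real platform.

```python
import logging

def serve(request, prod_model, shadow_model, log=logging.getLogger("shadow")):
    """Return the production prediction; run the shadow model on the same
    input and log disagreements, but never expose its output to the user."""
    prod_out = prod_model(request)
    try:
        shadow_out = shadow_model(request)
        if shadow_out != prod_out:
            log.info("disagreement on %r: prod=%r shadow=%r",
                     request, prod_out, shadow_out)
    except Exception:
        # A crashing shadow model must never affect live traffic
        log.exception("shadow model failed on %r", request)
    return prod_out
```

Because the shadow model sees real traffic but cannot influence responses, both its good and bad behavior can be observed safely before promotion, which is the point of the staging approach described above.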

From Gut-based Decisions to Data-driven Insights

Soon, all data-driven initiatives will touch on AI/ML capabilities if they are not already founded upon them. Traditional BI and analytics are not going away, but they will certainly be enhanced by AI/ML capabilities. Despite many technological advances, most businesses today still operate by a form of “gut thinking.”

As organizations evolve, they will need broader and deeper kinds of insights, which will certainly be provided by AI/ML. Until recently, the focus was on data scientists and the model-creation process but now, the focus is shifting to the ML engineers who are responsible for bringing all models into production. Today, and for the foreseeable future, it is important that the models in production do what the business wants them to do and are constantly learning to deliver improved performance.

About the Author

Lakshmi Randall is Vice President of Product Marketing at Datatron, a pioneer in AI ModelOps and governance at scale. For more information visit www.datatron.com or follow them @LakshmiLJ

Contributor

Lakshmi Randall is a global software marketing leader with a proven track record of delivering rapid growth through innovative marketing and go-to-market (GTM) strategies. She has held global leadership roles in category-defining companies, managing and leading functions such as product marketing, solution marketing, customer marketing, sales enablement, competitive intelligence, analyst relations, and public relations. She has extensive experience driving strategic portfolio and product positioning and growth, with a deep understanding of market dynamics and competitor and customer insights.

Opinions expressed by contributors are their own.
