Prioritizing Responsible AI with Hybrid AI

The growth of artificial intelligence (AI) has been phenomenal, and it is not slowing down. According to Fortune Business Insights, the market, valued at $27.23 billion in 2019, will reach $266.92 billion by 2027. That is nearly a tenfold increase in only eight years, driven by a compound annual growth rate (CAGR) of over 33%. Across just about every industry, within applications and systems we never imagined, AI is making significant inroads.   

Despite this rosy outlook, public trust in AI is strained. With new (and very public) privacy laws taking hold, consumers are becoming hypervigilant about their personal information and protective of how organizations use it. This puts the onus on enterprises to act more responsibly in their AI initiatives. 

A Matter of Trust

We know the power of AI. It can enable everything from better decision making to streamlined workflows to improved cost efficiency. But in truth, we know very little about the AI mechanism itself. Can we explain how a model turns a given input into an output? This is no longer a question of intrigue. It is a matter of corporate integrity and data governance. 

As companies seek to ensure fair and equal treatment of all people, they must avoid “black box” approaches to AI that leave them vulnerable. They will never completely eradicate bias from their platforms, but they should know where and why it exists. Moreover, they should be able to use that information to resolve issues quickly. 

A system that performs tasks and reaches results must be auditable, especially as concerns and questions over issues like privacy and bias grow. One that cannot be audited presents a level of operational risk no enterprise should accept. Any issue, big or small, could quickly damage a company’s reputation, chase away or even harm consumers, and undermine support for current and future AI initiatives. 

This is leading smarter enterprises to reexamine their AI approaches. After all, if they cannot trust or explain what will occur, how can consumers?   

Behind the “Black Box”

Not every AI system needs to be fully explainable, but each should reveal enough for a layperson to understand. For instance, a chatbot that follows a basic logic tree is understandable: it is easy to see how a routine consumer question (input) leads to a given response (output). A system that weighs several input parameters, such as one for application or claims processing, is more complex and often produces more ambiguous results, yet it should still provide some visibility into the logic that generated those results. 
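
To make that concrete, here is a minimal sketch of such a logic-tree chatbot in Python. The questions and canned responses are hypothetical, but the point holds: every output can be traced back to the exact rule that produced it.

    # Minimal, hypothetical logic-tree chatbot: each keyword maps to one canned
    # response, so the path from input to output is fully traceable.
    RESPONSES = {
        "hours": "We are open 9am-5pm, Monday through Friday.",
        "return": "You can return any item within 30 days for a full refund.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }

    def answer(question: str) -> str:
        question = question.lower()
        for keyword, response in RESPONSES.items():
            if keyword in question:
                return response  # the matched keyword is the explanation
        return "Let me connect you with a human agent."

    print(answer("What are your hours?"))  # the "hours" rule fired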

Machine learning (ML) and deep learning (DL) models are notoriously difficult AI approaches to explain and, thus, are commonly labeled as “black box” approaches.

  • ML uses algorithms to identify patterns and extract information through inference rather than actual knowledge. This information is often used for predictive measures such as product recommendations or dynamic pricing.
  • DL is a subset of ML but with many more layers to the algorithm. This creates an artificial neural network that can learn without human oversight and handle more involved challenges, such as detecting fraud or money laundering. 

In both ML and DL models, explainability (even at a high level) is generally not feasible considering the complex and data-saturated nature of each system. To achieve any level of explainability, you need some semblance of human knowledge within your model. 
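
As a rough illustration of the problem (a sketch assuming scikit-learn and synthetic data, not any particular production system), even a small neural network boils its “reasoning” down to matrices of learned weights that carry no human-readable meaning:

    # Illustrative only: a small neural network trained on synthetic data.
    # Its internal "logic" is nothing but arrays of learned numeric weights.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))            # 200 samples, 5 input features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden pattern the model must infer

    model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    model.fit(X, y)

    print(model.predict(X[:1]))              # an output, but why this output?
    for i, weights in enumerate(model.coefs_):
        print(f"layer {i}: weight matrix {weights.shape}")  # only numbers inside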

Knowledge Is the Answer to Explainability

Symbolic AI is the only approach that adds knowledge to the AI equation. It is based on high-level, “human-readable” representations of problems, logic and knowledge. Though less hyped than ML, symbolic AI has proven essential to successful natural language understanding (NLU) technology. This branch of AI understands unstructured text and uses it to facilitate human-computer interaction. 
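
For a sense of what “human-readable” means in practice, consider a hypothetical handful of symbolic rules for routing customer messages. The categories and keywords below are illustrative assumptions, but anyone can read, audit and edit them:

    # Hypothetical symbolic rules for routing customer messages.
    # Each rule is explicit, human-readable knowledge.
    RULES = [
        ("billing", ["invoice", "charge", "refund", "payment"]),
        ("technical", ["error", "crash", "bug", "not working"]),
        ("account", ["password", "login", "sign in"]),
    ]

    def classify(text: str) -> tuple[str, str]:
        text = text.lower()
        for category, keywords in RULES:
            for keyword in keywords:
                if keyword in text:
                    # the matched keyword *is* the explanation
                    return category, f"matched keyword '{keyword}'"
        return "general", "no rule matched"

    print(classify("I was charged twice on my last invoice"))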

While symbolic and machine learning approaches are often segmented and viewed as alternatives to one another, they are more complementary than you might think. In fact, a hybrid model that combines symbolic AI and ML has proven exceptionally valuable to the enterprise, especially when it comes to explainability. 

Doing More with Responsible AI

All ML models rely on large volumes of data to train themselves and learn over time. However, because ML mechanisms exist in a black box, the source of an error cannot be pinpointed. When a problem arises, the entire system must be retrained, a costly and time-consuming endeavor. 

On the other hand, symbolic systems are founded on a rules-based approach. Because these rules are visible for all to see, you gain the transparency to understand how decisions are made. This also makes it easy to identify errors and quickly establish new rules to rectify issues. Thus, it is all explainable. By combining ML with symbolic AI in a hybrid model, you can bring together the best of both worlds: a human-like understanding of language with the data processing capabilities of ML. 
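
Here is a minimal sketch of how such a hybrid might be wired together, reusing the hypothetical classify() rules from the earlier sketch and a toy scikit-learn model as the ML side. The components and training data are assumptions, not a prescription; the point is that every answer reports where it came from.

    # Hypothetical hybrid pipeline: explainable symbolic rules run first,
    # and a trained ML model only covers what the rules cannot.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy training data for the ML fallback (purely illustrative)
    texts = ["my card was charged twice", "the app crashes on launch",
             "reset my password please", "do you ship internationally?"]
    labels = ["billing", "technical", "account", "shipping"]

    vectorizer = TfidfVectorizer().fit(texts)
    ml_model = LogisticRegression(max_iter=1000).fit(vectorizer.transform(texts), labels)

    def hybrid_classify(text: str) -> dict:
        category, reason = classify(text)  # symbolic pass (rules sketched above)
        if category != "general":
            return {"label": category, "source": "symbolic rule", "why": reason}
        probs = ml_model.predict_proba(vectorizer.transform([text]))[0]
        return {"label": ml_model.classes_[probs.argmax()],
                "source": "machine learning",
                "why": f"model confidence {probs.max():.2f}"}

    print(hybrid_classify("I can't sign in to my account"))  # symbolic rule fires
    print(hybrid_classify("how long does delivery take?"))   # falls back to ML

When an error surfaces in a setup like this, the source field tells you immediately whether to edit a visible rule or retrain the model, rather than leaving you to guess.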

Enterprises are responsible for the way their systems behave. They must know exactly how they work and be capable of discussing issues with anyone from prospects to customers to a select user base. Their success with this depends a great deal on the technology they adopt. 

Hybrid AI paves the way for responsible and explainable AI, and it is an approach that will take your efforts much further, faster. 
