The Return of Symbolic Reasoning…and Why It Never Left

Amid the current clamor over the ‘newfound’ efficacy of composite Artificial Intelligence (which the analyst community has embraced) and the impending ascendance of neuro-symbolic AI lies one simple, notable fact with ramifications for organizations in nearly every industry.

Symbolic reasoning is no longer being overlooked; it is now actively sought out to do what it arguably does best: solving an array of natural language technology problems by fostering an understanding that surpasses even that of machine learning.

Granted, neuro-symbolic AI and composite AI are predicated on pairing the statistical and knowledge-based sides that have always comprised AI as a formal discipline. But the broadening acknowledgement that symbolic reasoning is vital to these endeavors, across the multiple dimensions of natural language technologies, is critical to organizations’ ability to actually master these use cases, from enhanced search to text analytics.

According to expert.ai CTO Marco Varone, the most credible “approach to language understanding uses semantics and symbolic reasoning. Language understanding is for sure the most complex problem in Artificial Intelligence.”

Because of the inordinate complexity of language understanding with AI, the staples of symbolic reasoning (creating formal definitions and terms for enterprise knowledge, implementing taxonomies, and arranging this information in a knowledge graph for machine reasoning) are well suited to addressing it.

This combination “is the best way to get to the level of precision that you need when you move from a very shallow understanding of any type of concepts or content to something that is more deeper,” Varone commented.

Linguistic Complexity

It’s easy to grasp why Varone characterized language understanding as the most exacting problem for AI. Computer vision tasks such as image recognition, for instance, are comparatively simple because they’re objective: a chair is always a chair. Language, however, carries an intrinsic subjectivity that makes understanding it harder, particularly for computers. “For any software implementation, language is really tough,” Varone noted. “It’s something that’s very rich, with very different contexts, meanings, a lot of ambiguities, and very rich amounts of information.” Specific challenges computers face in comprehending natural language include:

  • Linguistic Heterogeneity: The sheer diversity of languages complicates language understanding tasks. “Language is different in every country,” Varone remarked. “In some countries, some individual regions have their own language.”
  • Cultural Implications: There are also cultural nuances associated with language, from dialects to different meanings for the same term, that increase the difficulty of understanding it. Consequently, “there are many elements: not only the language but the culture that is reflected in the language,” Varone posited.
  • Bilingual Issues: Other complications for computational understanding of natural language include applications in which more than one language is used at a time, whether for translation or a bilingual speaker shifting between languages.

Knowledge Graphs

The resurgence of symbolic reasoning as a reliable approach to these and other Natural Language Understanding issues is partly attributable to the growing popularity of knowledge graphs, which seemingly populate nearly every vendor’s solution these days, from data preparation to analytics. Quintessential knowledge graphs employing the semantic technologies Varone mentioned are integral to symbolic reasoning. They contain knowledge about specific domains or, in ideal cases, the world itself, for applications of Natural Language Processing.

“To represent the knowledge we created a knowledge graph, which is our representation of the knowledge of the world,” Varone explained. “And, you can add specific rules. Symbolic reasoning is based on rules that can get very good results in a very lean and fast way.” With this methodology, enterprise knowledge at the taxonomy or terminology level is stored in the knowledge graph, and organizations create rules that reason over it.
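The rules-plus-graph methodology Varone describes can be sketched in a few lines. The graph below stores knowledge as subject–relation–object triples, and a single symbolic rule (transitivity of "is a") lets the system infer facts it never stored. The entities, relations, and rule here are hypothetical illustrations, not expert.ai's actual representation.

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
# These entities and relations are illustrative assumptions.
KG = {
    ("golden_retriever", "is_a", "dog"),
    ("dog", "is_a", "animal"),
    ("dog", "has_part", "tail"),
}

def is_a(entity, category, kg=KG):
    """Symbolic rule: `is_a` is transitive, so reasoning walks up the taxonomy."""
    if (entity, "is_a", category) in kg:
        return True
    return any(
        is_a(parent, category, kg)
        for (subj, rel, parent) in kg
        if subj == entity and rel == "is_a"
    )

print(is_a("golden_retriever", "animal"))  # True: inferred, not stored as a triple
```

The payoff of the rules-based style is visible even at this scale: the fact that a golden retriever is an animal is derived by the rule, in a "lean and fast" traversal, rather than enumerated in the data.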

Thus, language understanding systems comprehend the terms they encounter and, significantly, their relationships and connections to other terms. Users can create specific rules for certain applications like customer service, for example, in which text analytics or even speech recognition systems can understand customers’ questions and provide answers. This rules-based symbolic approach to AI underpins everything from conversational AI to natural language search, and is renowned for its accuracy. “The knowledge graph that we did for the world is more or less the same for every language,” Varone indicated. “And then there is the linguistic part where you have the specific grammar of the language, the specific dictionary, and also some specific semantic elements.”
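The customer-service example above can be sketched as a handful of rules written against concepts rather than surface terms: each word of a question is normalized upward through a small taxonomy, so one rule covers every term beneath a concept. The taxonomy, rules, and canned answers below are hypothetical illustrations, not expert.ai's product.

```python
# Hypothetical taxonomy: each term maps to its broader concept.
TAXONOMY = {
    "visa": "card",
    "mastercard": "card",
    "card": "payment_method",
    "paypal": "payment_method",
    "delivery": "shipping",
}

# One rule per concept; any term that rolls up to the concept matches.
RULES = {
    "payment_method": "We accept all major payment methods.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def normalize(term, taxonomy=TAXONOMY):
    """Walk the taxonomy upward: 'visa' -> 'card' -> 'payment_method'."""
    while term in taxonomy:
        term = taxonomy[term]
    return term

def answer(question):
    """Match each word of the question against the concept-level rules."""
    for word in question.lower().split():
        concept = normalize(word.strip("?.,!"))
        if concept in RULES:
            return RULES[concept]
    return "Sorry, I couldn't match your question to a known topic."

print(answer("Can I pay with Visa?"))  # matches the payment_method rule
```

Because the rule is keyed to the concept, a question about Visa, Mastercard, or PayPal is handled identically; this separation of language-level terms from concept-level rules mirrors Varone's point that the world-knowledge graph stays largely constant while the linguistic layer varies by language.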

Statistical AI, Too

Although aspects of symbolic reasoning have always been used for the most challenging deployments of NLP, it’s been overshadowed more recently by the zeal of machine learning aficionados for the pattern recognition capabilities of connectionist approaches. In truth, organizations get optimal results by pairing these techniques, which can accelerate the knowledge engineering requisite for symbolic reasoning, for example. There’s little doubt that the future of natural language technologies lies in coupling these approaches.

Featured Image: NeedPix
