The One-Shot Learning Phenomenon

Machine learning detractors frequently cite three limitations of this statistical expression of Artificial Intelligence. These alleged drawbacks include:

  • Comprehension: For natural language technology and other applications, naysayers claim machine learning algorithms don’t actually understand the underlying language they parse—certainly not the way that systems predicated on AI’s knowledge-base tradition do.
  • Applicability: Another frequent criticism of machine learning is that, despite its cross-domain utility, it’s applicable to a very narrow problem set (pattern recognition).
  • Training Data: The most frequently cited shortfall of machine learning is the exorbitant amount of training data required to teach models to solve business problems. This shortcoming is exacerbated by deployments of supervised learning, in which users must devise labeled examples with which to train models.

Nonetheless, there are several statistical AI approaches that are either specifically designed to reduce the quantities of training data necessary to inform effective models, or do so as a byproduct of their particular area of focus. Reinforcement learning, for example, isn’t predicated on training data; it simply involves an agent dynamically interacting with an environment. Transfer learning was conceived as a means of ‘cheating’ the training data requirements for models to learn well, as the sketch below illustrates.
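To make the transfer learning idea concrete, here is a minimal sketch of one common form of it, fine-tuning a pretrained network, assuming PyTorch and torchvision (the article names no framework, and the data loader name is hypothetical):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Reuse a network pretrained on ImageNet; its generic visual
    # representations were already learned from millions of images.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor so the small dataset
    # only has to teach the final layer, not the whole network.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head with one sized for the new task
    # (num_classes is an assumption about the business problem).
    num_classes = 3
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Train only the new head on a small labeled set; the loader
    # below is hypothetical.
    # for images, labels in small_labeled_loader:
    #     optimizer.zero_grad()
    #     loss = loss_fn(model(images), labels)
    #     loss.backward()
    #     optimizer.step()

Because only the final layer’s weights change, a few dozen labeled images can stand in for the hundreds of thousands a network trained from scratch would demand.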

Certain types of representation learning—specifically one-shot learning—were seemingly devised to “reduce dramatically the amount of training data,” commented Slater Victoroff, Indico Data CTO. In doing so, one-shot learning and other such approaches make machine learning much more accessible for enterprise purposes, while perhaps expanding this technology’s scope for solving business problems.

One Labeled Example

There’s not necessarily a homogeneous approach to one-shot learning. It’s useful for deep learning applications and has a number of proponents for computer vision use cases, too. The brilliance of this method is that, in certain implementations, it simply combines the two requisites for supervised learning: training data and labeled examples of model outcomes. Consequently, “Really what you’re doing is using the label as your one example,” Victoroff revealed.

This approach accelerates model learning because a deep neural network for computer vision, for example, can learn to recognize a pedestrian from that label alone, sans the surplus of training data required by other techniques. Granted, there are instances in which more than one label/example may be involved with this approach, such as what’s termed few-shot learning. However, as Victoroff observed, “They’re not that different because people aren’t that strict about it. You can give it one example, or zero, or four, and it’s all considered the same sort of thing.”
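One common way to implement this label-as-example idea, offered here as a plausible reading rather than a confirmed account of any vendor’s method, is prototype matching in an embedding space: each class is summarized by the embedding of its one (or few) labeled examples, and new inputs are assigned to the nearest prototype. The sketch below assumes PyTorch and a pretrained embedding network named embed.

    import torch
    import torch.nn.functional as F

    def build_prototypes(embed, support_images, support_labels, num_classes):
        """Average the embeddings of the few labeled examples per class."""
        z = embed(support_images)  # (n, d) embeddings of the support set
        return torch.stack([
            z[support_labels == c].mean(dim=0) for c in range(num_classes)
        ])                         # (num_classes, d) class prototypes

    def classify(embed, query_images, prototypes):
        """Assign each query to the class with the nearest prototype."""
        zq = embed(query_images)   # (m, d) embeddings of the queries
        sims = F.cosine_similarity(
            zq.unsqueeze(1), prototypes.unsqueeze(0), dim=-1
        )                          # (m, num_classes) similarity scores
        return sims.argmax(dim=1)  # predicted class per query

With one support image per class this is one-shot learning; with four it is few-shot, which is why, as Victoroff notes, practitioners treat the variants as the same sort of thing.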

Cognitive Psychology Roots

The crux of one-shot learning largely lies in the representation that is the basis of the example or the label involved. The better, more complete, and more representative the example is of the task the model is designed to learn, the more viable one-shot learning becomes. Like many aspects of AI, the foundation of this concept can be traced back to certain elements of cognitive psychology. Long before its machine learning parallel in contemporary data science, “there’s a phenomenon in psychology called one-shot learning,” Franz CEO Jan Aasman mentioned.

The relationship between the starkness of a single example’s representation and its impact on learning is readily gleaned from a driving anecdote. “I was one time driving with my future wife and two friends in a tiny car in Germany,” Aasman recounted. “I was tired and I wanted to switch lanes, and the entire car started yelling and screaming at me because there was a truck coming that I hadn’t seen. For the rest of my life, I’m now looking over my shoulder to see if there’s a car.”

Robust Representations

When applied to machine learning, representations this convincing require only a scant number of examples to train models. The reduction in the number of examples enables organizations to expedite facets of the data science process and spur models into production to detect, for example, anomalies like product defects, security concerns, or public safety hazards such as wildfires. The decreased time to insight also lowers the cost and effort of creating models.
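As a hypothetical illustration of such a deployment (a one-class reading of the approach, not anything the article’s sources describe), a defect detector can flag any item whose embedding sits too far from a handful of known-good references; the embed network and the threshold are assumptions:

    import torch
    import torch.nn.functional as F

    def is_anomalous(embed, item_image, normal_embeddings, threshold=0.8):
        """Flag an item whose embedding is far from all 'normal' references."""
        z = embed(item_image.unsqueeze(0))                # (1, d) embedding
        sims = F.cosine_similarity(normal_embeddings, z)  # (n,) similarities
        return sims.max().item() < threshold              # far from every normal

A handful of embeddings of defect-free products is the entire “training set” here, which is what drives down the time, cost, and effort figures above.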

These boons become possible when the psychological underpinnings Aasman referenced are successfully transferred to cognitive computing as effective representations for models to learn. For example, “Some people think that if I drive fast enough over a pothole, I will fly over the pothole,” Aasman laughed. “And so you fly over it and you break your axle or your tire. In this case, we’d talk about one-shot learning.”

Fewer Examples, Better Learning

With the right representation, machine learning models can learn from such singular examples, too. The increased efficiency of this method is more than simply intriguing. It’s a practical way of transforming machine learning from an arduous process requiring hundreds of thousands of examples (and hours of training models) to a celeritous one with just a fraction of the labels and examples otherwise necessary.

It also helps build the case for representation learning.
