Michael Campbell is the Founder and CEO of Glossika, a company that applies Artificial Intelligence to aid language learning.
Mr. Campbell is also a committee member in the AI for Education 2019 Initiative, an AI Time Journal initiative which aims to identify and highlight the most beneficial and impactful applications of artificial intelligence technology in education.
In this interview, Mr. Campbell shares insights on how AI can be leveraged in the world of language learning to meet every student’s need, as well as his views on the future of education as new tools are changing the way languages are being taught in the classroom.
What is your background?
I would describe myself as somebody who thrives off of “pain”. I started a company as an immigrant in a foreign country in East Asia, leading a team in a foreign language with one of the most complex writing systems in the world, in which I frequently have to read financial reports and documentation. Due to my pre-immigration status, I had to build the company without proper access to banking, credit cards, outside investment, or loans, because I essentially started my adult life in that location without any previous connections elsewhere. I had to rely on local connections and piece a lot of things together to make it work.
My life is structured around building more pain into it, so I do a variety of mental and physical challenges. I run 10km up and down a mountain every morning. I “enjoy” practicing a variety of complex piano concertos in what free time I have in the evenings. I try to learn an indigenous language every other year or so. And I’m a fan of inspirational people like Wim Hof. I’ll refer later to dealing with “pain” in education.
There are definitely easier ways to start a company, easier ways to be successful, and easier ways to make money. But I believe that we become stronger and more resilient through adversity, and I have always led my life in this way.
Luckily, I’m in the business of language.
I’ve been surrounded by languages ever since I was a child, mostly English, Russian, German, and Italian. By the age of 20, I had trained as an interpreter and translator of Chinese. As I got more exposed to opportunities over the next couple of years, I phased out of it and ended up doing a variety of consulting projects for companies in the Greater China region for about a decade. These projects included solving bottlenecks in company processes, international marketing strategies, cross-cultural communication, and fluency training for sales teams, and I gained a lot of valuable experience during this time.
I love learning for the sake of learning and I read widely on a variety of topics. But human communication, specifically how real-world events get turned into human expression, has always been of particular interest, and a deep understanding of it has allowed me to pick up languages with highly different structures.
I have taken this to an extreme and have acquired several indigenous languages, one of them particularly well. There were only 5 native speakers of the Thao language left in 2010 when I learned it, which is why I put more urgency on acquiring high proficiency in the language. Today, these speakers have all passed away, and I’m one of only a handful of non-native speakers remaining. Linguists claim that this ergative language has more than 5000 years of history and reflects some of the earliest known traits of a very old language family known as Austronesian. Within Thao alone, I can find word roots that connect the language to Thai and to Hawaiian, and even to Greek, especially its word for “I”, which is perhaps the oldest word in the world. The language lacks a lot of words for modern inventions, but it has an extremely rich vocabulary in flora and fauna and a great deal of knowledge tied to it, most of which, unfortunately, has probably been taken to the grave and lost forever.
I still make it a point to get out and learn another indigenous language every other year or so, no matter if I remember it or lose it. The important thing is the adventure and the experiences, the stories that come out of it and the people you meet. Learning an indigenous language is like solving a Sudoku puzzle a thousand times bigger, and it brings so much greater reward. But with each language, the puzzle gets smaller and easier to solve!
People are surprisingly similar the world over. If only the world would open its ears and truly listen to people and attempt to understand those who seem different on the surface, there would be a lot less hatred in the world. One of the most distinctive yet unifying features of humanity is our gift for language. We all possess it, and every language is capable of expressing the full realm of our common experience here on earth. No one language is better than another, and indigenous languages can be full of a vast array of complexity that many mainstream languages have lost over time.
How was Glossika started? What is Glossika’s mission?
One of the problems in language pedagogy, whether that be in the analog or digital sense, is that there is a major lack of organisation in the intermediate stages of language learning to produce results in fluency. This lack of organisation manifests in random approaches and methods that each teacher develops individually.
Interpreters have built a great system for training fluency with extremely high efficiency. Unfortunately, very few people know about these learning methods. Glossika was born out of the idea of making such a system that allowed me to improve some of my more passive knowledge languages, something that I could do in my free time without too much extra effort, but producing similarly efficient results.
In many East Asian countries, I frequently see that individuals may have a working vocabulary of 500-1000 English words, have a rudimentary understanding of English grammar or what they passively remember from their school days, but have absolutely no ability to open their mouth to speak. This is due to the extremely low frequency with which they encounter the language. In fact, pronunciation can vary widely from horrible to quite passable between individuals. No native speaker of English would understand one of these people speaking only single English words, because the pronunciation detracts from any meaning that they intend.
However, with a little bit of training on what they already know, you can turn this person into someone who can communicate with just a few practice sessions. Once they’re able to string what they know into sentences, native speakers can start to understand the meaning even when the pronunciation cannot be fixed. When you change a person’s confidence to communicate, the more they communicate, the better their pronunciation will get. They’ll start to hear themselves and be able to understand others better. It’s the first step in a positive cycle that leads them to communicate more effectively.
So the key to communicating in a foreign language is being able to put together sentences in a way that a native speaker would understand. Pronunciation, vocabulary, even grammar, are secondary considerations. The reason is that well-structured sentences produced by native speakers already use vocabulary and grammar properly. The next step is simply training these through pattern recognition.
In order to put together sentences, we, as the designers of such a system, must have a strong syntactic framework. It is with this framework that we’re able to tease out the patterns that appear in any language, and from those patterns, language acquisition occurs. Without the patterns, everything just seems random and nothing much happens. This can be likened to living in a foreign country or watching all your movies in your target language for decades without making any progress. Random noise equates to random results.
Deciding to do this at scale at Glossika, we needed to solve the hardest problems first, solving for the most challenging pairs of languages, and then using a cascading set of solutions for all other languages below that. Several years ago I developed our syntactic framework based on first-order logic that describes real-world events, and we now use AI to map all the patterns back to the myriad surface varieties of languages. For example, whenever we’re faced with a new syntactic challenge, one of my go-to tests is how it is expressed in widely divergent languages such as Dyirbal (an indigenous language of Australia) or in Pirahã (an indigenous language of Brazil), or any one of many ergative languages in the world. Because if it passes the test, then we’ve probably got the right approach. All other languages fall in line under that cascade.
Adding lots of languages to Glossika is important not because English speakers want to learn lots of languages. We can’t guess where people are from or where they’re going, so to make things less complicated, our platform is an open-door: everything we have is available to everybody. It’s important because people speak lots of languages and want to use their own language to learn English, French, Chinese or other major languages. Once we have a language like Bengali in our system, this means anybody who speaks Bengali can now learn Japanese or Chinese or Arabic on our system. And they do because they often travel to these countries for work.
But more important than the languages is the actual language content. We’re pioneering right now in adding and defining content related to specific industries and building products specifically for these industries.
Who are Glossika’s customers and how do you create value for them? What challenges do they face, or will they face in the future, that Glossika can help them overcome?
The algorithms that we’ve built into the 2019 product-releases benefit people in many ways, and every day I’m discovering new ways in which these benefit different kinds of customers.
Glossika is one of the best tools that a language teacher or tutor could ask for, especially in terms of curriculum development. Glossika solves curriculum problems deep in the database, tailored for every student. Glossika is able to read the complexity of a vast number of sentences, written in any of the languages we support, and sort them by complexity in a way that lets a student acquire that data one step at a time without being overburdened. For example, we can take a whole novel or a textbook in any subject in any language, and our algorithms can sort all of that data from the easiest sentence to the hardest in terms of comprehension. This is because Glossika focuses on language and communication rather than the subject matter, so it applies universally to any subject matter. We just haven’t had the chance to start venturing into other subject matters yet, but this is a ripe opportunity to engage with pioneering companies in other fields of education. Bill Gates mentioned in a June 2019 interview at Stanford that “reading in general”, and building an “agent” that could help students like a tutor, is one of the biggest unsolved problems right now. Gates and others pioneering in these fields tend to think of these as “science” or “math” problems rather than linguistic problems. It’s more the latter than the former.
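The sorting idea can be sketched in a few lines. This is a minimal illustration, not Glossika’s actual algorithm: the scoring function below is a hypothetical stand-in that treats longer sentences with longer words as harder.

```python
def complexity(sentence: str) -> float:
    """Toy complexity score: word count plus average word length."""
    words = sentence.split()
    return len(words) + sum(len(w) for w in words) / max(len(words), 1)

def sort_by_complexity(sentences: list[str]) -> list[str]:
    """Order a corpus from the easiest sentence to the hardest."""
    return sorted(sentences, key=complexity)

corpus = [
    "The cat sat on the mat.",
    "I see.",
    "Notwithstanding the precipitation, the expedition persevered.",
]
print(sort_by_complexity(corpus))
```

A production system would replace `complexity` with a model of syntactic patterns and learner vocabulary, but the pipeline shape (score, then sort, then drip-feed) stays the same.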
Most importantly for teachers and tutors of language in particular, their students practice on Glossika daily and have contact with their teacher once a week. That whole week of practice culminates in an awesome session with the teacher, where the student has a higher rate of engagement and more conversational use of the target language, and where the teacher can answer questions that grew out of curiosity, exposure, and practice of the language over the past week.
Additionally, since Glossika has a variety of business tools in place for teachers and organisations, Glossika can supplement their income and even give them more sustainable income over time as students come and go. One way is by combining Glossika into their set of services, and they can track the progress of all of their students from the interface. The other way is by partnering with Glossika and making commissions from sales through their websites.
Glossika’s 2019 product-release gives teachers and administrators more accountability over what their students are learning: how much of the language they’ve acquired in terms of syntactic patterns and vocabulary, how strong their memories are for each item, and which items are particularly difficult. We’re working on applying machine learning to our algorithms so that they can predict the farthest point in time at which to ask a student a question such that the student is still highly likely to answer correctly, which builds constant positive reinforcement while still ensuring lots of progress. These rates of memory decay are tailored for every item in the database across every single user, so that we can analyse a large dataset for the most difficult items and let that influence the sorting algorithms. This means that over time and through more frequent use, not only do the sorting and the students’ success improve, but redundancies are also removed from the system, resulting in much higher efficiency.
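The scheduling idea can be sketched with a simple exponential forgetting curve. This is a toy model under assumed parameters; Glossika’s actual decay model is not described in the interview.

```python
import math

def recall_probability(elapsed_days: float, strength: float) -> float:
    """Exponential forgetting curve: P(recall) = exp(-t / S),
    where S is the memory strength for this item and learner."""
    return math.exp(-elapsed_days / strength)

def farthest_review(strength: float, target: float = 0.9) -> float:
    """Latest review time (in days) at which the recall probability
    still meets the target -- the 'farthest point in time' to ask."""
    return -strength * math.log(target)

# A stronger memory can wait longer before its next review.
print(farthest_review(10.0), farthest_review(2.0))
```

Fitting a per-item, per-user `strength` from answer histories is where the machine learning would come in; the scheduler itself stays this simple.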
Tell us about the Glossika team.
As of mid-2019, we’re completely focused on our R&D team based in Taipei, Taiwan. This is where our linguists and developers come together to solve a variety of problems. The company functions as a laboratory where we’re always exploring and testing and solving hard problems.
Our linguist team is mostly made up of academics who come and go from the university system, so we have a variety of semester-long projects with those teams. These projects cover specific disciplines such as syntax, semantics, lexicon, phonology, and phonetics. Knowledge of one or more foreign languages is nice to have, but we always hire for exceptional skills in these disciplines paired with computational skills first. This is where all of the AI happens.
Algorithms are so versatile at solving a huge number of problems across many company functions, that I think it’s important for founders or CEOs to understand algorithm design. This is what I spend most of my time on.
I design and integrate the algorithms for all of our systems. Only some of these algorithms incorporate machine learning. One of our master algorithms takes the results of all our linguistics algorithms, which in turn contains ML, sorts that data by complexity, filters it for the user, and delivers the results to the user. As the user learns more, more and more gets filtered, and this adapts to difficulty and memory strength for each learner across every item.
And finally, our developer team focuses on setting all our algorithms to code and linking back-end systems with the front-end interfaces that synchronise and function on a wide variety of devices.
None of this would have been possible without our COO, Sheena Chen, who has worn many hats since the company’s inception and has helped it run smoothly, taking on diverse roles such as running human resources, running the production team, redesigning our whole production line, establishing our marketing efforts, and establishing rules of conduct with customers, employees, and outside communications. Above all, she has taken on the role of product manager, who communicates all my ideas, figures out the UI, communicates with the developers, and turns these ideas into viable products.
AI in Education Q&A
What are the major challenges in the educational system today? How can AI technology help solve or mitigate them?
One of the biggest challenges is curriculum development. In other words, students rely on textbooks and the order in which information is presented to build a framework for understanding complex subjects. There’s debate about how this is done. Many teachers take liberties with which chapters are presented or taught, often constrained by schedules and deadlines, but also confined by their own individual strengths in the specific subject matter. Then there’s the complex interaction with the students. Maintaining healthy classroom environments sometimes requires a degree in psychology. Delivering highly efficient and accurate results for all students can be likened to building a house of cards in a war zone.
And maybe our “definitions” within the subject matter are not the best for comprehension. I have examples from language learning, and I believe there are many more waiting to be discovered across other sciences.
A good example is that “subject” and “object” form a really poor paradigm that a lot of languages don’t adhere to. Even in English, a sentence like “the food is cooking” is quite misleading relative to its real-world counterpart: it doesn’t mean that a carrot or some other piece of food is standing at the stove cooking dinner. This is an example of unergativity appearing in English, as the sentence lacks an ergative agent. So the subject and object paradigm doesn’t really mean a real subject or real object the way we like to “define” them.

Defining languages using ergativity aligns very well with real-world scenarios, and also explains what most teachers like to call “exceptions to the rule”. I don’t accept “exceptions to the rule”. “Exceptions” mean you’re probably teaching with the wrong paradigm. In split-ergative languages like Georgian, our syntactic framework accurately predicts exactly where ergativity marking should and should not occur in Georgian syntax, based purely on our verb classification.

Take, for example, languages with datives, genitives, or any other case marking. There’s no reason a student has to memorise all the verbs in German that take the dative case when the syntacto-semantic algorithms can predict it accurately, group all of these “beneficiary” role patterns together, and let the student learn them through the “intersocial” topic of these verbs. The algorithms figure this out without the need to define German grammar anywhere in our systems. This is because human languages actually pattern themselves off of real-world events in predictable ways, and that has been shown to be true from mainstream languages to indigenous languages the world over.
Defining the grammar is harder than detecting the patterns. That’s why so many of the world’s languages’ grammars are written as PhD dissertations! So, by some miracle, students who read grammars or textbooks will be able to reverse-engineer the real-world situations that these patterns are supposed to match, and that will lead them to fluency? We’ve seen this fail almost 100% of the time all over the world.
The question of literacy often comes up in these arguments. Many educators argue that being literate in French is more important than being fluent. I argue that since the effort to gain fluency and literacy is now equalized, then put fluency first, as its reward is much greater and allows for literacy to co-occur during the acquisition process.
Traditionally, the students who reached fluency did it through alternative methods driven by their own motivations.
AI isn’t magically going to give everybody the motivation to learn. But it will perhaps give us an alternate way of approaching a hard problem. We believe that students who are motivated in reducing their effort and finding more efficient ways to acquire a language, will use AI to enhance their own Human Intelligence (HI), and that’s the role that Glossika plays.
What are the major opportunities brought by AI in Education today?
Solving the problems in linguistics has wide-reaching effects because we’re solving problems in all of human communication. And education relies on communication.
The premise of education is that we present new sentences to students that they’ve never heard, seen, or read before. These sentences need to be constructed in a way that the parts are “n+1”, in other words, comprehensible input: 90% of the data is easy to comprehend, and 10% is something new that can be comprehended within the context of the sentence.
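The “n+1” check can be made concrete. Below is a minimal sketch under simple assumptions (tokenisation by whitespace, a flat known-vocabulary set); a real system would account for syntactic patterns, not just words.

```python
def comprehensibility(sentence: str, known_vocab: set[str]) -> float:
    """Fraction of tokens in the sentence that the learner already knows."""
    tokens = [w.strip(".,!?").lower() for w in sentence.split()]
    return sum(1 for t in tokens if t in known_vocab) / len(tokens)

def is_n_plus_1(sentence: str, known_vocab: set[str], threshold: float = 0.9) -> bool:
    """Comprehensible input: ~90% familiar, ~10% new to infer from context."""
    return comprehensibility(sentence, known_vocab) >= threshold

known = {"the", "cat", "sat", "on", "mat", "and", "dog"}
# 9 of 10 tokens are known -- exactly the 90% "n+1" sweet spot
print(is_n_plus_1("the cat sat on the mat and the dog slept", known))
```

Filtering a corpus through `is_n_plus_1` is what keeps every presented sentence inside the learner’s comprehension window.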
But when students show up to class comprehending less than 50% of each sentence, due to a lack of education elsewhere, a lack of vocabulary, or a lack of fundamental concepts, the teacher faces an insurmountable burden. The students are unable to learn and fall behind the rest of the class. This leads to a vicious cycle that can produce life-devastating results. No student should ever be expected to learn at rates as low as 50%.
On the other side, you have the quick learners. Those who think that 90% is too patronising and prefer 70% to 80%, because they feed off the challenge of acquiring a bigger chunk of learning on fewer data. I recognise that I’m one of these people who love such “pain”, but this is exactly what is hated by most people in the world. They don’t want any pain in their learning. So this is why identifying the threshold of pain in learning is important to delivering a better solution.
For example, machine learning can start with basic assumptions that a user’s pain threshold is above 90%, then adjust it over time as the user picks up the pace and the threshold is lowered, but never crossing that threshold by more than a percentage point.
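A minimal sketch of that adjustment rule, assuming a 90% starting threshold, one-percentage-point steps, and a hypothetical 70% floor for the quickest learners:

```python
def adjust_threshold(threshold: float, answered_correctly: bool,
                     step: float = 0.01, floor: float = 0.70) -> float:
    """Lower the comprehension threshold one percentage point per success,
    giving the learner slightly harder material; ease it back up on a miss.
    Never moves past the floor or above the 90% starting point."""
    if answered_correctly:
        return max(floor, threshold - step)
    return min(0.90, threshold + step)

t = 0.90
for _ in range(5):
    t = adjust_threshold(t, answered_correctly=True)
# after five correct answers, the learner's threshold has eased toward harder input
print(t)
```

A learned model would replace the fixed `step` with per-user estimates, but the invariant is the same: material stays within a percentage point of the learner’s pain threshold.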
Just as there are less painful paths one can choose in building a company, I believe AI gives more people a less painful path to learning. Somebody needs to go through the pain of building the solution, however, because the solution is not easy. Many of the problems we face every day in the company are NP-hard problems: the solution is extremely easy to check, but automating the path to that solution is extremely difficult, and AI can only assist in limited ways.
Take, for example, the Python library spaCy. It can help us detect the split syntactic collocations in a sentence, but only with a less-than-optimal level of accuracy, and we have to hard-code a lot around it in order to tease out the non-subject-object paradigms that we require. It’s great that such tools exist, but it’s equally important to understand their limitations for specific use cases.
Sometimes it’s amusing when companies throw tons of data to see what the machine can “learn” from it. I don’t truly believe this is the way to solve hard problems. Perhaps you can do this with the right frameworks set up, but how can we trust that companies have gone through the “pain” to figure out the frameworks? The last thing we want is self-flying airplanes falling out of the sky no matter how hard the pilots try to stop it.
I frequently see ads for the next revolution in speaking languages: a device that can interpret for you. Sure, I have Google Translate on my phone, which has the same functions. But I use a translator mostly as a tool to enhance my human capabilities: I learn from it and then acquire what I’ve learned for use elsewhere. It can be great in emergencies. But I don’t want to rely on a translator to do everything. Possessing the power of communication in a foreign language within your own body is a very rewarding experience, and I only wish that more people could experience this.
Machine Translation has its limitations. To use it properly, one needs to feed the MT the right sentence structures to tease out the grammar expected of it. Most people don’t know how to do that, which is great for machine learning, of course. But much of what people say is going to get “lost in translation”. There are too many dependencies in human speech, whether anaphora, cataphora, or otherwise, that can lead an MT astray. The only way to speak accurately through MT is to restructure your own speech like that of a robot, everything clearly demarcated and defined unambiguously, and your MT will work wonders. AI is closing the gap, of course, but the amount of data required to close it makes one wonder: the human brain is so much more powerful and can do so much in so little time with so little data. I can take a fraction of that data, sort it into a learning sequence for a human, and give that human the gift of a foreign language for the rest of their life. This takes less time, and vastly less data, than training MT to work properly. AI needs to be positioned properly: empowering “Human Intelligence” and taking over the mundane tasks in our lives.
AI isn’t smart like humans are. If AI were so smart, it would have thought about language education already, created a company on its own and released a solution to the world already. But no, we rely on people to solve these problems, start companies, organise teams, and create solutions. Therefore, we need smarter people, solving harder problems, and stronger Human Intelligence. It’s hard to say whether AI will ever replace the entrepreneur.
How can teachers prepare for an AI-powered education?
Focusing on pioneering Human Intelligence (HI) with AI assistance is the priority here. How far can we take the human brain in solving more complex problems, and how can AI assist us rather than trying to replace us? Continuously solving more complex problems is how we can evolve during our own lifetime, and I’d like to see the average HI this century far surpass the smartest people of the 20th century. There’s no reason a human should have to memorise more than is required to solve difficult problems; it’s absurd to memorise Pi or vast amounts of data unless these are tools that you can access and use on real problems. Basic maths standards, like multiplication tables, are a must. But ingraining the habit of using tools (AI, calculators) to find solutions quickly, a habit that becomes a new internalised grammar for solving ever more complex problems, is the only way we’re going to evolve the next generation into a more powerful HI.
Where do you see AI for education in the next 5-10 years?
There are two things I’m excited about: voice and video.
Bringing video to our platform means that we can train more interpreters for the deaf, in any sign language around the world, at the same time empowering the deaf to learn spoken languages through our script training interfaces.
There’s a lot we can do with voice and video technologies in the near future. I’m very excited about the applications of GAN and what kinds of hard problems we can solve using it.
The following anecdote is also worth mentioning: A video went viral a few years ago of a Norwegian man screaming at his car to call up the music playlist, while the car responds by doing everything except what he wants, resulting in him giving up in exasperation. His accent in English is completely unintelligible to the AI but not difficult for English speakers to understand.
Many large companies have cracked the problem of identifying the needs of a user’s voice input, but only within a certain standard deviation. Personally, as a speaker of multiple Chinese languages (what the rest of the world refers to as dialects), I find it fascinating that my smartphone can understand my Mandarin, but only in the proper accent, and can’t even begin to understand Taiwanese or Hakka. I know local companies in Taiwan that are pioneering these solutions, but I doubt they are scalable. Most companies build very narrow solutions, or narrow AI, for specific problems. (Because that’s less painful, it’s easier, and you make money faster.) The kind of “hard” solution I’d be interested in pursuing, which would definitely take longer, is how to understand any variety of language within a specific group of languages, no matter how much phonological or syntactic variation is involved. This means it would be “adaptable” to any new form of speech it encountered within that language group, without requiring retraining. For example, a system trained on Bantu languages would be able to understand any of a thousand variations (a thousand other languages and dialects) within that family the first time it heard them. Recognising and understanding everybody’s own voice, in turn, empowers everybody in the world to be as unique as they want to be!