Interview with Charles Simon, CEO and Founder, FutureAI

FutureAI has recently launched Brain Simulator II, an experimental platform to advance the research and development of Artificial General Intelligence (AGI).

We thank Charles Simon for taking part in this Q&A and sharing his views on AGI, his predictions for future developments in the field, and his plans for FutureAI.

What is your background and how was FutureAI started?

I began with Electrical Engineering/Computer Science and have spent most of my career managing teams and developing software. My experience developing one of the first digital EEG (brainwave) systems, coupled with several artificial intelligence projects, led me to a number of conclusions about the future of artificial intelligence, which I explain in my 2018 book, “Will Computers Revolt?”.

One of the primary issues with today’s AI is its basic lack of understanding, a capability found in any three-year-old. Emulating this very basic capability could dramatically improve AI’s usefulness and lead toward Artificial General Intelligence (AGI).

What is FutureAI’s mission?

I founded FutureAI to demonstrate and prove many of the principles outlined in the book and answer related questions.

For example, if we create a system which can play with blocks, will it learn to reason and plan the way a child might? Can the abilities of such a system be transferred more generally to AGI? These are huge unknowns, and creating a not-for-profit development company is the best way to address this type of long-term development.

Tell us about FutureAI’s recent launch of Brain Simulator II, what problems it solves, and what opportunities it brings.

The Brain Simulator II provides an experimental platform for trying out new concepts in AGI. A basic neural simulator underpins the system, with modules on top handling higher-level computation.

As an example, the module which allows the virtual entity, “Sallie”, to navigate mazes is only a few hundred lines of code on top of the platform, which provides the Knowledge Store, vision, and other senses. Building on this platform makes the navigation code easier to develop, and it taught us that navigating a maze is basically the same process as any goal-based task, such as learning to understand words; the maze system shares much of its code with the module which associates words with behaviors.
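To make that concrete, here is a rough Python sketch of the idea, a thin goal-seeking module layered over shared platform services. The names (`ModuleBase`, `platform.neighbors`, `ToyPlatform`) are invented for illustration and are not the actual Brain Simulator II API:

```python
from collections import deque

# Illustrative sketch only; class and method names are invented and do
# not reflect the actual Brain Simulator II API.

class ModuleBase:
    """A module receives the shared platform (senses, Knowledge Store, ...)."""
    def __init__(self, platform):
        self.platform = platform

class MazeNavigator(ModuleBase):
    """Maze navigation as generic goal-based search.

    The same breadth-first loop could drive any goal-based task, e.g.
    searching word-to-behavior associations instead of maze cells.
    """
    def plan(self, start, goal):
        frontier = deque([[start]])   # candidate paths, shortest first
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path           # first route found is shortest
            for cell in self.platform.neighbors(path[-1]):
                if cell not in visited:
                    visited.add(cell)
                    frontier.append(path + [cell])
        return None                   # no route known yet

# Toy usage with a hand-built maze graph standing in for the platform:
class ToyPlatform:
    grid = {"A": ["B"], "B": ["A", "C"], "C": ["B", "goal"], "goal": ["C"]}
    def neighbors(self, cell):
        return self.grid[cell]

print(MazeNavigator(ToyPlatform()).plan("A", "goal"))
# ['A', 'B', 'C', 'goal']
```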

What are the technical challenges you currently face at FutureAI?

The real problem is that no one knows the final answer to AGI. Accordingly, we are writing experimental code which leads to numerous dead-ends and trying out multiple techniques as we learn more about solving the problems.

Further, we are learning that many problems have common solutions, but we discover that only after solving many problems individually; then we rewrite to consolidate lots of code. This type of programming is much slower than application development, where you have a specified target and a known algorithm.

What is the company’s biggest achievement in the last 12 months that you are most proud of?

The development of the Universal Knowledge Store (UKS) module within the Brain Simulator. The UKS allows the storage of any type of information in terms of the links between data items; within it, an abstract Thing is referenced by its visual appearance, the words used to describe it, and any other properties which make it unique.

Unlike a narrow AI, the UKS system learns everything in the context of everything else; just as a three-year-old learns everything based on information she already knows. Sequences of words can represent poetry. When the poem is retrieved from the UKS, all the contexts of the multiple nuances of the words and the images they evoke can likewise be retrieved. I contend that having all this context is one of the bases of true understanding.
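A minimal sketch of the idea in Python (hypothetical, not FutureAI’s implementation): a Thing is defined entirely by labeled links to other Things, so retrieving one item brings its whole context with it:

```python
# Hypothetical sketch of a UKS-style store; not FutureAI's code.

class Thing:
    """An abstract item defined only by its links to other Things."""
    def __init__(self, label=None):
        self.label = label   # e.g. a word form; purely a convenience
        self.links = []      # list of (relationship, Thing) pairs

    def add_link(self, relationship, other):
        self.links.append((relationship, other))
        other.links.append(("inverse-" + relationship, self))  # back-link

    def related(self, relationship):
        return [t for r, t in self.links if r == relationship]

# An "apple" Thing referenced by its color, shape, and the word for it:
apple, red, round_ = Thing("apple"), Thing("red"), Thing("round")
apple.add_link("has-color", red)
apple.add_link("has-shape", round_)

# Retrieving the Thing surfaces its full context, not an isolated fact:
print([(rel, t.label) for rel, t in apple.links])
# [('has-color', 'red'), ('has-shape', 'round')]
```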

Tell us about your book “Will Computers Revolt?”.

The book explains the overall philosophy behind the system. From biology, we know that the human mind consists of a relatively small number of unique algorithms implemented across many billions of neurons. The book explains that to build an AGI, you must start with a robotic system; no system can understand that objects exist in reality unless the AGI has multiple senses and the ability to manipulate the objects.

To be able to plan, the AGI must have an internal model of its surroundings so it can think, “If I do this, that will happen.” If an AGI is not in a physical environment (either real or simulated), it can never learn about the passage of time or cause and effect.

The premise of the book is, first, that developing these capabilities is well within the scope of the coming decade and, second, that with these capabilities today’s AI will catapult to a new level toward AGI.

Further, the book makes the case (which is being borne out by current development) that the computational capacity of the human brain is much lower than most estimates. Perhaps your mind can learn tens of millions of things. Very soon, we’ll have computers which can put a million things in a UKS. Perhaps that is sufficient to manifest actual intelligence. We just don’t know yet.

How do you view concerns that AGI may pose a threat to human existence?

Like genetic engineering or nuclear technology, AGI offers huge potential benefits but also some degree of risk—particularly the risk of abuse by human developers.

In AGI more than other fields, we should expect to get back what we put in. If we create AGIs to be explorers and innovators, we will achieve huge rewards. If we instead create AGIs to be machines of conquest, we should reasonably expect that at some time in the future, they might choose to conquer us.

Right now, the choice is up to us.

What have been the most relevant breakthroughs in the development of AGI in recent years, and what do you foresee for the coming years?

AIs have made huge progress toward AGI in terms of human interaction. Unfortunately, the idea that you can simply give an AI enough training for it to achieve AGI is held back by the basic conceptual issues mentioned above.

We are beginning to see applications which depend on multiple senses and huge advances in robotics will spill over into AGI. I should also point out that AGI is not an all-or-nothing proposition. In coming years, we’ll see various facets of AGI introduced and added to existing applications and we’ll argue about whether or not it is true AGI.

Eventually an AGI will exist which outstrips human abilities in most areas and we’ll grudgingly admit that our creations are smarter than we are.

What are your future plans with Brain Simulator II and FutureAI?

In the near term, we’re adding the ability for Sallie to manipulate objects in her environment and learn basic physics: “When I push on an object here, it moves there.” Armed with this, Sallie will be given goals, such as assembling various shapes, which will require planning and forethought.
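A minimal sketch of that kind of planning (hypothetical Python, not FutureAI’s code): given a learned forward model that predicts “if I do this, that will happen,” planning becomes the same goal-based search as the maze example above, only over predicted states rather than observed maze cells. The `predict` function here is a stand-in for physics the agent has learned:

```python
from collections import deque

# Hypothetical sketch; predict() stands in for learned physics.

def plan(start, goal, actions, predict):
    """Shortest action sequence reaching `goal`, found by simulating
    each action with the forward model predict(state, action)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in actions:
            nxt = predict(state, action)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Toy usage: a block on a line; each push moves it one position.
moves = {"push-left": -1, "push-right": +1}
predict = lambda pos, act: pos + moves[act]
print(plan(0, 3, list(moves), predict))
# ['push-right', 'push-right', 'push-right']
```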

In the longer term, we hope to make an impact on the overall AI industry by showing how low-level AGI concepts can have far-reaching benefits.
