Charles Simon on Brain Simulator II, GPT-3 and AGI

We thank Charles Simon of FutureAI for taking part in this second interview (read the first interview here) to dive deeper into the future of Artificial General Intelligence (AGI) and into how FutureAI has progressed in using Brain Simulator II, an open-source software platform, to demonstrate how AGI will emerge.

During the interview, Charles shared valuable insights on AGI, GPT-3 and the future of Brain Simulator II including:

  • The limits to which Artificial General Intelligence (AGI) can stretch
  • The way FutureAI is adapting to the increasing complexity of AGI and Brain Simulator II
  • Industry applications where AGI could deliver real value

We could build out a system today which exceeds the computational power of the human brain.

– Charles Simon, CEO and Founder, FutureAI

How far has Brain Simulator II – “Sallie” – progressed in terms of being able to emulate basic physics? How have things progressed since our last conversation?

For those not familiar with the project, “Sallie” is the name of a simple virtual entity used to test and demonstrate various aspects of AGI development within the Brain Simulator II. In our last conversation, Sallie could navigate mazes, which she learned by remembering “experiences.” For any given experience, you remember aspects of the situation, the action you took, and the outcome which followed. In Sallie’s mazes, the situation is limited to landmarks, the action is which way she turned, and the outcome is the goal (or next landmark) she reached.

New since our last conversation, moveable objects in Sallie’s environment are handled the same way. If you push on an object at its center, it moves; if you push off-center, it also rotates one way or the other. The situation is Sallie’s position relative to the center of the object (which works the same as a landmark), and the outcome is the motion achieved from a push at that position. Sallie learns by experimenting with the object and can then use these experiences to move an object to a goal location.

It looks like these experience triples (situation, action, outcome) will have broad application across many behaviors, such as learning language, understanding basic physics, even playing a rudimentary game of chess. The key will be finding efficient ways of searching the stored situations so Sallie can recognize that the current situation is similar to one encountered previously; then the action with the best outcome can be selected.
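
To make the experience-triple mechanism concrete, here is a minimal C# sketch of storing triples and retrieving the best action for a familiar situation. This is our editorial illustration with hypothetical type and member names, not Brain Simulator II’s actual code, and it uses a simple distance threshold where the real system would need a far more efficient search.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// An illustrative experience triple: situation, action, outcome.
// These names are hypothetical, not Brain Simulator II's.
record Experience(float[] Situation, string Action, float OutcomeValue);

class ExperienceStore
{
    private readonly List<Experience> experiences = new();

    public void Remember(float[] situation, string action, float outcomeValue) =>
        experiences.Add(new Experience(situation, action, outcomeValue));

    // Among stored experiences whose situation is close to the current
    // one, pick the action that led to the best outcome.
    public string? BestAction(float[] current, float maxDistance)
    {
        return experiences
            .Where(e => Distance(e.Situation, current) <= maxDistance)
            .OrderByDescending(e => e.OutcomeValue)
            .Select(e => e.Action)
            .FirstOrDefault();
    }

    private static float Distance(float[] a, float[] b) =>
        (float)Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());
}

class Program
{
    static void Main()
    {
        var store = new ExperienceStore();
        // Sallie at landmark (1, 2) turned left and reached the goal (+1).
        store.Remember(new float[] { 1, 2 }, "turn-left", 1.0f);
        // At the same landmark, turning right led to a dead end (-1).
        store.Remember(new float[] { 1, 2 }, "turn-right", -1.0f);

        // A new situation near the remembered landmark recalls the best action.
        Console.WriteLine(store.BestAction(new float[] { 1.1f, 2.0f }, 0.5f));
        // prints: turn-left
    }
}
```

The same store and search would serve for pushes on movable objects, with the situation being Sallie’s position relative to the object’s center and the outcome being the motion achieved.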

How far do you think Brain Simulator II can stretch AGI given that cognitive abilities are not easy for AI to learn?

At this point, AGI development is all about shortcuts. Even if we had machines which equaled the power of the human brain, we would not be willing to wait three years for our systems to develop the cognitive abilities of a three-year-old. So instead of Sallie’s brain having to figure out how to store an experience triple, or a landmark, or a phrase, we code up a C# module which performs the function. Building shortcuts is essentially the path taken by all AI development.

The question is whether all these shortcuts will merge into something like general intelligence. A key aspect of the Brain Simulator II project is that it supports these shortcuts in a general way by storing everything in its Universal Knowledge Store. That way, functionality like the experience triples can apply to vision, touch, hearing, and the other senses, making possible important basic concepts such as the existence of objects in a physical environment and the persistence of objects over time. Sallie’s ability to move objects in her environment represents a first step toward comprehending cause-and-effect relationships.
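
A minimal sketch of the “store everything in one place” idea: labeled things connected by labeled relationships, so knowledge from vision, touch, and hearing lands in the same structure. The class and member names below are our illustration, not the actual Universal Knowledge Store API.

```csharp
using System;
using System.Collections.Generic;

// A general-purpose knowledge store: labeled nodes ("things")
// connected by labeled links. Names here are hypothetical.
class Thing
{
    public string Label;
    public List<(string Relation, Thing Target)> Links = new();
    public Thing(string label) => Label = label;
}

class KnowledgeStore
{
    private readonly Dictionary<string, Thing> things = new();

    public Thing GetOrAdd(string label)
    {
        if (!things.TryGetValue(label, out var thing))
            things[label] = thing = new Thing(label);
        return thing;
    }

    public void Link(string from, string relation, string to) =>
        GetOrAdd(from).Links.Add((relation, GetOrAdd(to)));
}

class Program
{
    static void Main()
    {
        var uks = new KnowledgeStore();

        // Knowledge from different senses lands in the same store,
        // so one mechanism (and one search) serves all of them.
        uks.Link("ball", "looks", "round");
        uks.Link("ball", "feels", "smooth");
        uks.Link("ball", "is-a", "object");

        foreach (var (relation, target) in uks.GetOrAdd("ball").Links)
            Console.WriteLine($"ball {relation} {target.Label}");
    }
}
```

Because everything lives in one store, a single retrieval mechanism can relate knowledge gathered through different senses, which is the point Mr. Simon makes above.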

The key is the recognition that the brain has only a small number of underlying functionalities, which are applied in different ways to different problems. So if learning a maze can be built from the same functionality as moving an object or learning a command, that represents a great step forward.

With Brain Simulator II growing in terms of complexity, how intensive do you see the computation becoming? Would there be any bottlenecks in the future in terms of infrastructure or resources to host and run AGI?

At some point in the future, yes. Today, we limit the problems to just a few kinds of objects or a few words, so the computational load is modest. Once we progress to thousands of objects, thousands of words, and millions of experience triples, we anticipate needing bigger computers.

That’s why we just completed a conversion of the underlying Neuron Engine to a C++ DLL. This not only increased performance to 2.5 billion synapses processed per second on a single desktop machine, but also opens the door to running the Neuron Engine on a High Performance Computing (HPC) cluster, with each node processing many millions of neurons. The open question is what proportion of synapses connect to distant neurons, because those require machine-to-machine network communication, which would be a serious performance issue. I believe the overwhelming majority of synapses will connect to nearby neurons, so the system should scale well across multiple machines.
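
To illustrate why the local/distant split matters for scaling, here is a back-of-envelope sketch. The neuron count, cluster size, and locality fraction are all assumed figures for illustration, not FutureAI’s measurements.

```csharp
using System;

class Program
{
    static void Main()
    {
        // Hypothetical figures for illustration only.
        long neurons           = 100_000_000; // neurons in the whole simulation
        long synapsesPerNeuron = 1_000;
        int  nodes             = 100;         // machines in the HPC cluster
        double localFraction   = 0.99;        // synapses ending on same-node neurons

        long neuronsPerNode   = neurons / nodes;
        long synapsesPerNode  = neuronsPerNode * synapsesPerNeuron;
        long crossNodePerNode = (long)(synapsesPerNode * (1 - localFraction));

        Console.WriteLine($"Neurons per node:             {neuronsPerNode:N0}");
        Console.WriteLine($"Synapses per node:            {synapsesPerNode:N0}");
        Console.WriteLine($"Cross-node synapses per node: {crossNodePerNode:N0}");
        // Local synapses are in-memory updates; cross-node ones become
        // network messages. The higher the local fraction, the less
        // traffic per step and the better the engine scales.
    }
}
```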

Do you see true AGI being achieved in the near future?

Based on the performance milestones above, we could build out a system today which exceeds the computational power of the human brain. But without the software to make such a system useful, it’s impossible to justify the expense. So the question is: when will AGI software become useful?
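
As a rough editorial back-of-envelope supporting that claim, assuming the widely cited figure of about 10^14 synapses in a human brain and an average firing rate of roughly 1 Hz (both our assumptions, not Mr. Simon’s figures):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Illustrative, widely cited estimates; not exact figures.
        double brainSynapses   = 1e14;  // ~10^14 synapses in a human brain
        double avgFiringRateHz = 1.0;   // rough average spike rate
        double perMachine      = 2.5e9; // synapses/sec per desktop (from the interview)

        double brainOpsPerSec = brainSynapses * avgFiringRateHz;
        double machinesNeeded = brainOpsPerSec / perMachine;

        Console.WriteLine($"Brain-scale load: {brainOpsPerSec:E1} synapse-ops/sec");
        Console.WriteLine($"Machines needed:  {machinesNeeded:N0}");
        // ~40,000 desktops: a large cluster, but comparable in scale to
        // existing HPC installations, which is why the hardware claim holds.
    }
}
```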

There are numerous groups working on AGI, so I anticipate success within the coming decade. AGI won’t spring into being at any specific point in time. Instead, there will be systems with various aspects and various levels of ability, and we’ll argue about whether or not true AGI exists. It won’t be until broad superhuman mental abilities are demonstrated that most of us will agree that AGI exists, and that may be another 10 or 20 years later.

With the gradual emergence of AGI, most of us probably won’t even notice, but we’ll appreciate the benefits. Today, my Alexa seems smarter than a three-year-old. In the future I expect it will be as smart as a twelve-year-old, then an average adult, then a genius, then beyond. At every step along the way, AGI progress will seem like a good idea.

Even then, AGIs won’t be just like humans. We generally think about intelligence in terms of language or logic or mathematics, and we tend to ignore the fact that the overwhelming majority of human behavior is about food, shelter, human interaction, or sex. None of these will be important to an AGI unless it is programmed to impersonate a human, which is a much more difficult challenge than just being intelligent.

There have long been contentious arguments about AGI posing a potential threat to human existence. Do you foresee any such problems arising as the use and research of Brain Simulator II progresses?

I’m not concerned about Terminator-style robots trying to eradicate us. A malicious AGI would be much more likely to manipulate elections and install leaders who would do its bidding. With that kind of power at its disposal, an AGI would not be motivated to resort to messy, costly, human-like warfare.

Whether or not such a system poses a threat to human existence depends entirely on how we handle the next few decades. If mankind gets its house in order, particularly on the environmental front, then we can provide benefits to AGIs at the same time as they provide benefits to us.

I see the development of AGI as inevitable, and my participation makes it possible to help keep it benign. Instead of weaponizing AGI or using it to game the stock market, I look to setting goals which will promote an AGI’s cooperation with us and encourage our participation with it.

What is the roadmap ahead for Brain Simulator II, and what are some of the industry applications which could benefit from its use?

Consider the ways in which today’s narrow AI applications could benefit from true understanding. In the field of machine vision, which underpins self-driving vehicles, facial recognition, security, and more, consider how different the applications would be if the software understood that things and people are entities which exist, not just arrangements of pixels.

Consider a word processor which understands underlying meaning. GPT-3, for example, has remarkable abilities, but it has no understanding and still meanders off onto tangents unrelated to the topic. A GPT-3 system with added understanding would be extremely valuable.

Since most of the shortfalls of today’s AI are in capabilities common to any three-year-old, development is focused on those. When Brain Simulator II succeeds in demonstrating “understanding” at any level, many general applications will become apparent.

I’m not concerned about Terminator-style robots trying to eradicate us. A malicious AGI would be much more likely to manipulate elections and install leaders who would do its bidding.

– Charles Simon, CEO and Founder, FutureAI
