Interview with Forrest Iandola, CEO and Co-Founder of DeepScale

Forrest Iandola, CEO and Co-Founder of DeepScale

Forrest Iandola is the CEO and co-founder of DeepScale, a company that develops AI perception software for driver-assistance and autonomous driving, with a focus on implementing efficient neural networks on automotive-grade processors.

DeepScale has recently been listed in the AI Time Journal TOP 25 Artificial Intelligence Companies 2018.

DeepScale was founded by Forrest Iandola, after he completed a PhD in EECS at UC Berkeley, and by Kurt Keutzer, an EECS professor at UC Berkeley.

The company, based in Mountain View, USA, now provides automotive OEMs and tier-1 suppliers with autonomous driving software that is lightweight and efficient to run, thereby drastically reducing hardware resource requirements.


How was DeepScale started?

We started about three years ago, and before that, my co-founder and I were both at the University of California, Berkeley.

I did a PhD at Berkeley in Computer Science, working on neural networks and artificial intelligence.

My specialty was in how to reduce the computing resources required for deep neural network computations, and how we could deploy deep neural networks on very small computing platforms like smartphones and internet of things devices.

As I was getting ready to graduate, I was starting to look at what business I might be able to go into after developing these unique embedded deep learning technologies.

One of the papers I wrote in grad school was called SqueezeNet. SqueezeNet kind of went viral, if you will. It was a very small deep neural network, and it became very popular. SqueezeNet enabled people to put deep neural networks on smaller devices than they previously could.
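For readers curious about what made SqueezeNet so compact, here is a minimal PyTorch sketch of the “Fire” module described in the SqueezeNet paper: a 1x1 “squeeze” layer shrinks the channel count before a mixed 1x1/3x3 “expand” layer, which keeps the parameter count small. The layer sizes below match the paper’s first Fire module; the code itself is an illustrative reconstruction, not DeepScale’s implementation.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Fire module: a 1x1 'squeeze' layer feeding a mixed 1x1/3x3 'expand' layer."""
    def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
        super().__init__()
        # Squeeze: 1x1 convolutions reduce the channel count (and thus parameters).
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        # Expand: a mix of cheap 1x1 and more expressive 3x3 convolutions.
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: SqueezeNet's first Fire module squeezes 96 channels down to 16
# before expanding back out to 64 + 64 = 128 output channels.
fire2 = Fire(96, squeeze=16, expand1x1=64, expand3x3=64)
out = fire2(torch.randn(1, 96, 55, 55))  # -> shape (1, 128, 55, 55)
```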

We created a lot of interest in squeezing neural nets. At the same time, we were starting a new research group at Berkeley called “Berkeley Deep Drive”, which is focused on enabling technologies for autonomous vehicles. My former PhD advisor, Kurt Keutzer, and I were both involved in this new group, and our sponsors in the lab were mostly automotive companies: carmakers, tier-1 suppliers, and semiconductor suppliers. Key industry players like Ford, Bosch, and Samsung were funding the group’s research.

We got a lot of access to work on leading-edge problems with major automotive stakeholders. They told us about the challenges they were having with autonomous vehicle systems that needed several servers in the trunk to run the artificial intelligence; that was the status quo at the time. They wanted to find a path to lower costs and produce vehicles with AI that would be affordable and profitable. That was the epiphany that got DeepScale started.

Now, we design and deploy very small deep neural networks for computer vision and machine perception in cars. Our value proposition is building these AI solutions in a way that requires substantially fewer computing resources and enables deployment on a variety of edge processors.

Tell us about DeepScale’s product, Carver21

Carver21 – Source: DeepScale

Carver21 is a piece of software that runs on very small processors and tells the car what’s going on around it. It is the first product we’re bringing to market, and it essentially enables cars to see things—not just pixels. It’s a software stack that takes the camera data and runs it through a collection of neural nets on a small embedded processor. Carver21’s output then captures vital information like the identification and location of vehicles and pedestrians, where the lanes are, and a variety of other things you need to know about your environment to drive safely.
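Carver21’s actual interface isn’t public, but purely as an illustration of “objects, not pixels,” the per-frame output of a perception stack like this might look something like the following sketch. All names and fields here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    label: str         # e.g. "vehicle" or "pedestrian"
    confidence: float  # detection score in [0, 1]
    box: Tuple[int, int, int, int]  # (x, y, width, height) in image coordinates

@dataclass
class PerceptionFrame:
    """Structured description of the scene, rather than raw pixels."""
    detections: List[Detection] = field(default_factory=list)
    lane_boundaries: List[List[Tuple[float, float]]] = field(default_factory=list)
    free_space: List[Tuple[float, float]] = field(default_factory=list)  # drivable-area polygon

def perceive(camera_frame) -> PerceptionFrame:
    """Run one camera frame through the perception networks (placeholder body)."""
    ...
```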

How much more efficient are DeepScale’s neural networks compared to traditional ones?

It varies depending on the application, but in some cases, we’ve gotten 10 times more efficient, sometimes even 100 times more efficient. And the particular resource that matters varies with the situation: some computing platforms have lots of memory bandwidth but not much compute; other platforms have a lot of compute but not much memory bandwidth. Which of those resources we optimize for depends on where we want to run the application.
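One common way to reason about this compute-versus-bandwidth trade-off is the roofline model: a layer’s arithmetic intensity (operations per byte of memory traffic) determines which platform resource limits its throughput. The sketch below, using made-up platform numbers, shows how the same depthwise convolution can be bandwidth-bound on one hypothetical chip and compute-bound on another.

```python
def depthwise_conv_intensity(h, w, c, k=3, bytes_per_elem=1):
    """Ops per byte of memory traffic for a depthwise 3x3 convolution
    (same-padded, stride 1; int8 activations and weights by default)."""
    ops = 2 * k * k * c * h * w                      # one k x k filter per channel
    traffic = bytes_per_elem * (h * w * c            # read input activations
                                + k * k * c          # read weights
                                + h * w * c)         # write output activations
    return ops / traffic

def limiting_resource(intensity, peak_gops, bandwidth_gbps):
    """Roofline model: which resource caps throughput for this layer?"""
    ridge = peak_gops / bandwidth_gbps  # ops/byte where the two limits cross
    return "compute" if intensity > ridge else "memory bandwidth"

# Two hypothetical embedded platforms with opposite resource balances:
ai = depthwise_conv_intensity(h=64, w=64, c=128)                # ~9 ops/byte
print(limiting_resource(ai, peak_gops=200, bandwidth_gbps=5))   # -> memory bandwidth
print(limiting_resource(ai, peak_gops=20,  bandwidth_gbps=25))  # -> compute
```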

In some cases, we’ve gotten 10 times more efficient, sometimes even 100 times more efficient

How do automotive OEMs integrate your technology?

We provide the software. In a mass-produced vehicle, each processor can typically run several different applications at once. You can have a processor that runs a perception system alongside other tasks.

OEMs and tier-1s might want to build pieces of perception solutions in-house, source solutions from various suppliers and integrate them, or buy a more complete solution from DeepScale. That’s all perfectly fine: the modularity of our solutions has been a critical strategy that has resonated with our customers.

How do you position your offer against competitors?

I guess you could contrast our approach with Mobileye, the outright leader in automotive computer vision. With Mobileye, you really have to buy the complete solution. If you look at how they’ve developed their product, you have to buy their camera, their processor, and their software all in one bundle. If you only want a small piece of that technology or you want to use it in a different way, it’s not really set up for that. That was the only way to sell solutions into the automotive market when Mobileye came on the scene about 20 years ago. We see the automotive value chain shifting toward more open platforms that accept, and even desire, solutions from third-party suppliers, which lets OEMs and tier-1s differentiate themselves.

One of the unique offerings of DeepScale’s solutions is that it provides automotive customers with options that they haven’t had in the past. Customers can use our technology as the spearhead of their perception stack, or they can peel off modules from our product portfolio to complement the rest of their perception solution. Perception is a high-value capability that many automotive companies are putting resources into, but it’s also a huge and complex problem space. For instance, we run into situations where customers have built a great solution for lane detection, but they don’t have enough resources to also build a pedestrian detector, or can’t get their free-space recognition to compute efficiently enough for their target processor platform. We are happy to fill in the gaps and work collaboratively with our customers instead of competing with their internal teams to capture as much of the solution as possible. Flexibility has been an important proposition to our customers.

Carver21 Flexibility – Source: DeepScale

Are you working only on the software or also on the hardware?

Our approach is to focus on the software and the AI algorithms, and to work closely with partners who develop computing and sensing hardware. We have partners who develop exotic types of cameras or radars. We have partners who develop very low-cost processors that run with high reliability in cars. Those partnerships are pretty tight; we often influence each other’s design choices to some extent and collaborate on what gets built.

What has been your biggest challenge so far?

I think our biggest challenge is hiring. We’re in an era where deep neural nets have really opened up a whole new set of possibilities for computing applications. As Marc Andreessen likes to say, “software is eating the world”. He’s been saying that for close to a decade now, and I think software has eaten a lot of things. We’ve automated lots of things with software today, but we’re left with many applications that have not been automated because it’s too hard to do with normal software: they’re too ambiguous, it’s too hard to write rules for them, and so on.

I think deep learning offers the opportunity to automate a whole new set of things with computers that we couldn’t do with normal software, and I think we’re going to see that in the coming years.

Deep learning offers the opportunity to automate a whole new set of things with computers that we couldn’t do with normal software

I think the jump has finally happened in Silicon Valley from hardware, the foundation that built the tech capital, to software. There just seems to be more of a focus on software than hardware now. I don’t know that deep learning will replace the traditional software we’ve come to know, but it certainly has the potential to be as big.

Today, there are so many applications of deep learning that people have come up with that hold huge promise. The problem is there aren’t enough people who know how to build those applications well. The result is that hiring has just been so much work. We have a lot of advantages with our connection to UC Berkeley, but I think we’re in an era where coming up with an application for AI is pretty feasible, while building a team capable of delivering it is pretty hard.

Tell us about the team

We’ve been fortunate to collect some very, very bright engineers. We have 35 people in total, 27 of whom are engineers, and eight of our team members have PhDs, for the most part in deep learning. Having close to a third of the engineering team hold PhDs has been a huge power tool for us; we can really get things done quickly, and that’s been exciting. The head start that we got on very small, energy-efficient neural nets was great, and the continued progress we’re making thanks to this very high-quality team is also something I am very excited about.

DeepScale team. Source: DeepScale
