Imagine you are inside your new, completely autonomous vehicle as it drives you to your next business meeting. At one of the intersections, the vehicle fails to “see” the red light, drives into oncoming traffic and crashes into a school bus. Who is to blame? The car, of course: an autonomous entity has failed to respect traffic rules. Hmm… really?
What legal personality for robots?
The subject of Artificial Intelligence is getting a lot of hype these days. Because of this level of interest, regulators are beginning to ask important questions about AI and trying to define the concepts around the topic. Remember GDPR? Well, that was the first step in this direction, as the EU began regulating access to private data, the raw material for any (deep) learning AI technique.
Techniques? Why do you call them AI techniques?
Intelligent algorithms are nothing new: they were first defined in the 1950s. Paralyzed by philosophical debates and depicted in books and movies as the perfect tyrant of the future, AI techniques long remained a research topic and a technological niche: expert systems were used in medicine, for example, to help with diagnosis.
But in order for these algorithms to work, they need a lot of computing power and training material. Compare it to a child learning what a doorknob is. They already have the computing power (the brain) available, but you still have to teach them that different doorknobs serve the same basic function even if they have a different shape or color, or are positioned differently on the door. And even then, when faced with a model they have never seen (the horizontal bar on emergency exit doors, for example), they will still hesitate to use it.
So as computing power became widely available in recent years, and with the advent of Big Data, AI techniques are beginning to show their potential, taking on low-level jobs to optimize certain human tasks.
A definition for AI?
Not one but many, as there is not one “AI” but multiple techniques. To simplify, scientists talk about two main concepts: Weak AI and Strong AI.
Weak AI recreates and amplifies human cognitive capacities. This is helpful for executing predefined tasks with maximum performance and autonomy. One simple application is mail sorting, where machines autonomously read addresses and, using dedicated hardware, sort parcels by destination.
Other applications are based on learning techniques and include visual or audio recognition systems. When you use Siri, Echo or Shazam you take them for granted, but you should know that they are the result of many hours of learning what a conversation between two people is, or what a musical tune sounds like.
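To make “learning from examples” less abstract, here is a minimal, hypothetical sketch in Python (all data and labels invented for illustration): a toy nearest-centroid classifier that “learns” a predefined sorting task from labeled examples, in the spirit of the mail-sorting machines above. Real systems use far richer features and models; this only shows the train-then-predict pattern.

```python
import math

def train(examples):
    """Learn one centroid (average point) per label from labeled examples."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign a new point the label of the closest learned centroid."""
    return min(centroids,
               key=lambda label: math.dist(point, centroids[label]))

# The "training material": made-up parcel features tagged by destination.
training_data = [
    ((1.0, 1.2), "north"), ((0.8, 1.0), "north"), ((1.1, 0.9), "north"),
    ((5.0, 5.1), "south"), ((4.9, 5.3), "south"), ((5.2, 4.8), "south"),
]

centroids = train(training_data)
print(predict(centroids, (1.0, 1.0)))  # falls near the "north" cluster
print(predict(centroids, (5.0, 5.0)))  # falls near the "south" cluster
```

The point of the sketch: without the labeled examples, the machine knows nothing; the “intelligence” is entirely distilled from the training material, which is why access to data matters so much.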
Strong AI, on the other hand, would have an artificial consciousness and emotions, and could take initiative and perform tasks without human intervention: the machine thinks on its own. In the words of Steve Wozniak, if a machine can go into an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons, then it can be considered intelligent.
This is still the realm of science fiction, and works like I, Robot or Westworld, though fascinating, remain the fruit of the imagination of very creative writers.
Can I sue a robot? Not even my autonomous car?
Well, not really!
Last year, the European Parliament issued a resolution on Civil Law Rules on Robotics recommending that the European Commission “create a specific legal status for robots”. This would imply creating a status of “electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”
This prompted a reaction from AI researchers, law and ethics experts and many other civil society leaders, who argue that regulators should not overestimate the actual capabilities of even the most advanced robots. In an open letter to the European Commission, they urge politicians not to base future regulation on a “superficial understanding of the unpredictability and self-learning capacities and a robot perception distorted by Science-Fiction and a few recent sensational press announcements.” They remind legislators that current AI techniques are all defined as weak AI.
And even if “AI” today can create works of art, the creation process still requires the intention and intervention of a human being using AI as a tool. Combining the weak AI definition with a common definition of art, what some articles call AI creations are actually human creations made using a new technique. But more on this in a future article.
Back to our topic. So who should you blame in case of AI malfunction? As with any technology, there is malfunction and there is mishandling, and present-day AI systems have both a maker and a handler. Think of the device you are reading this article on. If the display stops working even though you used the device properly, you can contact the maker and ask for a repair: it's a malfunction and it's their fault. But if you drop it into a hot lava pool while hiking in Hawaii and then the display stops working, well, sorry, but it's your fault: you mishandled it. Hope you could retrieve that lava selfie from the memory card, though…
What to watch for next?
Regulators will continue to follow this topic closely and, spurred by recent incidents of AI malfunction or misuse, they will most certainly create a framework for future research and usage.
As for people like you and me, even in a “weak AI” world we still have to educate ourselves on this topic and pay attention to those who would use AI techniques to influence our decisions.