A.I. “Doomsday” Might Look Different Than We Think

Though distinct from Terminator-style “doomsday,” our technologies have already profoundly changed the way the world works.

We all have some image of what the world could look like if robots take over. Hollywood has planted in our imaginations visions of metallic killers who seek only to advance their cause by using humanity as a means to their ends. Perhaps the robots don’t even exterminate humanity, but simply farm our vital energy to power their society, weaving an intricate simulation to keep us distracted. These fears have reemerged in recent years due to advancements in, and the popularization of, artificial intelligence technologies. When the public imagination is stoked by the inflammatory influence of sensationalist media, nightmares run amok. We are beginning to live in a world pervaded by such nightmares.

Experts have been pushing back recently, telling the worried masses that artificial intelligence is a long way from any sort of apocalyptic tipping point. These doomsday scenarios, they emphasize, are completely unrealistic: by the time A.I. is sufficiently advanced, we will have carefully planned for a host of worst-case scenarios, circumventing the problem before it ever comes to fruition. Worrying about robots taking over, on this view, is akin to mid-20th-century fears that monsters grown in nuclear wastelands would terrorize gentle humans with their glowing, melted third arms.

I would like to argue that A.I., despite what some experts are saying, does have the potential to bring about irreversible doomsday scenarios, that it has begun doing so already, and that it is following in the footsteps of other powerful technologies. The transition from “normal” life to a life in which we are the puppets of digital systems will be so gradual, so subtle, that we will likely be taken along for the ride, none the wiser. To illustrate this argument, consider three technologies: car rental systems, social media, and nuclear weapons. Each offers both a portal into a future in which humanity has no clue that it is ruled by artificial intelligences, and a reflection of our own world that may already look more like that future doomsday than anyone thought.

Car Rental Systems

I recently had to rent a car for a business trip. It seemed like a straightforward transaction: I booked the rental online in advance, paid in full for a discount, and then showed up to pick up the car. Little did I know that the location I specified for pickup was not an actual rental office. Although it was listed among the pickup options, and was the exact destination of my train, the office actually sat slightly down the road under a different name.

This completely threw off my rental, and I was forced to wait over an hour and a half while the employee at the rental office, his manager, and a customer service representative in a call center worked to fix my case. They were all supremely helpful, and I truly felt worse for them, watching their frustration with the computer system, than I felt frustrated at having to wait so long. Had this occurred before computerized systems, the location mix-up would likely have been quickly corrected and acknowledged as a strange accident, and I would have received my keys and been on my way. Instead, in a world governed by digital bureaucracy, the rules of the rental software were so strict that one slight mistake (which the software allowed me to make in the first place!) required three different employees, whose expertise with the system was no match for it. It invites the old joke: how many people does it take to screw in a light bulb? At least four, apparently, if the light bulb is part of a national, computerized, strictly rule-enforced light-bulb-screwing system whose error messages only confuse those who are just trying to get some light in the room.

Anyone who has ever dealt with the computerized systems at a doctor’s office, a medical insurer, or any other large national company whose software design is anything short of top-notch knows that these issues happen all the time. In many cases, the convenience promised by digital systems is mired in poor design, rendering the systems almost impossible to use in any edge case.

My experience at the car rental office says much more about our society than that we occasionally design bad software. Because these types of issues arise all the time, they have become thoroughly normalized. Over time, we have had to accept that computer systems often fail, and that sometimes we must spend hours untangling issues that never would have arisen in a world without such rigid software.

We are already living in a world where we are controlled in a significant way by computers. The employees helping me were completely subjugated to the will of the stubborn rental system. Human ingenuity and power were rendered meaningless because the computer system held all the keys to unlocking my car rental. Because the world gradually implemented digital, bureaucratic systems, everyone simply went along for the ride. In most cases, a low-level employee has no say in whether their employer forces a broken, complex system on their job. A few people change the way the world works, and the rest of us slowly accept that this is our new reality.

Different technologies affect us in different ways. The car rental issue is merely frustrating; when other technologies exhibit flaws in their design, the outcomes can be far more pernicious. To examine the effects of, and the response to, wide-scale deployment of machine learning systems, we need look no further than the paragon of frustrating technology: social media.

Social Media

Facebook, Instagram, Twitter, and Snapchat have all been embroiled in scandals in the past few years, garnering criticism from many diverse groups. Social media platforms transformed from simple networks for connecting old friends and classmates into daily hubs for news and politics. Unfortunately, because these systems are primarily financially motivated, profiting from advertising revenue calculated by the number of eyes that see ads on the site, social media networks also knowingly transformed themselves into addictive systems. By feeding users content they will inherently enjoy, playing into their existing biases, these platforms leave unsavvy users ideologically isolated and amplified, since they see only news and opinions that agree with their own.

This type of machine learning, aimed at learning users’ behavior in order to draw them back to an app or website and maximize profit, has had a profound effect on our society. It is now commonplace to recognize that overuse of social media can be very dangerous. People become addicted to Facebook, Twitter, or Instagram, and are constantly surrounded by information that may be entirely distinct from what someone with different preferences sees. This has led many users to adopt worldviews radically different from those of their political or ideological opposites, believing them to be grounded entirely in “facts” they read on the internet. A.I. technologies employed via social media have affected so many individual viewpoints that societal discourse as a whole has profoundly shifted as a result.
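
To make this feedback loop concrete, here is a minimal sketch of an engagement-maximizing ranker. It is purely illustrative: the names (Post, User, rank_feed, predicted_engagement) and the whole design are my own assumptions, not any platform’s actual code. A single early click biases the feed, the biased feed harvests more of the same clicks, and the bubble closes:

```python
# A purely illustrative filter-bubble simulation. The design and all
# names (Post, User, rank_feed) are hypothetical, not real platform code.

from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str  # e.g. "politics-left", "politics-right", "sports"

@dataclass
class User:
    clicks: dict = field(default_factory=dict)  # topic -> past click count

def predicted_engagement(user: User, post: Post) -> float:
    """Score a post by how often this user clicked its topic before."""
    total = sum(user.clicks.values()) or 1
    return user.clicks.get(post.topic, 0) / total

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    """Put the posts the user is most likely to engage with first."""
    return sorted(candidates,
                  key=lambda p: predicted_engagement(user, p),
                  reverse=True)

def simulate(user: User, candidates: list[Post], sessions: int) -> None:
    """Each session the user clicks the top item, which retrains the ranker.
    The loop is self-reinforcing: every click pushes that topic higher."""
    for _ in range(sessions):
        top = rank_feed(user, candidates)[0]
        user.clicks[top.topic] = user.clicks.get(top.topic, 0) + 1

if __name__ == "__main__":
    posts = [Post("politics-left"), Post("politics-right"), Post("sports")]
    user = User(clicks={"politics-left": 1})  # one initial click
    simulate(user, posts, sessions=10)
    print(user.clicks)  # {'politics-left': 11}: the bubble has closed
```

Real recommendation systems are vastly more sophisticated, but the incentive structure is the same self-reinforcing loop: rank by predicted engagement, then learn from the engagement you just caused.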

As with the bureaucratic car rental system, people who use social media simply became acclimated to the fact that these sites were harvesting their data and distorting their worldviews. At least bureaucratic systems, through their bugs and poor design, sometimes reveal their flaws and make it clear that digital systems have an unfortunate, frustrating grip on our lives. Social media, and machine learning recommendation systems in general, are designed with a flawed outcome in mind: even when they work correctly, they have a negative societal impact.

When people think of A.I. doomsday robots controlling our lives, the picture of social media systems as those same doomsday robots never crops up. Of course, we are not brutally subjugated against our will, but we are, to a degree, heavily constrained by systems like social media that govern our lives. If something informs your worldview, that worldview then controls the actions you take based on your beliefs. When A.I. researchers dismiss the fear of systems taking more control of our lives than we want, I believe they are missing this context. Too often, computer scientists and software engineers are so isolated in the world of computing that they fail to recognize the deep social change their technologies make all the time.

Wide-scale normalization of incredibly destructive technologies is not new to humanity. In recent memory, it has become almost commonplace to accept world-ending nuclear militarization as the “way things are.” Where more voices once spoke up against the proliferation of this destructive capability, the world has moved past the initial shock of such notions, even though the threat posed by the technology is arguably more significant today.

Nuclear Technology

Technologies that are initially shocking due to their destructive capabilities can too easily become normalized when conversation about them is lacking. Our post-Cold War world cares less about the spread of world-ending nuclear weapons, and the destruction of nuclear treaties, than it did only a generation or two ago. Public conversation about nuclear proliferation has all but stopped, as though a nuclear world were inevitable. The truth is that the proliferation of nuclear weapons is absolutely not an inevitability, whatever some world leaders or technicians may suggest. Inventors, innovators, and state leaders choose how technology progresses based on their ideologies and goals. A U.S. president who drives toward military hegemony will be more likely to rip up nuclear arms treaties than one who wishes technology to be used for humanitarian purposes. The only reason the dissemination of these weapons is considered “the way things are” is that so-called leaders have actively pushed for it to be that way, motivated by profits, power, or perhaps ignorance tinged with madness.

Artificial intelligence will likely follow a similar path. Because of its incredible power, decisions about restricting or liberating its uses, and about where to integrate it into society, will initially fall on the shoulders of political and industry leaders. When only a handful of individuals make enormous decisions like these, wide-scale societal effects are heavily influenced by their perspectives, or lack thereof. After a phase of initial public scrutiny, there is a great risk of important discourse fading away, leaving only a few to deliberate the fates of many.

We should view A.I. as a technology whose power is of the same magnitude as that of nuclear weapons. While the effects of these systems may not be as immediately shocking as the use of a nuclear weapon, it stands to reason that the long-term societal effects of A.I. systems can be as far-reaching as the multi-generational aftermath of Hiroshima and Nagasaki. We cannot allow the world to become so acclimated to the manipulations of A.I. that only a few have any say about its effects. The people who stand to be most affected by these systems should have a permanent say in how they are developed and used. Discourse should be constantly encouraged, even (and especially) when it gets in the way of short-term profits or power gains.

Just as the world is one simple mistake away from a doomsday driven by nuclear catastrophe, a not-too-distant future world may face the added threat of a subtle, societal doomsday driven by an undiagnosable digital flaw or the unnoticed mistake of a handful of programmers. Those who make decisions about both of these technologies are consciously making choices that shape the entirety of human society. Any malintent, bias, or ignorance on the part of those in control could bring about exactly the doomsday that experts call essentially impossible. It is very dangerous that those same experts are becoming part of the normalization process. They should be constantly countered by an equally powerful, and diverse, set of voices calling for caution. If the balance ever swayed too far toward normalization, it could mean complete control by an unaccountable, potentially malicious few.

Accountability May Mean Life or Death

A threat perpetuated by the potential ignorance or mistakes of the few has only one logical counterpart: mitigation by the many. Technologists do not like to talk about this, because they often become too wrapped up in their own expertise to consider the opinions of those whose lives are most affected by their potential failings. They imagine, understandably, that any ordinary person’s opinion on decisions about A.I. would be so uninformed that technologists alone hold the mandate to shape the development and use of their systems.

This mindset is surely folly. The perspective of those whose lives are most changed by powerful technologies is essential to any balanced conversation. Nuclear proliferators would surely hold a different opinion of the technology if their homes had been utterly annihilated by a nuclear weapon. Executives at Facebook would likely think differently about their greedy recommendation systems if they had found themselves unknowingly manipulated by the algorithms. Those who have the privilege of standing outside the blast zone of these technologies often lack the perspective, or the empathy, to imagine the disastrous effects applying to them. A.I. experts who dismiss domination by computer systems are right to dispel Terminator-fueled hysteria, but wrong to dismiss entirely the notion that our societies really could be controlled by computers without the requisite human understanding to turn back.

Those who design artificial intelligence and machine learning systems must build in enough degrees of control that any potentially disastrous effects can be easily mitigated. They must also commit to taking the view of the “common person” into account when making significant decisions. As the scandals surrounding Facebook’s and Twitter’s use for political manipulation showed, the only existing avenue for changing how these systems affect your life is public outrage.
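
As a thought experiment, here is what “degrees of control” might look like in practice: a minimal sketch, entirely of my own invention (the GuardedRecommender class and its parameters are hypothetical, not any real platform’s API), of a ranker wrapped in a guardrail that enforces viewpoint diversity and gives a human operator a kill switch:

```python
# Hypothetical sketch of "degrees of control" around an ML ranker:
# an auditable wrapper with a diversity floor and an operator kill switch.
# None of these names correspond to a real platform's API.

from typing import Callable

Post = dict  # e.g. {"topic": "politics-left", "score": 0.9}

class GuardedRecommender:
    def __init__(self, rank: Callable[[list[Post]], list[Post]],
                 min_distinct_topics: int = 2):
        self.rank = rank                        # the underlying ML ranker
        self.min_distinct_topics = min_distinct_topics
        self.enabled = True                     # human-operated kill switch

    def feed(self, candidates: list[Post], k: int = 5) -> list[Post]:
        if not self.enabled:
            # Degraded but safe mode: no personalization at all.
            return candidates[:k]
        ranked = self.rank(candidates)[:k]
        topics = {p["topic"] for p in ranked}
        # Guardrail: if the feed has collapsed onto too few viewpoints,
        # swap in items from unrepresented topics, lowest-ranked slots first.
        slot = len(ranked) - 1
        for p in candidates:
            if len(topics) >= self.min_distinct_topics or slot < 0:
                break
            if p["topic"] not in topics:
                ranked[slot] = p
                topics.add(p["topic"])
                slot -= 1
        return ranked

if __name__ == "__main__":
    # A deliberately biased ranker that always favors one topic.
    def biased_rank(posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p["topic"] == "politics-left",
                      reverse=True)

    posts = [{"topic": "politics-left"}] * 4 + [{"topic": "sports"}]
    guarded = GuardedRecommender(biased_rank)
    print(guarded.feed(posts, k=3))  # guaranteed to include "sports"
```

The point is not this particular mechanism but the design posture it illustrates: the system’s narrowing tendencies are measured, bounded, and revocable by humans, rather than left to optimize unchecked.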

Due to the lack of popular representation in corporations who control much of the world, and unwillingness of governments to regulate A.I., right now the burden of accountability falls on ordinary people. Recognizing the potential for gradual “doomsday” to occur, and voicing our concerns in a loud way, is the best way right now to ensure accountability of powerful technology actors. If technologists or potential regulators do not listen to the concerns of the many, due to ignorance or arrogance, their choices could mean life or death for many around the world.

To view machine learning technologies in any other light is to look away from deep troubles plaguing the world. Sure, machine learning and artificial intelligence will probably not create killer robots hellbent on controlling us through a sophisticated simulation. However, if we’re not careful in our analysis of these systems, we may wake up in a world where our lives are dominated by machines that were created by a few, that cannot easily be rolled back, and whose control over us we may not even recognize until it’s too late.

Opinions expressed by AI Time Journal contributors are their own.

About Nick Rabb

A.I. Researcher, Technologist & Philosopher. Nick is an independent writer who focuses on the philosophy of technology, frequently in his area of research, A.I. He is a PhD student at Tufts University. All views are his own.
