AI Problems are Human Problems


Harvard’s Belfer Center for Science and International Affairs — a paragon of higher learning and perhaps even higher folly.

I recently attended a meeting of great minds at Harvard’s Kennedy School of Government titled “Governing AI — How Do We Do It?”. The room was full of high-profile attendees, ranging from distinguished Harvard faculty to cybersecurity gurus to members of a delegation working with the Danish Minister of Higher Education and Science. As an extremely low-profile software engineer and prospective graduate student, I felt quite intimidated, to say the least.

Passionate discussion ranged from the challenges surrounding AI governance and policy, to the mythos of horror surrounding the term “AI” as opposed to “machine learning”, to issues of transparent algorithmic decision-making and accountability: lofty topics for an 8 AM gathering. These topics are of great interest to me, so throughout the discussion my nerves eased. Rather than quietly quaking in my chair, I quickly focused on the fascinating banter being exchanged between experts in the field. As I relaxed, I took a read of the room and could not help but notice the concern, dare I say fright, subtly leaking through practiced, professional facades. With every nervous laugh following a joke about technological doomsday, the veil of expertise wore thinner. Something soon became apparent to me: even some of the experts do not know what to do about the challenges AI poses.


I have no illusions about the nature of wide-scale problem solving throughout the course of history. Rarely are sweeping changes noticed, worked on, and introduced to the populace by genius technocrats. Instead, magical innovations are often the synthesis of seemingly disparate ideas; cultural shifts occur not because of governmental policy, but because of shifts in collective consciousness and the transformation of norms. Still, I sometimes imagine that field experts have, if not all the answers, then at least more answers than an undecorated commoner like me. That was obviously not the case during this discussion.

What is it, then, about managing the responsibility brought on by a technology as advanced as AI that makes imagining its governance so difficult? Is the underlying technology too complex? Are we working with forces we barely understand? The answer became obvious to me as I listened to the confusion bouncing off the walls and culminating in a haze of uncomfortable, morbid laughter: the most pressing issues facing the control and governance of AI are the same issues that have faced humanity since the dawn of civilization.

Intelligence Inspired by Humanity

Humans, by nature, create tools in their own image. We often take inspiration from nature, or from our own biology, to model systems that thwart the oppressive rules of our reality. Artificial intelligence systems are loosely modeled on the way human brains operate: they take in information and activate neurons that are not individually smart, but that combine into probabilistic models mimicking our own pattern-recognizing intelligence. Given this similarity, it seems abundantly obvious to me that the problems facing human-created artificial intelligence will be the very ones that plague human intelligence.
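To make the analogy concrete, here is a minimal sketch of a single artificial neuron in plain Python. The inputs, weights, and bias are arbitrary values chosen for illustration, not taken from any real system; in practice the weights are learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid so the activation lands between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Arbitrary illustrative values; a trained network would learn these.
activation = neuron(inputs=[0.5, 0.9], weights=[0.4, -0.7], bias=0.1)
print(activation)  # roughly 0.42
```

A single neuron like this is trivially simple; whatever intelligence emerges comes from wiring millions of them together and tuning their weights against data, which is precisely why their collective behavior inherits so much from the humans who produce that data.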


Accountability, transparency, and model bias are a few of the concerns that arose from the meeting. AI needs to be held accountable for its decisions: good or bad, each must be assigned an actor to receive praise or blame and face the subsequent repercussions. AI needs to be transparent, revealing the logical path to its decisions in a way that humans can understand and internalize. AI must strive to act in unbiased ways, learning from innocuous data sets, so that its decisions reflect our highest societal values. This can all sound overwhelming, even alien, when thinking about the accountability of a digital intelligence. Some perspective helps, however: these same sticking points have plagued human society since its inception, and they are very human problems that we ourselves have not yet worked out.

Human systems of power and organizational structures are hardly accountable, transparent, or unbiased in any meaningful sense of those words. Top leaders of failed banks were not held accountable after helping to collapse the global economy in 2008. Governments and private corporations fail to be transparent as they work to gain power or profit by coercing constituents through lies and propaganda. Individuals frequently fail to recognize how easily bias creeps into their own biological models of reality. Thinkers have been diagnosing these very human ailments for thousands of years. Any notion of complete novelty assigned to these challenges, in the context of AI, should be immediately recognized as a fallacy given the extensive track record of the same human struggles.

Our Daily Lives are Just as Complex

Until the problems surrounding AI are recognized as human, and potentially social, at their core, those most affected by the worst follies of societal integration with intelligent systems will likely be treated to patronizing shushing mixed with technocratic, jargon-filled “solutions.” Technologists have a habit of condescending to those who are allegedly too uneducated to understand the complexities and nuances of the issues. Rather than accept the responsibility to educate, inform, and make technology work for society, the least-affected elites have historically chosen to hide behind a wall of complexity and feign innocence.

The technocratic argument is contradicted by the fact that we treat equally complex, and often far more complex, notions as common sense in our day-to-day lives. The troubles of a struggling single mother are endlessly complex, yet a large part of even “uneducated” society has come to view this problem as one worth addressing. The complexities of love, emotion, and relationships between people are endlessly ambiguous, much more so than how a machine learns to play Pong. These human issues require a great deal of introspection and empathy to understand and act upon in productive ways. Just because they are human conditions does not mean they are simple, and certainly not simpler than any digital technology. Yet we each face up to these challenges in our own lives and manage to develop an understanding of the underlying systems at play. Diagnosing the issues facing AI should be a walk in the park compared to the existential struggles we face every day.


Understanding the challenges artificial intelligence faces through a more human lens will undoubtedly lie at the heart of future solutions. Even today, the rampant bias and overfitting of news feed recommendation are best viewed through a human analogy. A naive, overfitted news feed recommender is akin to a dedicated human friend who sifts through the news, really wants to make you feel good about yourself, and thus only presents you with articles that fit your preconceived notions. This friend has learned certain traits about you and has labeled you as “liberal”, “Latino”, or “transgender”, among a host of other identities. They have also learned to recognize patterns in news articles, assigning them similar tags like “appeals to liberals” or “enrages homosexuals”. Since your friend naively wants to please you (they stand to profit from your pleasure), they only recommend articles where the tags match up. We know that this friend is doing a disservice to any democratic notion of being informed, since being informed necessitates a healthy mix of opposing and diverse views.
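To make the analogy a bit more literal, here is a toy tag-matching recommender in Python. The user tags, articles, and matching rule are my own invention for illustration; no real platform’s algorithm is this simple, but the failure mode is the same.

```python
# A naive tag-matching recommender, mirroring the "pleasing friend"
# analogy above. All tags and articles are invented for illustration.

user_tags = {"liberal", "latino"}

articles = [
    {"title": "Policy cheered by progressives", "tags": {"liberal"}},
    {"title": "Policy cheered by conservatives", "tags": {"conservative"}},
    {"title": "Community spotlight", "tags": {"latino", "liberal"}},
]

def recommend(user_tags, articles):
    """Recommend only articles whose tags overlap the user's tags,
    maximizing agreement and filtering out every opposing view."""
    return [a for a in articles if a["tags"] & user_tags]

for article in recommend(user_tags, articles):
    print(article["title"])
# The conservative piece never reaches the feed: the "friend" has
# overfit to who you already are.
```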

This type of personification of neural algorithms should serve as a model for diagnosing and anticipating such issues in intelligent systems. When imagining how to hold AI accountable, we should look to our ideals for holding powerful humans accountable. Transparency in intelligent computing should reflect the standards of the greatest teachers and communicators humanity has ever produced. Bias should be rooted out the same way humans root it out: by taking into account as broad and diverse a perspective as possible.


Where the minds in the room at Harvard were led into the weeds by technologists and policy experts concerned with the minutiae of machine learning, I have no doubt that any average person, viewing AI as they would another human, would propose much sounder solutions to the challenges posed by the technology.

Never did the participants in the discussion consider looking for solutions in the wisdom of the commoner. While attendees gushed about how intelligent systems should follow the United States’ unequivocally majestic form of democratic participation, the notion that this same democracy is deeply and tragically flawed never entered the room. Any average person waiting for the bus in Harvard Square could tell you the flaws in “democracy” simply by measuring their average, or perhaps sadly below-average, quality of life against the rhetoric of superiority: an undoubtedly more nuanced view to include in considerations of AI governance.

AI Should be Our Mirror

While it is true that humanity struggles with these sometimes complex, reflective notions, and that human solutions will likely be flawed, we should recognize the important opportunity that AI affords us. Discussions about the challenges facing AI make clear that humans have hardly overcome such hurdles ourselves. In having those discussions, however, we get a rare chance to humble our religious humanism and admit that there are significant issues with which we have not yet dealt.

Where taking in the evidence provided by average people has failed, we should strive to use AI as an icebreaker to spur discussions of governmental and individual accountability, corporate transparency, and systemic bias and hatred. Perhaps the lens of artificial intelligence can act as a mirror: we may begin to realize that the creation on the other side closely resembles us, slightly smarter primates who accidentally stumbled upon intelligence.

I believe our future prosperity depends on seeing ourselves in our creations and recognizing the deep flaws that humanity must address. Empathy and technology have reached a point of convergence that could spell disaster if the powerful continue to fail to recognize their own shortcomings and refuse to truly listen to those “beneath” them. Without such reflection, machines far more powerful than even the richest elite may soon wield enormous influence, laden with humanity’s greatest faults, because their creators were too scared to look at themselves and see room for improvement.


