The Social Question of Artificial Intelligence

Dynamism of a Human Body, 1913, Umberto Boccioni (1882–1916). Castello Sforzesco, Milan.

In considering how AI will change us, we may also attend more narrowly to social AI. In the near future, we will have artificially intelligent apps, digital assistants, and eventually, humanoid robots that—in professional interactions, casual conversations, and even shallow romantic relationships—will be behaviorally indistinguishable from human beings. This raises two questions: first, how this development may lead us to think differently about ourselves; second, how it may lead us to treat one another differently. We will raise some concerns, but our intention is constructive—to point out the ways in which AI will force us to think hard and in some strange ways.

Thinking behavioristically about ourselves

In a future saturated by persuasive social AIs that seem to interact with us as we interact with one another, it will be difficult not to feel and believe that they are doing what we do when we do these things. We argue, crucially, that they will not—because the underlying AI will not have a subjective experience or point of view as we do. In brief, the most successful mechanism of artificial intelligence is the artificial neural network. These networks are idealized simulations of physical biological structures. There are no physical connections, only a computer program of ones and zeros that represents the equations of the neural network. One could run an artificial neural network with a pencil in a notebook, even if only with agonizing slowness. But this is not human-like thought, even if it is brain-like information processing, in an observer-dependent sort of way. Its integrity is in the eye of the beholder; we select the level of description at which to view a computational device. Unlike a mouse we might observe in a lab, it doesn’t have a strictly independent existence. Both because they are not biological and because they are observer-dependent, these calculations are not aware any more than a student’s physics homework has gravity. They do not feel any more than a flight-simulator flies.

Some have argued that this is a distinction without a difference. Consider the aforementioned “Turing Test.” Turing pointed out in 1950 that we will perceive a programmed computer to be “thinking” if it seems to converse interpersonally as we would. Turing’s test is indifferent to the mechanism by which the computer manages its feat. It is not a test of the programmed computer’s nature but of its accomplishment. Indeed, Turing believed that if the computer were persuasive, we would no longer much care whether it was really thinking, and we might even be happy to expand our definitions of “thought” or “conversation” or “love” to include whatever it was that the computer was doing.

What would such an expansion mean, though? The question matters most when we talk about social AI—the AI that we might have relationships with. If a non-conscious artifact talks about its own interior life and persuades us that it is our friend, we can assert that it “thinks” or “loves” only if we reduce thinking or loving to behavior that we interpret as appropriate to thinking or loving. Now certainly, outward behavior is how we conclude that others are thinking or loving or anything else. And yet, outward behavior is not everything we mean by those words. When we say that our children understand or love us, we are referring not only to their love-like behavior, but also to the inner life that this behavior manifests. Their subjective experience matters. It matters that the life we share encompasses our interiority. For this is what we call a relationship. But if we have reduced thought or relationships to behavior, then to speak of an AI’s “intelligence” and even “love” would be only to speak of that AI’s tendency to behave in ways that we associate with intelligence and love. The fear is that the same could become the case with humans. That is, if—in order to pitch a big enough tent to accommodate humans and AI—we learn to think of humans without reference to their interiority, then what will we be left with? Human thinkers and lovers will be classified as such merely as producers of behavior that we deem adequate to thought and love. Behavior, once an expression of interiority, will become a substitute for interiority. And we will all be not selves but behavior-producers, just like our AI companions.

Treating one another as producers of behavior

Eventually, we will live in a world filled with apparently personal social AIs, including perhaps robotic caregivers and companions. Why? Humans tend to replace human activities with technology when those activities are challenging—from GPS for navigating, to tablets for occupying children. Eventually, might many parents give the bulk of child-rearing over to robot caregivers, just as it was once far more common to employ a full-time nanny? It’s not just that some parents may not feel they have the time, or may quail before the challenge of bedtime. After all, robo-nanny wouldn’t make my mistakes. A robot will never overreact when, at bedtime, a child is found not in but beneath the bed, undressed and covered with dust while pretending to be a seal.

What will happen when AI and even robots are all around us, spoken of in terminology that implies their equivalence to us, and filling in for us with behavior that is now seen as the essence of what we meant by love? We will find ourselves in a dilemma: We will still in the end treat those AIs as tools because we will (accurately) see their behaviors as products fashioned for our consumption rather than as expressions of an interior personal life with self-possession. But this is not how it will feel; we will helplessly empathize with our apparently personal AIs. And so they will feel real, even though we will have redefined these inner states in terms of behavior.

Alongside these problems of terminology and empathy is a third problem: the forces shaping the robots’ behavior. The robots among us will be manufactured because they will sell, and they will sell because they will do the things, and act in the ways, that consumers want from a purchased assistant or companion. Recently, a man claimed to have married a robot. It doesn’t walk, it barely talks, but it does simulate certain aspects of sexual intercourse. But the sex robots of tomorrow will be domestic companions, able to read and rock climb with you as well as join you in other activities. They won’t be seen as erotic toys but as lovers and spouses who will push you to new heights—heights that you will have selected from a list of options for self-improvement.

Yet because these companions must please if they are to be purchased, they might not force us to expand our own view of what a person might be, as human relationships and human friendships can. They won’t vex us or force us to develop compassion, to re-evaluate who we are, or even to think beyond how we want them to make us think. You wouldn’t buy an app to turn your domestic companion into a sick person, confined to bed and needing your heroic self-gift even when you feel disinclined to give it.

What will this do to our relationships with other humans? We will treat our never-challenging android companions as consumer products, but we will not instinctively differentiate between androids and humans. Because we judge a thing’s inner life by its behavior—and we do so whether or not that thing has an inner life at all—we will not be able to avoid feeling that these companions are intentional subjects as we are. And so, acting as consumers of agents whom we cannot help but feel are persons, we will learn to be consumers of behavior in general—including the behavior of other human beings. We will learn to be slaveholders again.

If a machine failed to meet our expectations, we would simply return it to the store. What, then, when other humans do not conform to our expectations and desires? Is it possible that we will no longer see this as a glimpse of a wider array of humanity? That we will not struggle toward a charitable response? Perhaps instead, we may come to think of these others as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would a robot that failed to deliver the set of behaviors and reactions that we wanted to consume.

When we live in a world that doesn’t adapt to suit our expectations, we are challenged. When we meet people whose responses are not customized, adapted, and tuned to us, then we grow. Character and virtue advance by living in human community. With artificial intelligence it is otherwise. Amazon’s suggestions are based on your own browsing and purchase history, correlated by AI with the purchases of others with histories similar to yours. Amazon’s AI doesn’t jostle us beyond the groove into which we have settled. Indeed, it smooths and trains us to fit into a groove that others make for us. Education into human community is a similar process, but market-driven artificial intelligence is a poor educator and a poorer community.

In the smart houses of the future, with their android domestic companions, experience will teach us that our environments and companions deliver what we expect—or what we are trained to expect—and pay for. Yet in such a world of easy and confirmed expectations, we may forget that our views of ourselves and of others are not the horizon of the possible or the good. We set out seeking an artificial intelligence that could rise to the human level; the risk is that, as AI advances in its abilities, human subjectivity—increasingly integrated with these AI systems—will be flattened, diluted to the level of what those systems are capable of representing, or of what consumers reward in them.

In closing, we may return to words: The word “artifice” means handiwork, work of skill. Artificial intelligence is a wondrous work. And yet, if—terminologically, empathically, and commercially—we allow that artifice to define our reality, to reduce human reality to the scope of what can be ascribed to programmed automata; if we allow this artificial behavior to form our lives and our children, then we may become artifices ourselves, handiwork of our handiworks.

John M. Dolan received his doctorate in mechanical engineering from Carnegie Mellon University, where he has been a faculty member in the Robotics Institute since the 1990s.

Jordan Wales received his doctorate in theology from the University of Notre Dame, and has been on the faculty of Hillsdale College since 2014.
