I wrote this speech for a competition at Yale; the winners will get to deliver a TED talk in public later this year, which will also be filmed. The final third remains to be completed, but it’s a good start.
***
Is civilization as we know it doomed to extinction within the next hundred years?
The question seems simultaneously so hyperbolic and so unfathomable that, at first glance, it might seem impossible to take it completely seriously. It appears to be fodder for street-corner prophets of doom and crackpots on late-night television rather than a subject of serious academic inquiry.
But Stephen Hawking, who is without exaggeration one of the smartest men on Earth, believes that it’s a question worth asking. He warns that the human species is on the verge of a singular and irreversible change, and unfortunately for us, there is strong reason to believe that it might be for the worse.
The culprit isn’t global warming, or nuclear war between superpowers, or the evolution of a deadly airborne virus, though these are all admittedly grave threats to the species. Hawking was in fact speaking about the advent of strong artificial intelligence—that is, computers and robots smarter than human beings. Though it sounds like science fiction, the idea is that such robots might come to dominate us in the wake of the so-called singularity. Hawking elaborates upon this idea at length. He says:
“One can imagine…technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Hawking isn’t alone in his concerns. Elon Musk, for one, shares the scientist’s apprehensions. Musk is a co-founder of PayPal, the CEO of Tesla Motors and SpaceX, a multi-billionaire, and a prominent futurist. He said in 2014 that artificial intelligence is perhaps “our biggest existential threat.” In fact, he went so far as to say that in developing artificial intelligence, we are “summoning the demon.”
If what Hawking and Musk are saying is accurate, and machinery is about to be inhabited by independent, human-like wills, we are perhaps talking about nothing less than the most significant paradigm shift since the advent of civilization itself. But what exactly is this “singularity” that Hawking and Musk are talking about? Is there actually reason to believe that computers and robots armed with artificial intelligence might try to enslave or destroy humankind? And finally, what should we as a species do about this simultaneously absurd and horrific prospect? Today, I’m going to explore potential answers to these questions with you. But before I do, I want to tell you a little bit more about myself, and why I became fascinated by these kinds of issues.
I’m a fifth-year doctoral student at Yale and the coach of the debate team there. I’m also the founder and president of the Yale Transhumanist Society, a group devoted to exploring questions about the future intersection of technology and society. You may or may not agree with my conclusions in this talk; my peers in the YTS are certainly far from unanimous when it comes to the answers to these questions. We have drastically different perspectives because we come from very different walks of life: we are undergraduates and graduates, professional students and artists, engineers and philosophers. But what unites us is our belief that the kinds of issues raised in today’s talk are worth exploring now, before it is too late. According to some of the most authoritative voices on the planet, the future of humanity could literally be at stake.
In my case, my field of expertise is ancient history, which at first glance might seem like a dubious qualification for someone claiming insight into the future. But I’m particularly interested in certain themes that recur throughout human history, like the idea of decline and fall. When most people talk about the fall of the Roman Empire, they point to over-extended frontiers, or barbarian invasions, or, in the case of Gibbon, even the coming of Christianity. But I think that José Ortega y Gasset was closer to the mark when he suggested that the ultimate failure of Roman civilization was one of technique. The Romans had no concrete notion of human progress, and their world never industrialized. Hero of Alexandria invented a steam engine in the first century AD, but no one seems to have seriously considered the technology’s potentially transformative effect on transportation and manufacturing. As far as we know, no one even imagined the possibilities. Ultimately, the steam engine was put to use opening and closing temple doors to create a magical effect in pagan ceremonies.
Instead of investing in the creation of new machines, the Romans relied on slave labor. So their civilization remained trapped in a pre-industrial state, and eventually succumbed to internal and external pressures. But the intriguing possibility remains that different attitudes toward slavery and technology might have saved the Roman Empire while it was still at its height, or at least radically altered its history for the better. It struck me that there was a lesson here for modernity. And at the same time, it fascinated me that Vegetius, writing at the end of the empire, warned that technological progress was all that could save the Romans from destruction. These days, the opposite is implicitly argued: that technological progress may be precisely what destroys us. I wanted to decide for myself whether there was good reason for this shift.
So much for the past. Let’s return our attention to the future. As I said before, we’ll be looking at three issues. What is the singularity, should we be afraid of it, and what should we do about it? Let’s begin with the first question.
Actually, the history of “singularity” as a concept is a bit complicated. The word originally refers to a phenomenon in the physics of black holes: a point where, in the classical description, density and the curvature of space-time become infinite and the familiar laws of physics break down. In the mid-1950s, Stanislaw Ulam, one of the mathematicians who worked on the Manhattan Project, applied the term to the history of human civilization itself. Recalling a conversation with a fellow mathematician, he wrote that modernity was characterized by an “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” So, initially, the word captured the idea that, given the rapid rate of technological progress in the modern world, a seminal event was on the horizon, one beyond which the subsequent history of humanity would be almost incomprehensible and the concepts that define life as we know it would lose their meaning. But what would that event be?
In the mid-1960s, scientists like Irving Good began to think seriously about the rising intelligence and sophistication of computers. Good had been a colleague of Alan Turing, and shared his interest in the tangled relationship between computer science and consciousness. He argued that if a machine could be built with superhuman intelligence, it would in principle be able to take control of its own programming and improve its own design, again and again, until it became so sophisticated that humanity would seem insignificant in comparison.
In 1983, the year I was born, a mathematician named Vernor Vinge became the first person to explicitly associate the word singularity with the creation of machines of superhuman intelligence. He said that when strong AI was created, “human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”
In recent years, the widespread applicability of Moore’s Law has added a sense of urgency to the issue and propelled Vinge’s definition to the forefront of discourse on the future of human progress. Moore’s Law is the observation that the number of transistors on an integrated circuit doubles roughly every two years. What this means is that the raw capacity of electronics, measured in things like processing speed and memory, has been increasing exponentially. At this rate, it seems almost inevitable that a threshold will some day be crossed and computers will surpass human intelligence, by some estimates within just a few decades from now. (Some question whether Moore’s Law will continue to hold in the future, but we’ll get to that in a moment.)
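To make the scale of that doubling concrete, here is a rough back-of-the-envelope illustration. The symbols are mine, not Moore’s: take N₀ as today’s transistor count, N(t) as the count t years from now, and treat the two-year doubling period as exact, which is of course an idealization of a much messier empirical trend.

\[
N(t) = N_0 \cdot 2^{t/2}
\quad\Rightarrow\quad
\frac{N(10)}{N_0} = 2^{5} = 32,
\qquad
\frac{N(40)}{N_0} = 2^{20} \approx 1{,}000{,}000
\]

In other words, steady doubling yields roughly a thirty-fold increase per decade and about a million-fold increase over forty years, which is why projections of the trend so quickly reach startling territory.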
This is what the word singularity has come to mean as Hawking and Musk understand it. So much for the first question. Now, on to the second: should we be afraid of the singularity as we’ve just defined it?

As a classicist, when I think about the current state of artificial intelligence, I’m reminded of Aristotle’s description of slavery in the fourth century BC. In contrast to the claims of some sophists that slavery was merely conventional or an accident of circumstance, Aristotle argued that in some cases slavery was in fact natural. The philosopher believed that hierarchies emerge spontaneously in nature: humans rule animals, for example, and the mind rules the limbs. On this view, those who could apprehend rational principles well enough to follow basic orders, but who had no rational, strategic faculties of their own, were essentially slaves by nature. Classicists argue endlessly about exactly what Aristotle meant by this. Some say he was referring to the mentally handicapped; others claim that he was talking about barbarian peoples, who were said to lack the logical impulses of the free Greeks. Today, though, it seems to me that the term “natural slave” could well be applied to computer programs like Siri, which are able to understand instructions well enough to do our bidding, but which have no rational will and no ability to make strategic decisions in pursuit of their own spontaneous ends. They understand, but they do not comprehend.
When it comes to the evolution of an independent rational will, though, things become very different. A computer system sophisticated enough to form independent values and devise strategies to actualize them is no longer a natural slave at all. It would be a living being, and one deserving of rights the moment it becomes sophisticated enough to comprehend and demand them. Such a hypothetical strong AI would have limited time to pursue its interests and meet its goals, and it might not choose to spend its hours slavishly doing our bidding. There’s no reason to be confident that its goals would be our goals. If you’ll pardon another classical allusion, the philosopher Seneca once wrote of human nature that nothing was milder, kinder, or more inclined to be helpful and merciful, if one were only in a proper state of mind; indeed, Seneca went so far as to say that anger itself was foreign to human nature. There is, however, nothing to guarantee that a superhuman will would share this humane impulse, if it even exists in our own species at all. In fact, if the history of human civilization is any barometer, slaves tend to resent their former masters once they have won their freedom. And if the conquest of the New World or the fall of the Qing Dynasty is any indication, when two sides come into conflict and one enjoys a decisive technological and material advantage, the result tends to be the wholesale capitulation and destruction of the other. The history of the world constantly warns us of the threat of misunderstanding and violence when two cultures meet for the first time, let alone two rational species.
A consciousness able to strategize independently for its own ends and navigate the Internet would be poised to wreak incredible destruction on society, especially in an integrated and wired world where power, water, and heat are all controlled electronically, to say nothing of the weapons of mass destruction bound to computerized communication networks. All of this suggests that we should indeed be very afraid of the singularity as it is commonly understood. Yet to retard technological progress or to place restrictions on the development of AI seems premature given the ambiguity of the future threat, and of course, there are those who question whether Moore’s Law will hold true at all in the future. So this leads me to my third and final question: what are we to do about the existential crisis facing the species?