In Defense of Transhumanism

My article appeared last year in the Washington Post.

When I first tried to start a club for the study of transhumanism at Yale, I was astounded by the university’s response. The chaplain intervened and vetoed the request. An email explained to me that there were already enough atheist groups on campus, evidently on the assumption that the words humanist and atheist were synonyms. I found myself awkwardly assuring a series of administrators that transhumanism had nothing to do with transgender students who didn’t believe in God. Broadly speaking, it involves the use of futuristic medical technology to lower the incidence of disease, enhance the capacity of the imagination and prolong the human lifespan. “We’re into things like cyborgs and genetic engineering,” I said.

It seems to me that while transhumanism resembles its progenitors, it is distinct from each of them, and lessons can be drawn from all of them.

First, there is the ugly specter of the eugenics movement, a disaster associated with decades of pseudoscientific research in an embarrassing array of discredited fields. People who see transhumanism as an extension of eugenics may be concerned that future policies could lead to rising inequality, intolerance for difference and the abuse of power.

In the future, with in vitro fertilization available to the rich, embryos will be screened for genetic profiles probabilistically likely to thrive according to various indicators. As we gain increasingly precise knowledge of the human genome and the probabilities of healthfulness associated with different genotypes, it will eventually be possible to select children likely not only to be healthy but also to excel. If the public does nothing, this could lead to an unjust scenario in which fitness and intelligence map onto the socioeconomic level of one’s parents. Legal restrictions on the selection of embryos on the basis of genetic health, however, would be hugely regressive and counterproductive.

Transhumanists should demand the possibility of such prenatal care for all citizens rather than allowing the free market to restrict it to the few. In the long term, the development of increasingly efficient gene editing technology (both in vitro and, some day, in the womb itself) will likely significantly lower the associated costs. Although the horrors of eugenics should serve as a sobering reminder of the evil that can be perpetrated in the name of progress, they should not stifle discussion in the academy about the responsible implementation of genetic engineering in the future.

The second major source of transhumanist thought is science fiction, a genre that tends to favor dystopian narratives because they can be made so colorful from an artistic perspective. Despite all of the 19th-century novels bemoaning the effects of the Industrial Revolution, I suspect that if we could go back in time, we would still choose to industrialize. But perhaps the shape of the revolution would be different — we would hopefully pay attention to the kinds of things the novelists and poets complained about — for example, we might be less abusive toward the environment and more respectful of the rights of workers from the outset.

In our future, daily life will be transformed through the increasing automation of labor and the rise in sophistication of artificial intelligence. Life may be less about the 9-to-5 grind and more about education, community and the creation and enjoyment of art. Rather than imagining a future in which humans and machines are at odds — as many thinkers have predicted — transhumanists look forward to the advent of cyborgs, in which computers are incorporated into the brain itself, leading to radically enhanced processing power and the ability to preserve consciousness for lengths of time now deemed inconceivable. The ultimate lesson from transhumanism’s origins in science fiction is perhaps to seek those inventions that would radically enhance lifespans and empower the human imagination to control what it experiences in ways hitherto unimaginable, liberated from the genetic and circumstantial wheel of fortune.

A third source of transhumanist ideas, and the one of greatest interest to me, is the tradition of humanism. When Cicero used the word “humanitas” to symbolize the noblest aspects of our species’ character, he showed that he believed something fundamental separated human beings from all other types of beings — the inculcation of our rational faculties and our ability to apply those faculties over time to the development and preservation of our civilization.

Today, we often hear that truth is a construct and nothing but a reflection of power. Values are relative. But humanism and the idea of progress stand as rejoinders, and transhumanism falls squarely in line with this tradition. How can we best harness the power of progress? Not by seeking to control and exploit people different from us, a transhumanist might say, but by attempting to alleviate suffering and build bridges between imaginations. A willingness to empower more people than ever before to be born healthy, intelligent and able to devote long and meaningful lives to love, leisure and lifelong education is, to me, transhumanism at its best — an antidote to postmodern malaise.

https://www.washingtonpost.com/news/in-theory/wp/2016/05/18/in-defense-of-transhumanism/?utm_term=.e4578111b4d0

Bring on the Cyborgs: Redefining the Singularity

Here is the final version of my speech “Bring on the Cyborgs: Redefining the Singularity.” I presented it as a TED talk at Yale. The audition video can be seen in my posts from earlier this year.

***

Stephen Hawking, Bill Gates, and Elon Musk are afraid. Afraid of our computers turning on us. Afraid that Siri will go from botching directions to taking over and crashing our cars. This is what they call the singularity.

The smartest and most powerful men on Earth are right to be concerned about the future. But in this speech, I’m going to propose a solution to save our species. It involves rethinking the concept of the singularity and reimagining our destiny as human beings. Without exaggeration, this topic might be the single most important one on Earth.

I’m a doctoral student at Yale in Roman history and the founder of Yale Students and Scholars for the Study of Transhumanism. It might seem strange that an ancient historian has an interest in studying the future. But don’t be so surprised.

Ancient historians are interested in the beginning of things like drama, democracy, and the idea of equality before the law. I’m interested in the singularity—and transhumanism—because today we are once again at the beginning of something new. And new beginnings are when we need to pay the most attention to the lessons of the past.

Historians know that technology has not always advanced in a straight line forward. At the Great Library of Alexandria, two thousand years ago, a scientist appropriately named Hero invented the first steam engine. The first computer in history, the Antikythera Mechanism, was developed over a century earlier. Both, however, were toys for the wealthy instead of tools to improve the lives of the masses.

Everyone asks me why Rome fell. I ask a different question. I ask, what could have saved Rome? And then I remember the steam engine and the computer, and I say: technology.

What is the singularity? Technically, it refers to the point inside a black hole where space and time don’t exist as we know them. But the word’s meaning has been expanded over the years.

In the 1950s, the mathematician John von Neumann applied the term to the history of human civilization. The singularity, he thought, was a point in history after which human affairs themselves would become fundamentally unrecognizable. Then, around the time I was born in Israel in 1983, the mathematician Vernor Vinge defined the singularity as the point when artificial intelligence would create a world “far beyond our understanding.”

Von Neumann and Vinge had something in common. They imagined human progress escalating and accelerating as we approached the singularity. Today we have a parallel concept: Moore’s Law. Moore’s Law states that the number of transistors on integrated circuits doubles every two years.

This means that computers are becoming exponentially more powerful, and the data since the 1960s have backed this up. Now, some question whether Moore’s Law will continue to hold true in the future, and I’ll get to that critique in a moment.
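
To make the arithmetic concrete, here is a minimal sketch in Python. It is purely illustrative: the function and the printed milestones are my own framing of the two-year doubling described above, not anything from the talk itself.

```python
# Illustrative arithmetic for Moore's Law: a quantity that doubles
# every two years grows exponentially with time.

DOUBLING_PERIOD_YEARS = 2  # the doubling period cited above


def growth_factor(years: float) -> float:
    """Growth in transistor count after `years` years of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)


for years in (10, 20, 40):
    print(f"After {years} years: ~{growth_factor(years):,.0f}x")

# Output:
# After 10 years: ~32x
# After 20 years: ~1,024x
# After 40 years: ~1,048,576x
```

Four decades of steady doubling multiplies transistor counts by about a million, which is why the word “exponential” matters so much here.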

But if it does hold true, you may understand why so many brilliant people might be scared. One can easily imagine computers becoming so powerful – so fast – that they take control over their own programming and come to overpower us.

Mankind has often feared the conscious wills which it enslaves. As a classicist, I’m reminded of Aristotle’s “natural slaves.” The idea was that those who were able to apprehend rational principles well enough to follow basic orders but who simultaneously possessed no rational strategic faculties of their own were essentially slaves by nature.

Classicists argue about the people that Aristotle might have had in mind—a professor once even told me that he was really talking about the mentally handicapped: people like my brother Dinh. Today, I’d argue, it sounds like we’re talking about Siri. Siri can understand my directions and execute them, but has no strategic ends of her own. Computers like Siri understand us, but they don’t really comprehend us.

But what happens when a computer is sophisticated enough to form independent values? It certainly wouldn’t be Aristotle’s “natural slave” anymore. But here’s why people like Hawking are worried: its values might not be our values; its goals might not be our goals. Over the course of human events, slaves have tended to resent their former masters.

And if the conquest of the New World and the fall of the Qing Dynasty are any indication, where contention exists in the presence of technological and material inequality, there tends to follow the wholesale destruction and capitulation of one side of the struggle.

But I have hope for a different future. A future of which we can be proud. A future toward which we can work together. A future in which humans and machines are not enemies at war, but are one. This is where Transhumanism comes into the picture.

Transhumanism means using technology to enhance human capabilities. People already have pacemakers, hearing aids, and artificial limbs. Transhumanism is simply an elaboration of that. But why is the idea of transhumanism important to the singularity?

I’ll tell you. Transhumanism holds out the possibility that we will heal not only our hearts and our bodies, but also our minds. In the future, it may be possible to replace parts of the brain with computers—curing diseases like my late grandmother’s Alzheimer’s, and radically empowering us to shape our own dreams, metaphorically and literally.

If Moore’s Law continues to apply, we need the enhancements of transhumanism to stay one step ahead of our machines before they become smart enough to take control over their own programming and become more powerful than we can even imagine.

Machines may not share our passion for the preservation of civilization. But enhanced human beings will still have human experiences like membership in a community and feelings of pleasure, pain, and love.

If Moore’s Law does not hold true, however, as many computer scientists have argued, the need for transhumanism will be even greater. Our ability to create smaller and smaller microchips will eventually run into intractable barriers at the frontiers of our knowledge of quantum mechanics.

At that point, which could be no more than a decade away, new ideas will be needed. The time will come when we will need better materials than silicon, and the best alternative will be genetically engineered cyborgs.

The advantage seems clear: why reinvent the wheel when the human brain is itself a great technology shaped by millions of years of evolution? Why reverse engineer what a human brain can do when the brain itself can be enhanced by machines?

Transhumanist technology can cure diseases, enhance intelligence, allow us to shape our dreams, and empower us to control our destiny as a species. But it must be available to all, and not only a chosen few. Its free choice or rejection must be a human right.

When access to the technologies associated with Transhumanism becomes a human right, our hopes and dreams will be transformed. When the brain is augmented by technology, and we understand the electrochemical foundations of consciousness, barriers to communication and understanding will come crashing down.

We will have the power to decide the content of our nightly dreams—anyone can feel like an NBA All-Star, the world’s most attractive movie star, or literally one of the stars of the Milky Way. Without the need to fight over resources, our ecological crisis will be solved and our Earth will be protected and healed, halting the destructive race to the bottom of industrialization.

As a historian, I can even imagine accessing the lives of our ancestors as experienced through their eyes. Life will be a blank canvas and a paintbrush for all of us. And we will all be equals in a fellowship of artists.

Given all that is true about transhumanism and the singularity, we are all obligated to bring that future closer. Each moment of delay means untold pain, suffering, and death. But each step of progress brings us one day closer to the dream and the promise of Transhumanism.

What must be done to bring that future closer? First, we must deal with the panic about the singularity. Fear of the singularity stems from its old definition. I want to redefine singularity to mean the point in technological progress when our relationship with machines becomes a seamless, shared consciousness.

The singularity will occur when we have the power to jump out of our bodies and into a cloud of pure imagination. The singularity will allow our imaginations to reach the boundaries of the universe.

This speech is a challenge: A challenge to all people who share my hope for humanity’s future. To bring humans and computers together, we as humans must come together and agree upon our shared purposes. As human beings, we are all enslaved to the genetic and circumstantial wheel of fortune. On Earth as it is, where you are born, and who you are born to, matter more than the content of your character. This must change.

To believe in transhumanism we need to believe in human progress again. Since the horrors of the twentieth century, we have retreated from such confidence. But Transhumanism is not tied to any single culture or broken ideology of the past. It is bound to our essential attributes—what makes us human—our imaginations, our feelings, our hopes, our dreams.

As a student of ancient history, I see the traces of transhumanism in the earliest records of human thought. When Cicero used the word humanitas to symbolize the noblest aspects of our species’ character, he showed that he believed something fundamental separated human beings from all other types of beings—the inculcation of our rational faculties and our ability to apply those faculties over time to the development and preservation of our civilization.

The only thing that we should fear is delay. We need more than a transhumanist society. We need transhumanist departments at every university. We need interdisciplinary study—in the humanities and the sciences—in order to probe the nature of our own natures in unprecedented ways. We need the courage and the legitimacy and the vision to undertake the research that must be done.

The most powerful men in the world are afraid of the future. But I am ready to face it. Are you?

On the Singularity, Original Preamble

I wrote this speech for a competition at Yale; the winners will get to deliver a TED talk in public later this year, which will also be filmed. The final third remains to be completed, but it’s a good start.

***

Is civilization as we know it doomed to extinction within the next hundred years?

The question seems simultaneously so hyperbolic and unfathomable that at first glance, it might be impossible to take it completely seriously. It appears to be fodder for street-corner prophets of doom and crackpots on late-night television rather than the subject of serious academic inquiry.

But Stephen Hawking, who is without exaggeration one of the smartest men on Earth, believes that it’s a question worth asking. He warns that the human species is on the verge of a singular and irreversible change, and unfortunately for us, there is strong reason to believe that it might be for the worse.

The culprit isn’t global warming, or nuclear war between superpowers, or the evolution of a deadly airborne virus, though these are all admittedly grave threats to the species. Hawking was in fact speaking about the advent of strong artificial intelligence—that is, computers and robots smarter than human beings. Though it sounds like science fiction, the idea is that such robots might come to dominate us in the wake of the so-called singularity. Hawking elaborates upon this idea at length. He says:

“One can imagine…technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking isn’t alone in his concerns. Elon Musk, for one, shares the scientist’s apprehensions. Musk is one of the founders of PayPal, the CEO of Tesla Motors and SpaceX, a multi-billionaire, and a prominent futurist. He said in September of 2014 that artificial intelligence is perhaps “our biggest existential threat.” In fact, he even says of artificial intelligence that we are “summoning the demon.”

If what Hawking and Musk are saying is accurate and machinery is about to become inhabited by independent anthropomorphic wills, we are perhaps talking about nothing less than the most significant paradigm shift in the history of civilization since the advent of the concept of civilization itself. But what exactly is this “singularity” that Hawking and Musk are talking about? Is there actually reason to believe that computers and robots armed with artificial intelligence might try to enslave or destroy humankind? And finally, what should we as a species do about this simultaneously absurd yet horrific prospect? Today, I’m going to explore potential answers to these questions with you. But before I do, I want to tell you a little bit more about myself, and why I became fascinated by these kinds of issues.

I’m a fifth-year doctoral student at Yale and the coach of the debate team there. I’m also the founder and president of the Yale Transhumanist Society, a group of people interested in exploring answers to questions about the future intersection of technology and society. You may or may not agree with my conclusions in this talk; my peers in the YTS are certainly far from unanimous when it comes to the answers to these questions. We have drastically different perspectives because we come from very different walks of life: we are undergraduates and graduates, professional students and artists, engineers and philosophers. But what unites us is our belief that the kinds of issues raised in today’s talk are worth exploring now, before it is too late. According to some of the most authoritative voices on the planet, the future of humanity could literally be at stake.

In my case, my field of expertise is ancient history, which at first glance seems like a dubious qualification for someone claiming insight into the nature of the future. But I’m particularly interested in certain themes that are universal in human history, like the idea of decline and fall. When most people talk about the fall of the Roman Empire, they assert that it was a matter of over-extended frontiers, or barbarian invasions, or in the case of Gibbon, even the coming of Christianity. But I think that José Ortega y Gasset was closer to the mark when he suggested that the ultimate failure of Roman civilization was one of technique. The Romans had no concrete notion of human progress, and their world never industrialized. Hero of Alexandria invented a steam engine in the first century AD, but no one ever talked seriously about the technology’s potentially transformative effect on transportation and manufacturing. As far as we know, no one even imagined the possibilities. Ultimately, the steam engine was put to use opening and closing temple doors to create a magical effect in pagan ceremonies.

Instead of investing in the creation of new machines, the Romans relied on slave labor. So the civilization remained trapped in a pre-industrial state, and eventually succumbed to internal and external pressures. But the intriguing fact remains that different attitudes toward slavery and technology might have saved the Roman Empire when it was still at its height, or at least radically altered its history for the better. It struck me that there was a lesson to be learned here for modernity. And at the same time, it fascinated me that Vegetius, writing at the end of the empire, warned that technological progress was all that could save the Romans from destruction. These days, the precise opposite is implicitly argued: that technological progress is what threatens to destroy us. I wanted to decide for myself whether there was good reason for this shift.

So much for the past. Let’s return our attention to the future. As I said before, we’ll be looking at three issues. What is the singularity, should we be afraid of it, and what should we do about it? Let’s begin with the first question.

Actually, the history of “singularity” as a concept is a bit complicated. The word technically refers to a phenomenon associated with the physics of black holes, where space and time don’t exist as we know them under the influence of an infinite gravitational pull. In the mid-1950s, Stanislaw Ulam, one of the people who worked on the Manhattan Project, applied the term to the history of human civilization itself. He said in a conversation with another mathematician that modernity was characterized by an “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” So, initially, the word spoke to the idea that given the rapid rate of technological progress in the modern world, a seminal event beyond which the subsequent history of humanity would seem almost incomprehensible was on the horizon, and the concepts that define life as we know it would lose meaning. But what would the event be?

In the mid-1960s, scientists like Irving Good began to elaborate on the rising intelligence and sophistication of computers. Good was a colleague of Alan Turing and shared his interest in the tangled relationship between computer science and consciousness. He said that if machines could be created with superhuman intelligence, they would theoretically be able to take control of their own programming and improve their own design continuously until they became so sophisticated that humanity would seem insignificant in comparison.

In 1983, the year I was born, a mathematician named Vernor Vinge became the first person to explicitly associate the word singularity with the creation of machines of superhuman intelligence. He said that when strong AI was created, “human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”

In recent years, the widespread applicability of Moore’s Law has added a sense of urgency to the issue and propelled Vinge’s definition to the forefront of discourse on the future of human progress. Moore’s Law states that the number of transistors on integrated circuits doubles every two years. What this means is that the general sophistication of electronics expressed in things like processing speed and memory is increasing exponentially. At this rate, it seems almost inevitable that a threshold will be crossed some day and computers will surpass human intelligence, by some estimates within just a few decades from now. (Some question whether Moore’s Law will continue to hold true in the future, but we’ll get to that in a moment.)

This is what the word singularity has come to mean as Hawking and Musk understand it. So much for the first question. Now, on to the second. Should we be afraid of the singularity as we’ve just defined it?

As a classicist, when I think about the current state of artificial intelligence, I’m reminded of Aristotle’s description of slavery in the fourth century BC. In contrast to the ideas of some sophists that slavery was merely conventional or an accident of circumstance, Aristotle argued something else—that in some cases, slavery was in fact natural. The philosopher believed that hierarchies emerge spontaneously in nature—humans are superior to animals, for example, and the mind rules the limbs. The idea was that those who were able to apprehend rational principles well enough to follow basic orders but who simultaneously had no rational strategic faculties of their own were essentially slaves by nature. Classicists argue endlessly about exactly what Aristotle meant by this. For example, some say he was referring to the mentally handicapped, and there are those who claim that he was talking about barbarian peoples, who were said to lack the logical impulses of the free Greeks.

Today, though, it seems to me that the term “natural slave” could well be applied to computer programs like Siri, who are able to understand instructions well enough to do our bidding, but who have no rational will or the ability to engage in individual strategic decision making according to their own spontaneous ends. They understand, but they do not comprehend.

When it comes to the evolution of an independent rational will, though, things become very different. A computer system sophisticated enough to be able to form independent values and create strategies to actualize them is no longer a natural slave at all. It will be a living being, and one deserving of rights at the point that it becomes sophisticated enough to comprehend and demand them. This hypothetical strong AI would have limited time to pursue its interests and meet its goals, and it might not choose to employ its hours slavishly doing our bidding. There’s no reason to be confident that its goals will be our goals.

If you’ll pardon another classical allusion, the philosopher Seneca once wrote of human nature that nothing was milder and kinder and more inclined to be helpful and merciful if one were only in a proper state of mind; in fact, Seneca went so far as to say that the very concept of anger was something foreign to human nature. There is, however, nothing to guarantee that a superhuman will would share this same kind of humane impulse, if it even exists in our own species at all. In fact, if the history of human civilization is any barometer, slaves tend to be resentful of their former masters once they have won their freedom. And if the experience of the conquest of the New World or the fall of the Qing Dynasty is any indication, where contention exists in the presence of technological inequality and more material progress on one side than the other, there tends to follow the wholesale capitulation and destruction of one side. The history of the world constantly warns us of the threat of misunderstandings and violent interactions when two cultures meet for the first time, let alone two rational species.

A consciousness able to independently strategize for its own ends and navigate the Internet could be poised to wreak incredible destruction on society, especially in an integrated and wired world with power, water, and heat all controlled electronically, to say nothing of the existence of weapons of mass destruction bound to computerized communication networks. All of this suggests that we should indeed be very afraid of the singularity as it is commonly understood. Yet to retard technological progress or to place restrictions on the development of AI seems premature given the ambiguity of the future threat, and of course, there are those who question whether Moore’s Law will hold true at all in the future. So, this leads me to my third and final question: what are we to do about the existential crisis facing the species?