Roman Decadence and Complex Systems Theory: Toward a New Teleology of Historical Progress, Collapse, Modernity, and Futurism



Discourse linking the erosion of traditional moral values to political collapse during the eras of the Roman Republic and the Julio-Claudian dynasty nurtured an enduring ideology: just as “capitalism” is often conceptualized as a ubiquitous bogeyman by some contemporary critical theorists, so in antiquity “free love” was cast as a similarly corrosive force, beguiling individuals into losing their allegiance to the state as they succumbed to their petty perversions.[1] This vision of the ancient world, perhaps best epitomized in the moralizing histories of Sallust and Tacitus, has haunted the Western imagination ever since, with “perversion” thematically bound to the idea of social collapse. This final chapter stands as a rejoinder to such notions, defending the practitioners of vilified forms of sexual expression against the ridiculous allegation that they provoked the fall of Rome or will cause modern culture to descend into anarchy, and proposing instead a very different model of historical change in the ancient world.

The idea of Roman history as the cautionary tale of a society where sexual transgression sparked the conflagration of civilization at large has found various forms of expression over time, alarmingly often in modern political contexts. In May 1971, for example, President Nixon complained that All in the Family was promoting homosexuality and declared:

You ever see what happened to the Greeks? Homosexuality destroyed them. Aristotle was homo, we all know that. So was Socrates. The last six Roman emperors were fags. Neither in a public way. You know what happened to the popes? They were layin’ the nuns; that’s been goin’ on for years, centuries. But the Catholic Church went to hell three or four centuries ago. It was homosexual, and it had to be cleaned out. That’s what’s happened to Britain. It happened earlier to France. Let’s look at the strong societies. The Russians. Goddamn, they root ’em out. They don’t let ’em around at all. I don’t know what they do with them. Look at this country. You think the Russians allow dope? Homosexuality, dope, (and) immorality are the enemies of strong societies. That’s why the communists and left-wingers are pushing it. They’re trying to destroy us![2]

Nixon’s bizarre understanding of history is grounded in terror at the idea of society slackening as its individual members kowtow to their personal inclinations rather than to the cisgendered heteronormative patriarchal rules of the game. Depressingly, the idea of Rome falling in the wake of the normalization of homosexuality has remained something of a trope in conservative circles. In his 2012 book America the Beautiful, future presidential candidate Ben Carson wrote that “as a Bible-believing Christian, you might imagine that I would not be a proponent of gay marriage… I believe God loves homosexuals as much as he loves everyone, but if we can redefine marriage as between two men or two women or any other way based on social pressures as opposed to between a man and a woman, we will continue to redefine it in any way that we wish, which is a slippery slope with a disastrous ending, as witnessed in the dramatic fall of the Roman Empire.”[3]


These kinds of cockamamie theories have often been promulgated by “scholars” too. For example, Roberto De Mattei, the deputy head of Italy’s National Research Council and a “prominent…historian,” claimed as recently as 2011 that the “contagion of homosexuality and effeminacy” destroyed Rome after it subdued Carthage, which was apparently “a paradise for homosexuals.”[4] Other scholarly metanarratives about ancient history, love, and historical collapse have proved equally dark and outlandish. Perhaps no schema linking political disintegration and sex seems so misguided in retrospect as the work of Joseph Vogt, whose “Population Decline in the Roman Empire” (1935) and “Race Mixing in the Roman Empire” (1936) popularized Arthur de Gobineau’s theory that racial mixing was responsible for the decline of Rome, with the originally “Aryan” conquerors increasingly diluted by supposedly inferior Semitic and African genetic influences.

In the wake of such revolting models, it is no wonder that reputable historians have increasingly turned away from the construction of grand schemas and have instead accentuated the nuance and complexity of micro-systems, overseeing increasingly specialized and compartmentalized studies of the past (and writing for increasingly small audiences). In 1979, Jean-François Lyotard’s The Postmodern Condition: A Report on Knowledge challenged the underlying validity of sweeping explanatory schemas that fumble to account for complex phenomena like the onset of political disintegration. He characterized the postmodern condition in general as one of skepticism toward metanarratives, rejecting their old-fashioned emphases on “transcendent and universal truth.” According to Lyotard and the critical theorists inspired by his legacy, such metanarratives invariably downplay the naturally existing complexity of various systems, and they are often created and nurtured by oppressive power structures begging to be deconstructed. In short, since grand metanarratives tend to ignore the heterogeneity of the human experience, theories of human progress as historical development toward a specific goal are ultimately deemed inadequate by most of my academic peers.

Nevertheless, I realize that to propose a metanarrative schematizing historical progress in 2017 is to invite a barrage of criticism, since the very definition of progress has been destabilized by critical theory. Even so, the merits of the theoretical approach outlined in this chapter speak for themselves. Its themes stand as a strong retort to millennia of hysterical discourse demonizing non-normative sex as the cause of civilization’s ills. The fact that any given metanarrative can be problematized does not mean that metanarratives in general cannot still be useful as thematic prisms through which to view a complex social process, providing a simplifying yet clarifying lens that can often prove revelatory when it comes to accentuating unexpected dynamics of open-ended questions.[5]

Though this chapter is grounded in original research in complex systems theory, the underlying thesis is not unprecedented. In the eyes of José Ortega y Gasset, for example, the modern world was liberated from a tendency toward chaos and collapse by the inherently progressive nature of technological evolution and its marriage to the scientific method, ensuring an increasingly vibrant standard of living for an increasing number of people over the long run. On his view, a failure of “technique”[6] rather than non-vanilla sex doomed the Roman Empire. In the language of complexity theory, the system tended toward a state of collapse because the pace of technological and scientific progress stalled before it could gain the unstoppable momentum it seemed to attain after the Italian Renaissance. The remainder of this chapter defines these terms, summarizes the themes of complex systems theory, and applies this lens to the subject of “historical progress” in the ancient world. I conclude by proposing falsifiable hypotheses that could test this framework, providing evidence against the idea that either sex or Christianity was at the root of Rome’s collapse.

Defining Terms: Progress and Modernity

[Image: J. M. W. Turner, The Fighting Temeraire tugged to her last Berth to be broken up]

Once writing was invented and the memories of past thinkers could be stored and readily accessed, a long conversation began between generations of brilliant individuals who, in extended discussion and debate with each other’s ghosts, were able to clarify, further and further, humanity’s collective understanding of the empirical characteristics of reality, to say nothing of how its constituent elements could be carved up, recombined, and harnessed to serve utile human ends. Tragically, throughout many periods of history, voices were deliberately excluded from this evolving dialogue and even denied basic education, resulting in a lower quality of debate, less discourse, and slower advancement in the arts and sciences in general.[7]

Be this as it may, once history began (that is, once representational symbolic records came about), a long conversation between ingenious contributors was initiated, leading to what I want to call “progress.” The invention of writing enabled a conversation to be sustained across multiple generations about questions with no obvious answers, but to which meaningful, clarifying contributions could nonetheless be made. Is there a God? How is motion possible? Why does it rain? What is art? How can I maximize the yield of my crops? Different people have different perspectives on these kinds of open-ended questions and diverse ways of schematizing the problems and solutions. Once their perspectives are added to the evolving discourse, these people’s contributions can never be erased. If what they articulated was meaningful and clarifying, it will inspire new micro-discourses in turn. Over the course of time, thousands of meaningful contributions lead inevitably to what I want to define as progress: an increasingly lucid understanding of the nature of reality and how to harness its constituent elements toward (hopefully) good ends such as the alleviation of physical torment. Across the millennia, if enough people are welcomed into the conversation of great minds, millions of meaningful contributions will accumulate, never to be erased; battles will rage in the marketplace of ideas, and only the best ideas (those most bound to meaningful contributions from the perspective of the most people) will survive.

What do I mean by modernity? In this chapter, I mean a condition in which political institutions valuing both autonomy and stability, economic institutions catering to the distribution of “money,” and academic institutions governing scientific research create synergistic platforms where discursive progress can take place. Foucault, of course, reminds us that the influence of institutions on discourse can be oppressive, but in fairness, the great institutions of civilization can also provide stages upon which meaningful contributors interact with one another and usher in an accelerating, exponentially growing rate of progress.

According to the teleology of modernity as imagined in this chapter, and contrary to the idea that most premodern Iron Age civilizations were fundamentally similar in nature, I will argue that a formative moment for the West took place in the polytheistic, “democratic” civilizations of Greece, Italy, and Asia Minor, and not in the monotheistic or monarchic contexts of other civilizations. I will also suggest that the medieval contribution to modernity is in some ways overstated in contemporary scholarship, though the preservation of ancient knowledge and the creation of the university system would of course contribute immeasurably to the synergy between academic, political, and economic institutions that this chapter associates with modernity.

Complex Systems Theory and Historical Change


According to complex systems theory, certain events, such as rises and declines in the number of living species, unfold according to a process of punctuated equilibrium, with spurts of sudden advancement or collapse associated with changes in the organisms’ relationships to their environment. The rule of the day is long intermediate periods of stable predictability interrupted by sudden catastrophic plunges, then a series of unpredictable oscillations before a new homeostatic balance is reached. I want to suggest that a similar lens can be applied to the process of historical change in the form of political collapse (the elimination of old institutions and the leadership roles associated with them) and reconsolidation (the creation of new institutions and the subsequent rise of novel opportunities for political dominance by new factions). The system can be conceptualized as a zero-sum game for power expressed in the form of individual “players” scrambling to attain limited institutional positions; over time, individuals maneuver and form alliances to gain such positions, and preexisting hierarchies can be upset by changing environmental conditions.

Complex systems theory is an emergent area of scientific investigation. While chaos theory, a subset of the general field of complexity, has been enriched with quantitative theorems since the emergence of sophisticated computer technology in the 1970s, the study of complexity as a broad principle is as yet largely limited to qualitative descriptions of the dynamics of non-linear systems marked by sensitive dependence on initial conditions. In my opinion, these qualitative descriptions, while frustrating to mathematicians seeking specific formulae to describe the evolution of complex systems, are in fact an ideal prism through which to view the periodic transformations of civilization without reducing the infinite nuances of the phenomena involved to anything analogous to a neat set of simple rules. Fundamentally, to comprehend the behavior of a non-linear system, one must in principle examine the system as a whole and not merely investigate its parts in isolation. For this reason, a description of change over time in a civilization demands a somewhat sweeping chronological approach, whatever the detractors of metanarratives in history might say. Antiquity uniquely provides us with several useful examples of cultural evolution over whole millennia.

The essential idea of complex systems theory is that the interactions of individual parts within a whole can result in so-called self-organized criticality. This is to say that the changing relationships between the diverse constituent elements of a complex system can spontaneously result in great changes in the whole, potentially characterized by radically distinct emergent properties. The complex whole exists in a fragile equilibrium at a “critical state” on the “edge of chaos.” Changing environmental factors can tip aspects of the complex system into chaos itself through “cascading events,” resulting in the sudden onset of turbulence, tumult, and disorder. Eventually, according to chaos theory, the complex system should settle into new points of equilibrium rather than simply collapsing altogether: chaos is turbulent and unpredictable, but it is not synonymous with a complete and total breakdown of order. The new equilibrium, however, similarly exists at a critical point on the “edge of chaos” until new environmental forces again tip it toward chaos and the eventual emergence of a new state of homeostasis, itself radically divergent from the preceding initial conditions. The entire process is one of punctuated equilibrium. By way of analogy, imagine a graph showing exponential growth, a period of stagnation, and then either collapse or a resumption of growth; the horizontal axis would be time, and the vertical axis some measure of the level of progress (which I suggest could be measured in such ways as surviving written records per year, patents produced per year, deaths by disease per year, institutional roles available per year, etc.).
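The pattern described above, in which a single deterministic rule yields equilibrium, oscillation, or chaos depending on a shifting "environmental" parameter, can be illustrated with the logistic map, the textbook toy example from chaos theory. The sketch below is purely illustrative and models no historical process; the parameter values and classification thresholds are my own choices for demonstration:

```python
# The logistic map x -> r * x * (1 - x): a minimal deterministic system whose
# long-run behavior flips between equilibrium, oscillation, and chaos as the
# single "environmental" parameter r is varied. Illustrative toy only.

def logistic_trajectory(r, x0=0.5, steps=200, keep=8):
    """Iterate the logistic map `steps` times; return the last `keep` states."""
    x = x0
    history = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        history.append(x)
    return history[-keep:]

for r in (2.8, 3.2, 3.9):
    tail = logistic_trajectory(r)
    spread = max(tail) - min(tail)
    distinct = len({round(v, 6) for v in tail})
    if spread < 1e-6:
        regime = "stable equilibrium"      # all recent states identical
    elif distinct <= 4:
        regime = "periodic oscillation"    # a short repeating cycle
    else:
        regime = "chaotic"                 # aperiodic, never settling
    print(f"r = {r}: {regime} (spread of last {len(tail)} states: {spread:.4f})")
```

At r = 2.8 the map settles into a fixed point, at r = 3.2 into a two-value oscillation, and at r = 3.9 into aperiodic chaos, even though the governing rule never changes: a loose analogue of a system tipped across the "edge of chaos" by a small environmental shift.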

According to information systems theory, the emergence of chaos can result from exceedingly slight shifts in environmental forces, minutiae like the emperor Claudius’ choice of a successor or the unpredictable migrations of whole barbarian tribes. Such forces precipitate the rapid emergence of unpredictable, fast-changing sets of information that can overwhelm traditional governmental structures and contribute ever more to a slide toward chaotic breakdown. Nevertheless, according to chaos theory, this breakdown should not be complete, but rather characterized by the emergence of new equilibrium points which are always themselves on the edge of chaos. This process perhaps explains phenomena like the restoration of imperial hegemony in the form of the “Dominate” in the late third century AD after a period of civil war, the permanent splitting of the empire into eastern and western regions of governance, and finally the tripartite division of the Mediterranean region into Western European, Byzantine, and Muslim spheres of influence. We can think about the history of the Roman Empire as a narrative of punctuated equilibrium; during eras of “chaos,” the government’s individual efforts to restore the old order yielded diminishing returns, reflecting the theories of Joseph Tainter, which prove clarifying precisely when such dynamics come into play.[8]

In my opinion, the question of why certain eras are characterized by such diminishing returns has everything to do with the emergence of chaotic patterns complicating previous states of equilibrium until a new homeostatic balance is eventually reached, potentially far less complex than the initial system. The old ways of carving up and dividing resources are upset by demographic and environmental changes and shifting cultural expectations. During periods of turbulence associated with the onset of chaos, complex systems whose central organizing structures are burdened by an overflow of information tend to disintegrate; whether organized as a multiparty system, a monopoly by a single party, or a dual-party system, old organizational structures built to accommodate old-fashioned flows of predictable information quickly become outmoded. New factions rapidly form. However, as any single faction gains the upper hand, it is in the interest of all smaller factions to join together against it. This leads inevitably to bipolar tension: a two-party equilibrium forms, followed by the eventual emergence of either a single-party system or a new multipolar equilibrium, each itself susceptible to collapse and always tending toward bipolar cleavages. In this chapter, I will call this the factional nature of political change.
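The balancing dynamic sketched above, in which weaker factions pool against whichever faction currently leads, can also be rendered as a toy simulation. Every rule here (random starting strengths, the leader losing half its strength when outmatched, the two weakest allies merging after a successful coalition) is an assumption of my own, invented purely to illustrate how such balancing tends to consolidate a crowded field toward a bipolar configuration:

```python
import random

def balance_of_power(n_factions=8, rounds=50, seed=42):
    """Toy model: factions hold random strengths; each round, the weaker
    factions band together against the current leader. If their combined
    strength exceeds the leader's, the leader is cut down and the two
    weakest allies merge, consolidating the field. Returns the faction
    count over time. Illustrative only; all rules are invented."""
    rng = random.Random(seed)
    strengths = [rng.uniform(1.0, 10.0) for _ in range(n_factions)]
    history = [len(strengths)]
    for _ in range(rounds):
        if len(strengths) <= 2:            # bipolar configuration reached
            break
        strengths.sort()
        leader = strengths[-1]
        coalition = sum(strengths[:-1])    # everyone else bands together
        if coalition > leader:
            strengths[-1] = leader * 0.5   # leader is cut down to size
            a = strengths.pop(0)           # the two weakest allies merge
            b = strengths.pop(0)
            strengths.append(a + b)
        else:
            strengths[-1] = leader * 1.1   # unopposed hegemon consolidates
        history.append(len(strengths))
    return history

print(balance_of_power())
```

With eight starting factions, each round of successful balancing merges two allies and halves the leader, so the field consolidates step by step toward two rival blocs, the bipolar cleavage described above.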

Insights from chaos theory can help to make sense of the largest questions in world history from a fascinating new perspective. Turbulence and transformation are the order of the day rather than decline and fall. The unexpected appearance of chaos belies the linear biases of traditional models of history. Violent fluctuations and oscillations cannot be casually dismissed by mono-causal theories; they are in fact a fundamental aspect of any system at a critical point on the edge of chaos.

As mentioned before, there is currently a decided movement among historians in the direction of micro-history. But there is nevertheless great value in a global approach to world history and the exploration of supposed periods of “decadence.” Broadly speaking, the very nature of causation itself is more complex than contemporary historiographical accounts of things like the “decline and fall” of the Roman Empire suggest.

In other words, a core set of beliefs in the field of history about the nature of complexity and causation is ultimately incorrect. Traditionally, it is assumed that simple systems behave in simple ways, and that as long as such systems can be reduced to a few perfectly understood deterministic rules, their long-term behavior should be stable and predictable; it is likewise asserted that complex behavior implies complex causes, and that a system that is visibly unstable, unpredictable, or out of control must be governed by a multitude of independent components or subject to random external influences. Now, however, physicists, mathematicians, biologists, and astronomers have converged on a new set of ideas: simple systems can give rise to complex behavior, and complex systems can give rise to simple behavior. Moreover, contrary to the idea that the stories of the rise and fall of individual civilizations are fundamentally unique, it is now believed that the laws of complexity hold universally, whatever the constituent parts of the system.

Questions about causation need to be approached probabilistically (what forces worked to raise the odds that a specific outcome took place, and to what degree did they raise the likelihood of the outcome?) and inclusively (what diversity of explanations can help to explain an outcome rather than a mono-causal model?). The following three sections illustrate this approach toward describing history.

Mesopotamia, Egypt, Israel and Phoenicia Versus the World of the Poleis


In the beginning was the Stone Age. It lasted for an obscene number of millennia. A rock is only so sharp and strong, and during agonizingly long eons, humankind struggled to carve up and recombine the constituent components of nature, powerless to harness them toward useful and progressive ends. But then civilization began in Mesopotamia, Egypt, India, and China beside great rivers, where agricultural surplus could be harnessed by the sundry institutions required to organize labor. The use of bronze was fundamental to this shift because it enabled the creation of objects like axes, ploughs, and swords, tools that could not be chiseled out of rock. Such devices enabled nature to be carved up more efficiently, leading to further surplus and the possibility of a leisured class devoted to discursive inquiry rather than the brute struggle to survive. Now progress was born, and “history” proper began with the invention of writing. The pace of technological progress was incredible, particularly in the intercompetitive monarchic city-states of Mesopotamia, where the boat, writing, and the wheel were pioneered. I believe that the decentralization of the region was key to its innovativeness: whenever one city-state created a new invention, the others either had to adapt and improve it for their own ends or lose their territory and be winnowed out.[9]

Ultimately, however, these early Bronze Age civilizations did not evolve institutions in which politics, economics, and academics lined up to create modernistic synergy along the radical lines later seen in Greece, Italy, and Asia Minor. After the great burst of inventiveness around the time that bronze was first forged, there was a sudden stagnation. In other words, a kind of equilibrium was reached after exponential growth (which could be measured according to such factors as the number of inventions created per century, the number of new cities founded, etc.). The reason is that the very institutions that created the platforms upon which meaningful contributors acted became oppressive, forming rigid class structures that excluded voices from discourse and emphasized the creation of rules by which the goodies could be monopolized by the elite.[10] Subsequently, authoritarianism, rigid class structures, and oppressively dogmatic religious institutions barred, exploited, and excluded people from contributing to discourse (for example, everyone but elite males). This inherently slowed progress, since the voices of geniuses were silenced: there were thousands of anonymous women who never got the chance to be Aristotles, though they had the capacity to do so.

Between the age of the pyramids and the birth of Thales of Miletus there extended a tragic 2,000 years, approximately the length of time separating us from Cleopatra. But then iron came, and a new age dawned, with a sudden rise in progress. When we mastered iron, we literally forged a new future for ourselves: stronger tools that were more productive, resulting in more utility (stronger armies, more crops yielded per acre, etc.). This rise in productivity allowed the goodies to be spread beyond the traditional elites, and suddenly new classes began to arise. These new classes could for the first time contribute to the development of political, economic, and academic institutions, leading to more progress. This promise would prove most fully actualized in the Greco-Roman-Semitic world.[11]

The cultures of the poleis of Greece, Italy, and Asia Minor did not have religious institutions strong enough to sanction or ban provocative debate about the nature of reality. At the same time, the inherent values of their governments were grounded in the celebration of debate, equality, and the importance of every man’s contribution. The city-states were fiercely agonistic, yet their people spoke dialects of the same language, so everyone could simultaneously compete with and imitate each other’s innovations. Finally, the society was composed of disparate, far-flung colonies inherently in competition with the societies around them and forced to govern themselves without the help of age-old institutions. One man in this society declared that everything was made of water. Another man questioned the hypothesis of Thales. This led to a debate that progressed toward proto-scientific notions. The origins of “modernity” were not bound to be found in Greece, Italy, and Asia Minor, but they were probabilistically likely to be brought into being there thanks to the institutional features of those territories, to say nothing of their geographically central location on the easily accessible Mediterranean Sea. With iron tools and metal coins, utile goods could be distributed to more people than ever before, and more and more brilliant positive contributors could make a difference to their communities.

Greece and Italy occupy a culturally diverse corner of the Mediterranean Sea near where one group developed the alphabet (the Phoenicians), another pioneered centralized bureaucratic organization (the Egyptians), another developed coined money (the Lydians), and still another refined ideas about monotheism (the Jews), making the area a hodge-podge including the voices of many different peoples with many different perspectives. Ultimately, the institutions of the Greco-Roman world created a unique situation in which political, economic, and academic institutions could welcome a greater plurality of voices with a greater variety of ideas than in other contemporary states. Compare the situation to that in other ancient cultures:

The Egyptians: They essentially invented the idea of the centralized monarchic state and refined techniques of massive stone architecture in concert with the Mesopotamians. But their 3,000-year-old civilization was one of the least progressive in the history of the planet despite the enormous productivity of the land of Egypt itself. This is because political, economic, and academic institutions all aligned to impoverish the vast majority of the country and retain the goodies for a small minority who monopolized all education (it took years to learn hieroglyphs, hardly feasible if you were a peasant). It boggles the mind to think of all the women, non-elites, and foreigners deliberately excluded from discourse, many of them extraordinary thinkers! One of the few examples of real political innovation took place under an elite despot (Akhenaten), and his legacy of “novelty” in questioning whether there was one god or many was vilified forever afterward in Egyptian lore. Tellingly, however, when Greco-Roman civilization came to Egypt and Alexandria was established as a polis, it became the greatest center of science in the ancient world because it welcomed a cosmopolitan congregation of voices debating the nature of reality in a way never possible before, and all in the presence of the bounty of the Nile River, which could feed enough people to provide a great deal of leisure time. Even women were sometimes allowed to participate in this academic discourse.

The Jews: Arguably, Jews as a whole have made the most meaningful individual contributions to human progress of any people. But I think that ideas about religion and politics in ancient Judaea made it probabilistically much less likely that a “scientific revolution” would take place there rather than in the world of the poleis of Greece, Italy, and Asia Minor. This is because more people and more ideas were inherently excluded from discourse in Jewish culture due to ideas about politics and religion, leading to less internal progress. In Jewish culture, there was no place for discourse questioning whether certain elements of the Law could be broken (though debates about the meaning of the Law could, and did, take place, admittedly showing that what superficially seems dogmatic can often run much deeper). A rigid priestly caste monopolized power and education, meaning that many potentially brilliant voices went uneducated while a small group of individuals monopolized learning for themselves. Much scientific inquiry was bound to discourse on the Law and its meaning, with a neglect of areas of study about the atomic nature of reality. After all, the Bible inherently answered certain kinds of questions (“God made it that way”). The Jewish idea that God chose them, loved them, and had a special covenant with them sowed the seeds that would one day grow into the concept that there is fundamental goodness in the world and that all people are inherently worthy of redemption and made in God’s image. Yet science and philosophy as we now know them began as a branch of Hellenic paganism and not of monotheistic Judaism.

The Phoenicians: Of the other Mediterranean civilizations, the Phoenicians are the most similar to the Greeks. They lived in mercantile-oriented small city-states; there was no single governing monarch; the people were seafaring and polytheistic; and they established colonies in the Western Mediterranean. They were also an inventive culture, pioneering glass-making, dye-making, and, most importantly of all, the alphabet, which not only hastened economic transactions but also made education more readily available to more people than ever before, and hence led to great material progress. There were even institutions resembling the ecclesia or comitia of the Greco-Roman world.

Yet while the Phoenicians were great explorers and agronomists, there seems to have been no tradition of philosophical discourse and debate in their society. Why? One reason our record is so thin is that the Romans annihilated Carthage and its books, but we have to look deeper than this: there were no famous Phoenician philosophers (though Zeno of Citium may have been of remote Phoenician ancestry). We must look to religion, economics, and politics, I think, to say nothing of social attitudes toward abstract philosophizing versus practical knowledge. The Canaanite form of polytheism was one of the world’s most brutal, at some points in history evidently mandating child sacrifice even among elites during times of hardship; this more than anything shows a brutal commitment to religious principle at the expense of reason, for all of the institution’s social-leveling power. The Phoenicians formed narrow mercantile oligarchies ruling over polyglot city-states where the bulk of the non-Punic population was denied political rights. In the Phoenician homeland, where there was the most scope for “equality,” overmighty empires like the Assyrians and Persians conquered the cities and set up restrictions to ensure that society was oriented toward the production of ships and money, not knowledge. Culturally, practical knowledge was valued much more than silly, impractical “abstraction,” which was conceptualized as something fundamentally Greek.

Because we cannot yet rerun history as a simulation, it is impossible to test hypotheses about what might have happened in other times, places, and contexts. But the fact remains that in the history of our world, the Greece-Italy-Asia Minor axis created a certain synergy associated with democracy, empiricism, and coined money that proved hugely influential. Its political, economic, and academic institutions were inherently more inclusive of voices and ideas than those of their Mediterranean counterparts, and this made scientific progress more likely. The fruits of that progress constitute the core of Classics.

From the Grandeur That Was Rome to the Squalor of the Dark Ages


Approximately eight hundred years separate Thales of Miletus from the height of activity in the Library of Alexandria under the early Roman emperors. Within that span, Aristarchus became the first to propose heliocentrism and Hero invented the steam engine; early “computers” like the Antikythera Mechanism boasted the sophistication of eighteenth-century Swiss clocks.

Aristotle’s work had long set the stage for empiricism and the development of the scientific method. “Modernity” seemed to be on the cusp of something great. Then, the unexpected took place. Amid a perfect storm of other forces, the repercussions of a single man’s unjust crucifixion would reverberate through the centuries—history’s greatest example of the Butterfly Effect in action.

Earlier in this dissertation, I addressed the topic of decadence from the perspective of the common but outmoded belief that sexual perversion was the destabilizing influence in Roman history around the time of Christ. Contrary to the opinions of scholars like Blanshard, I have argued that behavior which might be considered licentious did in fact exist in the Late Republic as a response to changing political and economic conditions, in which the sexual availability of slaves and prostitutes, coupled with the rise of totalitarianism by divine right, upset traditional patterns of morality. However, I have also shown that the idea of sexual license itself as a chaotic influence on Roman history rests on mistaking correlation for causation. Free love did not vitiate the Roman Empire. The inadequacy of its cultural hierarchies in the face of the turbulence of history did.

While the study of antiquity is inherently interesting for its own sake, it is perhaps particularly valuable because it represents a long stretch of time in which myriad historical changes took place, with the entire history of the system existing in a kind of metaphorical laboratory. The height of the Roman Empire and its subsequent decline are particularly fascinating because the sophistication of the Mediterranean world ultimately faltered, and the Roman Empire and the barbarian cultures surrounding it finally blended together into a single, largely similar culture. Why did the sophistication of the ancient world lapse so horrifically, and why was the recovery following this collapse so slow? The theory of complex systems provides the answer: the “parochial” elements of the ancient economy described by historians like Moses Finley ultimately hindered the development of historical momentum toward industrialization until the entire system collapsed over the edge of chaos into increasingly less complex states of equilibrium. Society was transformed from the single-party domination of the Principate to the multiparty chaos of the Dominate; then, society re-stabilized as the two-party Eastern and Western Roman Empires before the Western portion disintegrated and the Mediterranean was divided among the three civilizations of Islam, Western Europe, and Byzantium. The periods between the eras of stable hierarchies (the second, fifth, and seventh centuries) are the ones associated with the onset of chaos; the conclusion of this chapter provides a means of testing the thesis.

Mono-causal explanations for Roman decadence such as “perversion” are ultimately fruitless. In fact, the era of the greatest sexual license in Roman history is the one of its greatest economic and territorial expansion. Instead, complexity theory provides a very different answer to the question of why the Republic fell and the Principate replaced it: a plethora of forces pushed the old multipolar equilibrium represented by the checks and balances of the earlier Republic and its feuding dynasts over the so-called “edge of chaos” into a simpler new “homeostatic state” marked by the monopolar despotism of a single family, very much like those of their Hellenistic neighbors (and hence less complex than the Republic’s unique political system, which had stood artificially distinct from the institutions of the civilizations around it).[12] The history of the transitions along the way is a classic lesson in the factional dynamics of the organization of power, shifting between single-party and multi-party modes of organization with a marked tendency toward dualism: hence we see patrician versus plebeian, optimates versus populares, cives versus socii, Marians versus Sullans, the dictatorship of Sulla, the First Triumvirate, Caesarians versus Pompeiians, the dictatorship of Caesar, the Second Triumvirate, Octavian versus Cleopatra, and the ultimate rise of the dictatorship of the Julio-Claudians, the union of the two most influential families of the late Republic.

We have seen that throughout history, changing relationships between humans and the metals with which they forged their tools contributed to chaotic transitions and the emergence of new forms of social organization accommodating increasing numbers of people in dominant roles. In the late Roman Republic, however, as the Republic ripened (or rotted, depending on one’s perspective) into the Principate, it was not a change in humans’ relationship to metals but rather an information overflow associated with the repercussions of Roman imperialism that destabilized the national government to the point of civil war. The autocratic monopolar system which followed was both simpler (less complex) than the earlier multipolar arrangement and far more similar to the surrounding civilizations (organized under monarchic rule by divine right), as if by a process of osmosis which diluted the institutions of the Republic. By the same token, when the Western Empire collapsed, the cultures on either side of the Rhine and Danube became fundamentally more similar: Christian, de-urbanized, and dominated politically by Germanic tribes. The tortured intricacies of the late Dominate collapsed into simpler states more similar than dissimilar to the civilizations surrounding them.

Of course, the Middle Ages was not a single Dark Age, but we have to admit that progress slowed for some time. The period as a whole in the West can best be defined as an age of stagnation and decline at the end of the Iron Age, one that eventually settled into an equilibrium and then began an upward trend again after the crisis of the Black Death created another pivot point on the edge of chaos at the end of the period. According to my formula, fewer voices must have resulted in less discourse for some time, and less discourse must have resulted in less progress in the form of meaningful contributions to questions about the nature of reality. Institutions must have become less welcoming of difference, more oppressive, and more oriented toward self-preservation than toward the creation of meaningful platforms for debate. At the same time, there must have been no significant new advances in metallurgy to radically improve the potential for creating new sources of utility to fuel the development of new social classes. Medievalists rightly resent that classicists historically derided their era’s contributions, and they are right to emphasize that the period they love was in some ways a dynamic one; but it is important to understand that the period between the fall of Rome and 1000 AD really was a Dark Age despite some cultural continuity. It serves as a sobering lesson for all ages: the momentum of material and technical progress can never be taken for granted.

According to complex systems theory, there existed at least a small probability that the Roman Empire might have industrialized at its pivot point c. 180 AD. Why did it fail to do so? Was it due to a penchant for licentious sex? How can historians even begin to answer these kinds of counterfactual questions in the first place?

Rather than branding ancient cities fundamentally primitive or modern in nature in the tradition of Max Weber, I want to examine the various forces working for and against the increasing specialization and application of productive technologies in the Roman Empire. My conclusion is that while aspects of the ancient Roman economy were in fact quite “modernizing” and might have led to a technological revolution under different circumstances, there existed sufficient forces in society hindering the momentum of material progress and rendering an industrial revolution in antiquity far less likely than one in late eighteenth-century Britain.

Of all eras of world history, the period of the Roman Empire boasted many of the prerequisites for a commercial and industrial revolution. The Roman world contained some sixty to one hundred million inhabitants living in largely peaceful conditions. A single currency was employed throughout the Mediterranean, disseminated by bankers and professional financiers. The very existence of the Mediterranean as a great central lake facilitated trade and communication, as did the existence of a fine road system overseen by the policing power of the Roman army. Sprawling urban centers like Rome and Alexandria boasted populations in the hundreds of thousands, their residents demanding a steady stream of material products to sustain themselves. Great opportunities existed to serve increasingly globalized markets. At the same time, individual merchants enjoyed a set of circumstances marked by relatively free trade and the capacity to make massive amounts of money by participating in the commercial life of the Empire. In places like Alexandria, intellectual elites cooperated to pioneer potentially world-changing technologies like Hero’s rudimentary steam engine. From the perspective of complex systems theory, all of these forces might have tipped the Roman Empire into a state of industrialization, and the “proto-modernity” of several aspects of the ancient world cannot be denied. As I suggested earlier, the world of the poleis is where institutional “modernity” was born, then refined and extended to the West by the Romans.

Nevertheless, several factors existed rendering an industrial revolution unlikely—the high Roman Empire was an era of equilibrium and eventually stagnation in world affairs. All of the following elements, from the perspective of a computer simulation, would lower the probability of progress and raise the probability of stagnation.

The language required to describe and conceptualize economic growth was relatively rudimentary. The cumbersome system of Roman numerals rendered mathematical calculations arduous, hindering the development of practices like double-entry bookkeeping, which is virtually unattested in antiquity. At the same time, ancient manuals on the field of “economics” usually emphasized the importance of maintaining the self-sufficiency of plantations, with expenditures kept lower than income. This stands in stark contrast to the later emphases of early modern economic theorists, who advocated catering to the rules of supply and demand to maximize fiscal profits. Ancient economic theorists downplayed the desirability of investment in trade, which was seen as inherently riskier than pooling resources in real estate.

There existed a fundamental bias among the most politically powerful classes against manual labor, commercial investment, and applied technology. Finley exhaustively categorizes these trends in his famous books on the ancient economy. While modern critics are correct to point out that these conservative biases were not necessarily universally felt in Roman society, their existence among the classes with the greatest ability to invest in new material resources surely acted at least in part against the chances for industrialization. In antiquity, slaves, freedmen, and non-citizens were responsible for most economic activity. The political powerlessness of these groups is remarkably conspicuous, particularly when their situation is compared to that of their counterparts in the Middle Ages; in medieval Florence, for example, membership in a trade guild was a prerequisite for political participation in the state.

In the late Republic, free enterprise and what Weber called “merchant capitalism” were at their height. Limited-liability joint stock companies even existed in the form of conglomerates of entrepreneurs who pooled resources to win the rights to tax farm various provinces. In the early Roman Empire, however, there existed an increasing preference for the use of appointed officials for such activities, and the legal underpinnings of corporate cooperation failed to develop further. Thus, there existed no overlap between the era of greatest commercial sophistication and freedom (the late Republic) and the era of greatest economic expansion and opportunity (the early Empire).

There existed several bars to the application of new technologies. While current archeological work admittedly points to the widespread implementation of certain technologies (windmills, etc.), there existed no patent law in Roman antiquity to spur technological innovation. In fact, narratives exist of Roman emperors actively discouraging technological progress for fear that mechanization would result in unemployment, and hence social instability. For all of its revolutionary potential, Hero’s steam engine was viewed more as a toy than an implement of social change. Techniques of metallurgy stagnated in an era of universal peace, as did the need to create new weaponry for the sake of a competitive edge over enemies. At the same time, the omnipresence of slavery similarly served to deter investment in new machinery, since investments in slaves and real estate promised the safest returns.

The very unity of the Mediterranean world stifled innovation. Consider the example of Roman Lusitania. Merchants in that province had access to the entirety of the Mediterranean basin to sell their wares. In the Middle Ages, however, geographical fragmentation denied the state of “Portugal” a Mediterranean coast. Thus, merchants were forced to turn to the Atlantic Ocean in hopes of finding new products and markets, spurring the development of radically new shipping technologies. No such incentives existed in the unified, relatively non-competitive world of Roman antiquity.

The existence of amphitheaters drained economic resources, particularly in the West (which, interestingly, had far more amphitheaters than the Roman East, though the East was traditionally more economically vigorous and survived much longer). Rather than investing in economically beneficial infrastructure, local elites poured money into the celebration of gladiatorial games, importing professional fighters and exotic beasts to satiate the interests of the populace; despite spurring limited economic activity, these resources were ultimately wasted. In the same way, grain doles slowed economic growth, as major metropolitan centers spent most of their resources on defense and on feeding an urban populace that remained permanently unproductive. In my opinion, these historical forces lend some validity to Weber’s insistence on the “parasitic” character of ancient cities, which generally consumed resources from the countryside rather than producing materials for redistribution to outlying markets (though exceptions to this rule admittedly existed). At the same time, the Romans’ emphasis on distributing the bounty of the government back to the people, and the emperor’s promotion of fun on public holidays, were in my view admirable features of their culture, had the spectacles not caused so much pain and heartbreak to their victims.

There existed virtually no notion of “historical progress” in the Roman Empire. Although many at least sensed that the order of the Roman world was preferable to barbarism, major historians advocated cyclical views of history, or the notion that the true “Golden Age” lay in the distant past, before urbanization and the use of tools corrupted humankind’s primordial naïveté. With the civilization at large devoid of the sense that the world could actively be improved over time through the evolution and application of radical new technologies, the momentum of increasing material progress was actively hindered.
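Taken together, the factors above can be read as independent dampers on the probability of an industrial takeoff. The following toy Monte Carlo simulation is a purely illustrative sketch of that reading: the factor names paraphrase this chapter’s list, but the numerical weights are my own assumptions, not measured historical quantities.

```python
import random

# Hypothetical inhibiting factors with illustrative weights (assumptions,
# not data): each factor independently multiplies down the chance that a
# simulated "run" of the Roman economy tips into industrialization.
FACTORS = {
    "rudimentary economic language": 0.6,
    "elite bias against commerce and labor": 0.5,
    "underdeveloped corporate law": 0.7,
    "no patent incentives for innovation": 0.5,
    "lack of inter-state competition": 0.6,
    "no notion of historical progress": 0.5,
}

def industrializes(base_chance=0.9, rng=random):
    """One simulated history: begin with a generous base chance, then let
    each inhibiting factor independently scale the probability down."""
    p = base_chance
    for weight in FACTORS.values():
        p *= weight
    return rng.random() < p

random.seed(42)  # reproducible runs
runs = 100_000
successes = sum(industrializes() for _ in range(runs))
print(f"industrialized in {successes / runs:.1%} of simulated histories")
```

Under these invented weights, industrialization occurs in only about three percent of simulated histories, even though every individual factor leaves it possible. The point of the sketch is structural, not numerical: many modest, independent hindrances compound multiplicatively into near-impossibility.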

According to my model of the Roman Empire as a complex system existing on the edge of chaos, ancient civilization was able to survive for a remarkably long period of time at a “critical point” of great material prosperity so long as the army remained loyal to the emperor and the citizens of the realm agreed to pay the taxes required to support its infrastructure. In terms of the punctuated equilibrium of progress, it was an era of equilibrium after one of growth. Broadly speaking, the Empire can be compared to a snowball that maintained its structural consistency so long as it continued to roll, but began to melt when its journey down the hill came to an end. In the same way, so long as the Roman army was able to incorporate new territory into the Empire and redistribute booty in the form of slaves and other material resources, the civilization was able to subsist at the edge of chaos despite its lack of internal momentum toward industrialization. However, once the civilization’s territorial growth came to an end, the costs of maintaining the defenses of the sprawling realm proved immense, and the system became remarkably unstable. As instability led to the emergence of chaos, efforts by the emperors to preserve the structure of their civilization resulted (as Tainter suggests) in diminishing returns on investments in social complexity. Why is this the case? In the long term, chaos theory suggests that the system was bound to collapse into new states of less sophisticated equilibria unless the momentum of scientific and technological progress overtook the abiding forces of stagnation and “decadence” mentioned throughout the dissertation. The story of the “decline and fall of the Roman Empire” is actually a tale of turbulent dynamics upsetting the ancient society and resulting in a new homeostasis similar to the old order in some ways, yet fundamentally distinct in others.

According to world systems theory, the fall of the Roman Empire cannot be understood as an isolated phenomenon. The third to seventh centuries AD were in fact marked by cascading patterns of turbulence throughout all of Eurasia unleashed by the outbreak of plague, environmental degradation, and aggressive migratory patterns by individuals formerly content (or compelled) to exist on the fringes of civilization. After the period of the Antonine Plague, emperors became increasingly reliant on marginalized ethnic groups and finally barbarian hordes to man the Roman army. This resulted in a massive influx of foreigners into the empire with only marginal allegiances to the state, ever ready to resort to violence for the sake of promoting the interests of a local warlord. At the same time, as uncivilized tribes across Eurasia spilled into each other’s territory, barbarian groups saw their ancestral lands taken from them and were compelled to venture into new countries. The prosperous civilized territories surrounding the Mediterranean seemed increasingly attractive to such immigrants. Migrations were associated with the sacking of major urban centers, terrorizing the local populace into retreating into the countryside and destroying the traditional bases of Roman tax collection.

Chaos theory suggests that the onset of chaos produces more information than a stable state of equilibrium; for example, each new digit in the repeating pattern 121212121… carries less new information than each new digit in the chaotic, seemingly random series 173749724… As the Roman Empire slipped over the edge of chaos, the central government began to be flooded with information concerning the destruction of cities, the emergence of rebel groups, military disasters, the migratory patterns of barbarians, and the outbreak of diseases. Even as it was burdened by this information overload, it began to lose internal consistency as civil war swept through the empire and loyalty to the central government became increasingly divided. Unlike Han China, where largely ceremonial emperors reigned while a narrow oligarchy of Confucian bureaucrats ruled, Roman dynasties were usually helmed by individual emperors wielding a great deal of personal power. As the empire slid into civil war, the individual charisma of the Roman emperors was increasingly undermined, and the relatively feeble bureaucratic institutions of the central government proved incapable of juggling the dilemmas at hand. To make matters worse, as increasing numbers of would-be emperors attempted to finance their campaigns and new sources of precious metals dried up, massive inflation began to undermine the economy, and several areas of the empire reverted to bartering and trade-in-kind. While traditional historians often point to individual elements of this chaotic breakdown as an explanatory cause for the transformation of Roman society, chaos theory instead suggests that they are all fundamentally interconnected symptoms of a movement over the edge of chaos after a long homeostatic/stable period of self-organized criticality.
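The information claim here can be made concrete with Shannon entropy, which measures the average information per symbol in a sequence. A minimal sketch (the two digit strings are illustrative stand-ins for the periodic and chaotic series mentioned above, not data):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy of a symbol sequence, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

periodic = "12" * 20                                    # "121212..." pattern
irregular = "1737497241935862058271649305718243650917"  # illustrative digits

print(shannon_entropy(periodic))    # 1.0 bit/symbol: each new digit is highly predictable
print(shannon_entropy(irregular))   # ≈ 3.29 bits/symbol, near the log2(10) maximum
```

The repeating pattern needs only one bit per symbol to describe, while the irregular string approaches the theoretical maximum for decimal digits: in exactly this sense, a chaotic stream of reports carries far more information per message than a stable, predictable one.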

The leaders of the Roman Empire were confronted by major problems, and they were in no position to stem the tide of chaos despite their best efforts to do so. Just as chaos theory predicts, however, the system did not collapse entirely overnight, but began to re-solidify at new points of equilibrium according to the creation of new party-systems tending toward bipolar duality. Thus, the dictatorial Roman Dominate replaced the relatively gentle rule of the Principate, as military figures attempted to cement the structure of the collapsing society by imposing mandatory liturgies on local aristocracies who had once given freely in a process of euergetism, requiring children to follow their fathers’ professions, and mandating religious uniformity throughout the empire. This new state of homeostasis, imposed by brute force and driven by an increasingly de-urbanized economy, proved far more precarious than the old order, and unsurprisingly, the system again slid into chaos as the barbarous nations on the fringes of the Roman world created entirely new kingdoms within its borders. After a brief division into four, a division between East and West would prove abiding.

In 1776, Edward Gibbon famously pioneered the view that Christianity was ultimately a symptom of decadence, and one of the principal causes of the collapse of the Roman Empire. He reasoned that its emphasis on peacefulness and passivity vitiated the ancient martial spirit of the Romans, and that its insistence on non-material causation served to hinder the development of the ancient scientific method. Thus, in the tumultuous third, fourth, and fifth centuries AD, thinkers increasingly turned to irresolvable philosophical debates about the nature of divinity rather than taking steps toward the refinement of the scientific method. Eventually, thought was “canonized” by the government, and discourse shut down altogether, relegated to the realm of “commentary” and “copying.”

There is some truth to this narrative. Yet ultimately, I believe that complex systems theory problematizes these claims, to say nothing of the fact that most of the warlike barbarian hordes who overran the provinces of the Roman West were themselves Christian, rendering the idea that the religion necessarily resulted in a state of martial enervation somewhat unconvincing.

First, I plan to explore the historical forces that gave shape to Christianity in the first place from the perspective of complex systems theory. The “Butterfly Effect” is a fundamental principle of chaos theory, which stresses the interdependence of the constituent parts of a complex whole, sensitivity to initial conditions, and the potential for cascading effects. On the most basic level, the life and death of Christ, an obscure carpenter in a backwater of the Roman Empire, had the potential to revolutionize the entire Roman world precisely because that world was a complex system sensitive to the Butterfly Effect. At the same time, the emergence of the idea that humans were naturally sinful served to incentivize parents to baptize their children, since the prospect of sprinkling water over an infant represented a low cost when it came to forestalling the possibility of eternal torture in hell. Moreover, in a world marked by widespread poverty, a philosophical system stressing God’s love of the poor was surely an attractive alternative to the official state religion, which accentuated the worship of brute power. As the structures of Roman government fell into increasing disequilibrium following the Antonine Plague of the late second century, the apocalyptic message of Christianity perhaps seemed increasingly instructive, as did its emphasis on the promise of a better world in the hereafter. Roman culture’s traditional emphasis on exemplarity also likely facilitated the rise of Christianity, as martyrs met their deaths heroically in the face of persecution by the state, ultimately forming a new canon of exemplary figures replacing traditional Roman personae such as Lucretia and Cincinnatus. And the Christians were on to something in their aversion to the ubiquitous violent sexual exploitation permeating ancient society; unfortunately, this intolerance extended toward all elements of human sexuality, throwing out the baby with the bathwater.

In the short term, Gibbon was surely correct that the rise of Christianity led to a loss of momentum in the development of the ancient scientific method due to its emphases on supernatural causation and obedience to the Bible as the literal, unquestionable word of God. In the long term, however, I believe that Christianity in fact represented a major source of power for the West. It embodies one of the reasons that the equilibrium of the Middle Ages, following a period of chaos in the fourteenth and fifteenth centuries, ultimately metamorphosed into a new and more vigorous state of homeostasis in the Renaissance, ripe for a new era of development in the unfolding punctuated equilibrium of discursive progress.

Unlike the situation in the Roman Empire, there existed opportunities for common men and women to become priests and nuns during the Middle Ages, greatly broadening the pool of individuals contributing to intellectual discourse. It must be remembered that great intelligence, and even genius, is randomly distributed. Consequently, given the nature of ancient demographics, it stands to reason that most great minds were either enslaved or members of severely disadvantaged classes with little access to education. The rise of Christianity began to mitigate this problem, adding more knowledgeable voices to scientific discourse.

During the height of the Roman Empire, the greatest intellectual achievements associated with scientific development were associated with the Library of Alexandria. Why was this the case? Uniquely, it provided a centralized infrastructure through which scholars could share ideas, research the best writings of the past, and find rewards for new theories. Unfortunately, such centers were few and far between in the Roman world. However, the rise of medieval universities as schools for studying the Bible enabled numerous such centers to come into being in the long run, greatly facilitating the growth of the scientific method. Unlike in the pagan Roman Empire, there existed major incentives to provide access to such centers of learning, as knowledge of the precise Word of God was a prerequisite to enter heaven. At the same time, these centers often specialized in the copying of ancient texts, broadening their dissemination.

The system of Roman education was largely geared toward rhetoric and debate, emphasizing relativity and a lack of absolute truth. At the same time, during the height of the Roman Empire, it was difficult to enjoy a career devoted to the pursuit of science and literature for its own sake unless one came from an especially affluent social background. The growth of Christian centers of learning altered this state of affairs, providing the possibility of education to more members of society (and hence more geniuses) than ever before. The Church’s emphasis on the possibility of the existence of Truth with a capital T, coupled with the concomitant study of ancient literature emphasizing the rudiments of the scientific method, eventually created a unique synergy paving the way for the achievements of figures such as Copernicus, Galileo, and Descartes.

It seems clear to me that the emergence of Christianity can be explained by complex systems theory as a variation of the unpredictable Butterfly Effect, with the cascading repercussions of Christ’s life and teachings increasingly prevalent throughout all levels of Roman society. As the late Roman Empire succumbed to chaos, the religion’s teachings appeared increasingly attractive to an ever-expanding and devoted core group, who proved unwilling to compromise their major beliefs even in the face of widespread persecution. While Gibbon is perhaps correct that in the short term the rise of the religion slowed the development of the scientific method, in the long term the presence of the Church in Europe served as a major stimulus toward scientific growth, to say nothing of representing a major step forward when it came to social attitudes toward coming to the aid of the poor and helpless.

Historical periodization is, admittedly, a somewhat arbitrary science—thus, for example, some have even hazarded to suggest that the Classical world ended with the fall of Athens at the conclusion of the Peloponnesian War. In my eyes, however, there is great validity to Henri Pirenne’s thesis that the true end of the ancient world took place after the Battle of Tours in 732 AD, which halted the expansion of Muslim armies into Europe. Modern historians have questioned this thesis, suggesting, for example, that it conceptualizes the Islamic World as an Other. However, from the perspective of complex systems theory, 732 AD represents a significant date marked by the creation of a radically new equilibrium in which the Mediterranean was divided into Western European, Byzantine, and Muslim spheres of influence, and the unified system of currency came to an end; fundamentally speaking, the date marks the final and permanent fragmentation of formerly unified economic zones. Formerly, the most stable points of equilibrium involved either the political unity of the entire Mediterranean basin (the Principate and the Dominate) or a division between the Latin speaking West and the Greek speaking East (the Late Roman Empire). Now, for the first time, the economies of Western Europe would be left to develop on their own in a crucible of geographical fragmentation and intense internal competition. A new equilibrium had come about. The new civilization would ultimately give rise to a dynamic culture which, when pushed out of equilibrium over the edge of chaos by the Black Plague and Great Schism, arrived at a new homeostatic state enriched by the discoveries of the Renaissance and the resources of the Americas, empowering it to set forth and conquer the world.

Modernity and Futurism


By the end of the Middle Ages, urbanization had sprung up again, and a fragmented collection of nation-states, loosely descended from the tribes who had inhabited the fallen Roman Empire, competed to make meaningful contributions to ensure cultural survival; many meaningful contributions came from the Muslim and Chinese worlds as well, which were no less involved in the struggle to survive, understand, and harness and recombine the world’s elements toward utile ends. Yet unlike the unified Chinese empire or the great Muslim monarchies, after the fall of Rome the West was blessed with an inter-competitive edge much like that of ancient Mesopotamia, when a city-state had to innovate or be annihilated. After the Black Plague, so few people were left alive and institutions had become so weakened that the stage was set for an era of true rebirth. All the ingredients were there for renewed progress: competition, a demand for new elites and experts, the necessity of welcoming new voices to the table, and higher wages for the living. Now progress began to quicken, and the development of steel weaponry and maritime navigation made possible the discovery and exploitation of the New World. Descartes improved upon Aristotle, and the experimental method was eventually articulated, making it possible for Newton finally to answer Parmenides’ questions about how limits and infinity should be conceptualized.

On a macro scale, the economic history of the West is, until the nineteenth century, largely the story of a loss of precious metals to the East in return for luxury items, a trend first undermined by the discovery of the New World, and then finally put to rest in the nineteenth-century Opium Wars. The eventual emergence of full-fledged European capitalism proved particularly productive for the development of new technologies. In the midst of intense competition, there existed major incentives to produce wares quickly, differentiate them, and deliver them to market more rapidly than competitors, all of which would be facilitated by more efficient productive technologies. In the Roman Empire, despite the intensity of urbanization, categorical barriers to the development of such technologies existed. Max Weber’s model of “merchant capitalism” is particularly revealing, because it suggests that commercial agents had incentives to ensure that local production remained rudimentary so that there would continue to exist increasing demand for foreign products unable to be manufactured closer to home; this state of affairs was undermined in the capitalist age, when the political fragmentation of Europe rendered the geographical scope of merchants’ activities much smaller. On the eve of the Industrial Revolution, England had twice as many people as Rome, huge international markets, knowledge of advanced science, and an environment particularly conducive to the free exchange of capital. Thus, the probability of an Industrial Revolution was much greater than in Roman antiquity. The forces working against Roman industrialization would ultimately render the “critical point” of its equilibrium on the edge of chaos increasingly precarious. In a sense, then, economic stagnation represents the heart of Roman decadence.

We are now in the midst of an era of great scientific development. In terms of the punctuated equilibrium of progress, we have all of the ingredients suggesting that we are neither in decline nor at an equilibrium, but in the midst of a rise—an era like the golden age of Athens, or Augustan Rome, or the Renaissance.

  1. We are transitioning into a new age of metal—the Silicon Age. The ability to process information and enhance the human body with computers will increase the potential for more and more people in society to enjoy sources of utility. This will inherently lead to more and more voices joining discourse, and more meaningful contributions over time.
  2. For the first time in history, women and non-elite males are being welcomed by academic, political, and economic institutions. This will inherently lead to better discourse and more progress over time for all of the reasons brought up throughout this paper: more geniuses will now contribute.
  3. There exist many new inventions every year, which is indicative of a high degree of technical innovation and experimentation.
  4. Wars are not being fought between dying superpowers. The era from the Boxer Rebellion to the fall of the Berlin Wall was one of crisis in which nuclear weapons might have annihilated material progress and shown its dark side, temporarily halting progress (but perhaps, like the Black Death, enabling the creation of progress in the future as the survivors experimented with new technologies to live on in the wreckage of the earth). At the moment, the probability of major metropolises being destroyed by nuclear weapons is much lower than it was at the time of the Cuban Missile Crisis.

I define Futurism as the belief that close alignment should be forged between political, economic, and academic institutions to harness the most progress possible in as short a time as possible to be enjoyed by as many people as possible, particularly in the form of advancements in medicine and the development of cyborg technology, cloning, and genetic engineering. In the face of the threat of the “singularity” and of a destabilization of the superpowers imperiling the world through nuclear war, Futurism is the only hope for harnessing the exponential power of progress for good rather than allowing it to turn toward self-destruction and the retardation of progress.

Concluding Thoughts: Simulations and Falsifiable Hypotheses About Ambiguous Questions of Causation


A major advantage of the theoretical model proposed in this paper is that it lends itself to the creation of “simulations” to explore open-ended hypotheses about causation, which is always a matter of a storm of different probabilistic influences, some more direct and major than others (in other words, certain forces raise the probability that an event will take place more directly than others). Assume that the unfolding of Roman political history from the Principate to the barbarian successor states represents the evolution of a complex system sensitive to initial conditions and the Butterfly Effect; it was one in which individuals engaged in a long-term zero-sum game for power expressed in the form of a limited number of political and cultural offices and institutions, with conflicts represented by battles such as those mentioned in the (imperfect) historical record.

We will consider two hypotheses. The first is whether gay sex caused the Roman Empire to fall; the second is whether Christianity was the culprit. First we must consider how to model the questions at hand by constructing crude and imperfect simulations of history drawn from quantitative data when possible; next we need to justify what empirical results (what relationship between quantifiable variables) we would expect when examining the outcome of the simulation if a given hypothesis were true; then to say what we would expect if it were false; next what we ourselves hypothesize; and finally, how quantitative data drawn from the relationship between variables in the simulation sheds light on our assumptions, or defies them.

In the case of the first hypothesis, compose a list of years, listing battles per year. Also, search a database of literature (including legal literature) for mentions of gay sex. If it were probabilistically true that homosexuality largely precipitated the fall of Rome, the least I would expect is that the decades which saw the most battles would be associated with the most surviving mentions of individuals described as engaging in gay sex, and also the most surviving laws permitting institutions like, for example, gay marriage, relative to times of internal stability (measured by a lower frequency of battles per year). Yet if it were probabilistically unlikely that non-normative expressions of sexuality played a decisive role in corrosive social change, I would expect little alignment or even reverse alignment—individuals described in the historical record as having gay sex would be distributed evenly across the years, or their numbers might even decline as the empire entered into its most violent phases.
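The tally proposed here can be sketched in a few lines of code. Everything below is invented placeholder data (the per-decade battle counts and mention counts are not drawn from any real database), and a simple Pearson correlation stands in for whatever more refined statistical test an actual study would use:

```python
# Illustrative sketch: correlate recorded battles per decade with surviving
# textual mentions of a cultural practice per decade. All figures below are
# hypothetical; a real study would draw on battle lists and text databases.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# decade -> number of recorded battles (invented)
battles  = {-50: 9, 0: 3, 50: 2, 100: 2, 200: 6, 300: 8, 400: 12}
# decade -> surviving mentions of the practice (invented)
mentions = {-50: 7, 0: 10, 50: 12, 100: 14, 200: 5, 300: 3, 400: 1}

decades = sorted(battles)
r = pearson([battles[d] for d in decades], [mentions[d] for d in decades])
print(f"correlation between battles and mentions: {r:+.2f}")
```

A strongly positive coefficient would be consistent with the first hypothesis; a flat or negative one (as in this invented data, where mentions peak in the quiet decades) would undercut it.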

Of course, neither correlation necessarily guarantees causation—for example, perhaps as the empire declined, more religious hysteria arose leading more people to be falsely accused and demonized for homosexuality, generating an artificial rise in the historical record of how many times it is mentioned in surviving literature but saying nothing about its actual social prevalence or why society was collapsing. However, the specific information that the number of mentions of homosexual behavior declined in the final period of the greatest violence would be very problematic for the first hypothesis, because it would suggest not only that most instances of homosexual behavior come statistically from the late Republic and early Empire when there were the fewest battles and the civilization was strongest, but that the era of the final collapse was actually one of cultural repression toward gay sex, since one would expect that with all else being equal, the number of mentions should be equally distributed across the centuries, with highs and lows in the historical record reflecting various degrees of either cultural permissiveness or paranoia. (I actually hypothesize that the highest number of mentions of gay sex would come from the High Roman Empire, when the civilization was flourishing. Then, after an artificial rise associated with the rise of the hegemony of Christianity and discourse hysterically demonizing gay sex, laws banning it would lower the numbers in the final centuries of the Western Roman Empire, thus vitiating evidence for the first hypothesis.)

The second hypothesis, made famous by Gibbon, is even more challenging to model. Like the first simulation, we might compose a list of years, examine the number of battles mentioned as occurring per decade, and see if the most mentions of Christianity correlate with the years containing the highest numbers of battles. However, just as last time, there would be little revelatory information even if the number of battles correlated strongly with the most mentions of Christianity—after all, perhaps the civilization became Christian coincidentally while it was collapsing or as a response to the horror of the collapse, and this led to a rise in the number of mentions, saying nothing in either case about causation. However, just as with the first hypothesis, the specific information that mentions of Christianity declined during the time of the most intense violence might prove problematic for the theory, though it could also be a function of other forces as well, such as the fact that, with so many people perishing, little literature was produced during the final death throes of the culture. (I actually hypothesize that the data this time round would speciously vindicate Gibbon, with the most mentions of Christianity found during times of the most violence at the end of the Western Empire.)

In order to model the question more closely, we would need recourse to a wider comparison. Even if Christianity, which was unique to the Roman Empire and its environs, caused Rome to fall, we would expect it to have no effect on the history of another similar, directly contemporary Iron Age empire such as Han China. Hence, if the hypothesis were true that it was Christianity that had the largest probabilistic influence on the collapse of Roman civilization of all possible factors, we would expect it to have more of an effect on the outbreak of battles and their locations than, for example, Pan-Eurasian forces that might have affected both empires, such as the onset of plague or the migration of barbarian tribes or the widespread adoption of a new technology. If the hypothesis were false and Christianity’s rise had less to do with the fall of Rome than Pan-Eurasian factors, we would expect those forces to have more of an effect on the outbreak of battles. But how can all of this be modeled?

Imagine we were looking at a map of the Roman Empire and Han China, divided into many quadrants.

These are the elements that would be tracked:

1) the locations of iron deposits and other natural resources that can be pinned down with a fair degree of accuracy, including the locations of major mines (these are, of course, static)

2) The locations of recorded battles (these move about, and are thus dynamic)

3) The location of metropolises, major roads, and other geographical features (Mediterranean sea and the Rhine-Danube frontiers; major Christian centers, etc.)

4) The borders of the empire

I tentatively hypothesize that times of plague, rebellion, and civil war should show statistically significant changes in the relationships between the static and dynamic data sets, as such periods would lend themselves to efforts to seize control of local mineral deposits and resource-distribution centers. By contrast, in times of relative internal stability, the Rhine-Danube frontier and the walled frontiers of China would be more likely to attract dynamic movement in response to external pressure along the borders. Permanent changes in spatial relationships would suggest watershed moments in Roman history. (Imagine, for example, if after a certain date battles suddenly never take place within a 50-kilometer radius of an area that once suffered from yearly violence.) The upshot of all this is that, using the right mathematical tools, the relationship between these variables can be systematically evaluated, and we can investigate which causal forces (internal or external) seem to have been primarily responsible for violence at different points in time.
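The spatial tally described above can be sketched simply: count, decade by decade, how many recorded battles fall within a fixed radius of a set of static sites. The coordinates and events below are invented placeholders, and a flat planar distance stands in for proper great-circle calculations:

```python
# Minimal sketch: count recorded battles within a fixed radius of static
# sites (mines, cities, frontier points), grouped by decade. All of the
# coordinates below are hypothetical, laid out on a local km grid.

from math import hypot

RADIUS_KM = 50.0

def battles_near(sites, battle_events, radius=RADIUS_KM):
    """Return {decade: count of battles within `radius` km of any site}."""
    counts = {}
    for decade, (x, y) in battle_events:
        if any(hypot(x - sx, y - sy) <= radius for sx, sy in sites):
            counts[decade] = counts.get(decade, 0) + 1
    return counts

# Hypothetical static sites (e.g., mines) and dated battle locations.
mines = [(100.0, 200.0), (400.0, 120.0)]
battles = [
    (250, (110.0, 190.0)),   # near the first mine
    (250, (900.0, 900.0)),   # far from everything
    (350, (395.0, 140.0)),   # near the second mine
]

print(battles_near(mines, battles))
```

Running the same tally against several different site lists (mines, Christian centers, frontier points) and comparing the resulting decade-by-decade curves is exactly the kind of systematic evaluation proposed here.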

Consider the question of Christianity’s influence on the fall of Rome. If it were true that Christianity was a major formative factor, we might expect major Christian centers to attract battles—this might be, for example, the result of sectarian violence between rival heresies, or barbarians sacking passive religious populations. We might hypothesize that the number of battles within a 50-kilometer radius of major Christian centers would rise over time as the empire collapsed, and we might even expect such centers to attract more battles relative to pagan cities untouched by Christianity or the 50-kilometer corridor along the Rhine and Danube frontiers. By contrast, if it were not the case that Christianity were a major factor, we might see no such increase over time as we studied the decade-by-decade data. We might guess that the number of battles named in the historical record would remain highest within a 50-kilometer radius of the length of the Rhine and Danube, since the primary focus was on keeping out barbarians. (I actually hypothesize that the data this time round would again champion Gibbon, with the most battles found around cities, which were—albeit coincidentally—also Christian centers, since Christianity was primarily an urban phenomenon.)

In order to disprove Gibbon, we might propose a new question—whether Christian centers or, for example, mineral deposits were greater probabilistic attractors of violence. If the urge to control mines was the primary determiner of where conflicts arose, we would expect the number of battles in the vicinity of mines (within a 50-kilometer radius) to rise during decades of turbulence, and we would expect the battles around Christian sites to either decline in number or show no statistically significant rise or fall at all. (In this case, I actually hypothesize that there would be no relationship between the locations of mines and battles at all; the number of battles in such locations would not rise over time relative to other indicators like whether an event is within 50 kilometers of a Christian center or 50 kilometers of the Rhine and Danube, since the late Roman emperors resorted to adulterating their coinage and hiring mercenaries.)

Our last resort might be to add Han China into the mix so that we could begin to see the limits of Gibbon’s view by considering Christianity’s impact versus that of Pan-Eurasian forces, like the outbreak of plague, the spread of new technologies, and the migration of barbarian tribes. Comparing the two empires decade by decade, I would measure the number of battles per decade and whether they were within 50 kilometers of the borders of each empire (in the case of Rome, the Rhine-Danube frontier). During times of internal instability, metropolitan centers and mineral deposits might be expected to attract battles more than the old frontiers, which are disintegrating (presumably because armed groups want access to the goods in the cities and countryside). If Pan-Eurasian forces were the largest probabilistic influence on the fall of Rome, I would expect both empires to show an increase in the number of battles outside the 50-kilometer frontier zones during the same period—the graphs (with internal battles increasingly outnumbering frontier battles over time) would be expected to have the same shape over almost the same time frame. If a cultural force unique to Rome such as Christianity caused the fall, by contrast, I would expect no such relationship to exist between the datasets of the two empires, separated by thousands of kilometers.
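The cross-empire comparison proposed here reduces to two time series, one per empire, each recording the share of battles fought away from the frontier, and a check on whether the two curves move together. The per-decade counts below are invented for illustration, and a Pearson correlation is a minimal stand-in for a proper comparison of the graphs’ shapes:

```python
# Sketch of the cross-empire comparison: for each empire, compute the
# share of battles per decade fought away from the frontier, then test
# whether the two curves rise and fall together. All counts are invented.

def internal_share(frontier, internal):
    """Per-decade fraction of battles fought away from the frontier."""
    return [i / (i + f) if (i + f) else 0.0
            for f, i in zip(frontier, internal)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-decade battle counts (frontier vs. internal).
rome_frontier, rome_internal = [8, 7, 5, 3, 2], [1, 2, 4, 7, 9]
han_frontier,  han_internal  = [9, 6, 5, 2, 1], [2, 3, 5, 8, 9]

r = pearson(internal_share(rome_frontier, rome_internal),
            internal_share(han_frontier, han_internal))
print(f"cross-empire correlation of internal-battle share: {r:+.2f}")
```

A high coefficient, as in this contrived example where both empires drift toward internal violence together, would point toward shared Pan-Eurasian pressures; a near-zero coefficient would leave room for causes unique to one empire.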

Of course, any similarity or difference might be purely coincidental. Nevertheless, finding that both Rome and China were undergoing turbulence at the same time (measured by the number of battles in internal regions rising, to say nothing of the number of battles rising in general) would provide strong evidence for the view that Pan-Eurasian forces had a major formative effect, which itself undercuts the idea that the rise of Christianity was the vitiating factor. (This time, I expect that Gibbon’s argument would be undermined—turbulence in both Rome and China was probably caused at least in part by the same migratory phenomena affecting all Eurasia; in the language of this chapter, it was sparked by the complexity of an artificial border with a high degree of organization on one side and a low degree on the other collapsing into a less chaotic state of stable, simpler homeostasis with cultural similarity and less political sophistication on each side of the barrier. A heap of stones, however aesthetic, is no long-term solution to socio-economic and cultural division between neighbors in any time or place.)


[1] In the eyes of biographers like Plutarch, Mark Antony’s decision to divorce his Roman wife in favor of taking up with his Egyptian mistress and then dividing up Roman territories among the illegitimate children they had together might stand as the epitome of such forces in action. (Of course, from his perspective, he was only restoring traditional Ptolemaic territories to their rightful owners and leaving the Senate to govern Rome rather than imposing his will as a dictator upon it.)

[2] Quoted by James Warren, “All the Philosopher King’s Men,” Harper’s Magazine, Feb. 2000.

[3] See

[4] See

[5] E.g., while one might not be a Marxist, applying a Marxist lens to questions about social change can help to illuminate specific dynamics associated with, for instance, class struggle. This is why so much of the work of people like Freud remains interesting and relevant despite the fact that few psychiatrists today subscribe strictly to his specific model of the human spirit; applying his model, however bizarre it sometimes appears, can help to emphasize and clarify the role of forces like family interaction in early childhood and repressed memories in shaping character. Ideally, scholars should use a variety of thematic lenses to examine a subject from different vantage points; many, however, stick strictly to their favorite set of glasses, stubbornly ignoring the microscopes and binoculars of the world and complaining that such apparatuses blur their vision because they cannot learn to refocus it. The lens of complexity theory accentuates the role of the unexpected, the contingent, and the probabilistic in history.

[6] José Ortega y Gasset, The Revolt of the Masses : Authorised Translation from the Spanish (New York: W. W. Norton & co., 1932).

[7] Discourse becomes impoverished in the absence of diversity for two reasons—first, geniuses who were born anything but elite males are doomed to a life in which they cannot actualize their potential; second, the greater the diversity of voices and lived experiences at the table, the greater and more powerful the synergy that can be created as unique perspectives are applied to age-old problems.

[8] In the language of this paper, during periods of “turbulence,” a situation envisioned by Tainter can readily arise in which individual efforts by the government to micro-manage a devolving state of affairs in the face of rapidly changing environmental conditions and information-overload can simply provoke more devolution.

[9] Shades of Ashley Wilkes in Gone With the Wind.

[10] This is where Foucault’s greatness as a historian is most apparent, because he understood this phenomenon intuitively.

[11] Interestingly, after the Bronze Age stagnation, there was a temporary dip into chaos and misery at the onset of the Iron Age, when barbarous tribes armed with iron ransacked civilization. Eventually, however, a long and productive equilibrium was reached.


Mars, Tomb of Futurism


Tend Your Own Garden

If immortality is the Holy Grail of Futurism then the colonization of Mars is its Holy Sepulchre—a big empty tomb. Both attract their pilgrims: the former is a fairytale; the latter is a real place just out of reach, a sort of tantalizing inspiration to hungry dreamers everywhere salivating for land that doesn’t belong to them. These days, from the promises of Elon Musk to the heroics of Matt Damon, we positively fetishize Mars. Yet my advice to the 11th century crusader and the 21st century Martian colonist would be the same: tend your own garden.

I’m afraid that this is blasphemy from someone who calls himself a Transhumanist. After all, the colonization of space is tangentially connected enough to other themes associated with technological progress that they’re ordinarily all lumped together under the general banner of Futurism. In an increasingly divisive political climate, the promises of SpaceX and Mars One shine like the hope of some long-awaited escape from ourselves.

More fundamentally, the allure of space colonization is at the heart of some of our most beloved cultural narratives, shaping the aspirations of explorers since the first days of NASA and the Soviet Space Program. Even the earliest films lionized astronauts. The moon landing was the greatest collective lived experience of the twentieth century, this perfect human achievement more majestic than the pyramids and just as pointless only to the cynical.

Today, we might not have cities on the moon, but the fruits of space programs enrich our lives immeasurably. And given our recklessness when it comes to the fragile environment of this planet, perhaps we could use another world as a backup, just in case. We already have the technology to achieve the goal of getting to Mars, though for a perfect storm of reasons, it has yet to happen. But isn’t getting there a worthy goal? And won’t the journey there (and not only the physical journey, but the technical refinements forged along the way) benefit the cause of Progress with a capital P? Then what the hell am I complaining about?

SpaceX JCSAT-14 long-exposure launch. Credit: SpaceX

Colonization Problems

My intention here isn’t to trash space exploration or regale you with clickbait about the top eleven reasons why the colonization of Mars would be a tragic mistake at this juncture in time. However, I want to seriously problematize the prospective colonization, if you’ll excuse a word that academics tend to overuse. I don’t want to focus on the hackneyed and frankly shortsighted idea that the money spent on getting to Mars could be better employed for services here on earth.

My critique has to do with the repercussions of contemporary attitudes about the seemingly unrelated topics of imperialism in outer space on the one hand and Transhumanism on the other. Cultural prejudices enshrining heroic astronauts blazing across the sky and mad scientists forging abominations pose serious problems for Transhumanists of all stripes and would-be Martian colonists alike.

If the predominant image of space colonizers enshrined in our zeitgeist is heroic pioneers soaring across the galaxy in the name of science and adventure, the narratives surrounding genetic engineering and cyborgs are positively apocalyptic by comparison—just think of Frankenstein, the Terminator, and GATTACA.

Somehow, an astronaut’s 400 million kilometer journey from Earth to a theoretical outpost in a faraway wasteland seems less terrifying than a head’s four-meter journey from its body to a theoretical apparatus capable of supporting its consciousness. The reasons for this difference in our intuitions are varied. They partly have to do with the genealogy of our ideas about imperialism in outer space, which are grounded in discourse about the benefits of the exploration and exploitation of underdeveloped foreign lands, exotic travelogues, Cold War propaganda, epic films, etc. They also have to do with the attitudes that surround Transhumanism, grounded in skepticism about discredited fields like galvanism, the abuses of the eugenicists, deep-seated fears surrounding physiological dislocation and dismemberment, etc.

Heroes and Monsters

The end result of all this discourse is that, right now in the popular imagination, would-be cyborgs are monsters and would-be Martian colonists are heroes. Let’s take it for granted that the exploration of Mars would provide net benefits for society at large. Nevertheless, whether from the vantage point of someone who wants to investigate Mars and preserve its landscape (let’s call this the environmentalist perspective) or someone who wants to colonize and terraform it (the imperialist perspective, which incidentally seems to completely dominate the environmentalist one), the problem inherent in this tension is immense.

First, imagine you were an environmentalist who felt strongly against the radical transformation of Mars. Your reasons might be varied. To you, the urge to dominate nature with the clutter of terrestrial civilization might seem arrogant and intrusive. True, there are no indigenous Martians to despoil. But the process of terraforming the planet’s surface would still seem to be hugely rapacious.

Imagine drowning its pristine scarlet valleys in water and clouding its translucent atmosphere with chemicals. Wouldn’t even the most single-minded developer preserve some of the planet’s original landscape rather than transform it all? Doesn’t this intuition concede that there is inherent value and beauty in the wild state of the place? If advanced aliens exist within visitable distance of our planet, they are evidently the type to silently observe or ignore us rather than actively intervene in our affairs. How primitive it might seem to them that our conception of space travel in 2017 is still bound to the small-minded earthly impulse to barge in, dominate nature, and claim random parcels of it as our own.

From this perspective, the only visits to Mars should be undertaken for the sake of exploration rather than colonization. The best agents to do so would be robots and cyborgs rather than unenhanced human beings, whose imprint on the environment would be immense by comparison. Yet until the development of cyborgs, we are doomed to either only know Mars indirectly or permanently scar its landscape as successive generations of pioneers perish on its inhospitable surface.

Now, consider the imperialist perspective. To you, between climate change, nuclear war, plague, and pestilence, the existential threats to human civilization are great enough that you feel we need to colonize Mars as soon as possible or face the potential extermination of civilization as we know it. The preservation of the beauty of nature is all well and good, after all, but human interests come first.

Yet the conditions on Mars for the colonizers would be like something out of Dante; indeed, the first Martian immigrants should be “prepared to die,” warns Elon Musk.

As it is, we can’t even control the weather yet here on Earth, let alone create a colony on another planet with an inhospitable atmosphere. The bright-eyed and bushy-tailed original colonists would be like Joseph Conrad’s Mr. Kurtz, fantasizing about the march of civilization but ending up the lonely dupes of capitalism, wallowing in lunacy in a dark place where they shouldn’t have ventured in the first place.

On closer reflection, the imperialist would realize that until it became feasible to travel to Mars on a mass scale, the original colonies could only remain pitiable outposts for misguided dying settlers and insanely rich tourists rather than anything like a safety net for civilization at large. The fastest and most efficient way to transform the landscape would be by the sweat of cyborgs. And yet ironically, with the advent of cyborgs, the need to terraform the environment to suit un-enhanced human needs would perhaps be moot.


Great Respect

While I might have misgivings about the subjugation of a planet ironically named for the god of conquest, I don’t want to disparage a journey there as an admirable Futurist goal. But whether you are an advocate of peaceful exploration or large-scale colonization, the time has come to think realistically about the requisite intermediate steps. We need to make heroes of the pioneers who are willing to risk their lives and careers to overcome the hurdles on the way to our destination “in this dark march toward whatever it is we’re approaching.”

Cyborgs and space explorers are entirely akin in their willingness to risk their lives for the sake of challenging the boundaries of conceivability. Yet in 2017, we call volunteers for the journey to Mars heroes, and there are no volunteers at all for brain implants because no doctor would ever dream of performing such an operation or convening a conference to discuss plans for one.

If a prominent surgeon called for volunteers and warned, as Musk did, that they must be prepared to die, I wonder if the public would meet the declaration with the same resigned sigh in recognition of the heroism of all involved. The principle is precisely the same: a human life is at stake. Yet we are willing to sanctify the sacrifice of the astronaut and glorify him, but would rather reverse-engineer a machine analogous to a human brain than implant a machine into one.

Investment in Mars in the absence of Transhumanism as a vigorous social ideology doesn’t necessarily come at the expense of Transhumanism, but it does come at the expense of the future of Mars. The most widespread current projections of the next century of human development imagine the needs of unenhanced humans predominating as a matter of course. Hence, long-term plans for Mars call for terraforming the planet to create a second Earth. Yet this limitation in our imaginations augurs great brutality and a great deal of human blood spilled along the way as we struggle to dominate conditions not meant for our bodies.

This, of course, does not mean I think there should be no exploration of Mars, or even that I am dead-set against eventual colonization. But I would hope that any such colonization would be undertaken in a spirit of great respect for nature, imposing upon it as little as possible, let alone uprooting it. And I would also pray that the path toward colonization would be blazed with as few deaths as possible along the way.

Yet this can only take place after the ascendancy of Transhumanism and not a moment before it. For the time being, I would no more recommend a journey to Mars than I would a voyage across the Atlantic to an ancient Roman armed with nothing but a leaky trireme and his copy of Ptolemy.

My article was published at

Is A Computerized Brain Far-fetched?


Here’s my Letter to the Editor which was featured in the New York Times last year.

Kenneth D. Miller’s article (against the long-term efficacy of cryogenic freezing) is a cogent reminder of how little we still understand the nature of consciousness. But his assurance that the ability to upload a human mind is unimaginably beyond the potential of our civilization is misplaced.

The brain is a machine that runs on electricity, and consciousness is an emergent aspect of the workings of its physical parts. There’s no reason to think that a three-pound brain is so uniquely mysterious that it could never be truly comprehended, particularly given the likelihood of exponential growth in computing power in the future.

The first steps may not involve trying to model a working brain on a computer, but trying to integrate computers into working brains while still preserving autonomy, memories and sense perception.

When this is done, our understanding of the electrochemical foundations of consciousness will be transformed, and a great deal may become possible. For now, though, even a small chance of being “awakened” after cryogenic freezing is better than no chance at all.


In Defense of Transhumanism


My article appeared last year in the Washington Post.

When I first tried to start a club for the study of transhumanism at Yale, I was astounded by the university’s response. The chaplain intervened and vetoed the request. An email to me explained that there were already enough atheist groups on campus, evidently assuming that the words humanist and atheist were synonyms. I found myself awkwardly assuring a series of administrators that transhumanism had nothing to do with transgender students who didn’t believe in God. Broadly speaking, it involves the use of futuristic medical technology to lower the incidence of disease, enhance the capacity of the imagination and prolong the human lifespan. “We’re into things like cyborgs and genetic engineering,” I said.

It seems to me that while transhumanism resembles its progenitors, it is distinct from each of them, and lessons can be drawn from all of them.

First, there is the ugly specter of the eugenics movement, a disaster associated with decades of pseudoscientific research in an embarrassing array of discredited fields. People who see transhumanism as an extension of eugenics may be concerned that future policies could lead to rising inequality, intolerance for difference and the abuse of power.

In the future, with in vitro fertilization available to the rich, embryos will be screened for genetic profiles probabilistically likely to thrive according to various indicators. As we gain increasingly precise knowledge of the human genome and of the probabilities of healthfulness associated with different genotypes, it will eventually be possible to select children likely not only to be healthy but also to excel. If the public fails to act, this could lead to an unjust scenario in which fitness and intelligence map onto the socioeconomic level of one’s parents. Legal restrictions on the selection of fetuses on the basis of genetic health, however, would be hugely regressive and counterproductive.

Transhumanists should demand the possibility of such prenatal care for all citizens rather than allowing the free market to restrict it to the few. In the long term, the development of increasingly efficient gene editing technology (both in vitro and, some day, in the womb itself) will likely significantly lower the associated costs. Although the horrors of eugenics should serve as a sobering reminder of the evil that can be perpetrated in the name of progress, they should not stifle discussion in the academy about the responsible implementation of genetic engineering in the future.

The second major source of transhumanist thought is science fiction, a genre that tends to favor dystopian narratives because they can be made so colorful from an artistic perspective. Despite all of the 19th-century novels bemoaning the effects of the Industrial Revolution, I suspect that if we could go back in time, we would still choose to industrialize. But perhaps the shape of the revolution would be different — we would hopefully pay attention to the kinds of things the novelists and poets complained about — for example, we might be less abusive toward the environment and more respectful of the rights of workers from the outset.

In our future, daily life will be transformed through the increasing automation of labor and the rise in sophistication of artificial intelligence. Life may be less about the 9-to-5 grind and more about education, community and the creation and enjoyment of art. Rather than imagining a future in which humans and machines are at odds — as many thinkers have predicted — transhumanists look forward to the advent of cyborgs, in which computers are incorporated into the brain itself, leading to radically enhanced processing power and the ability to preserve consciousness for lengths of time now deemed inconceivable. The ultimate lesson from transhumanism’s origins in science fiction is perhaps to seek those inventions that would radically enhance lifespans and empower the human imagination to control what it experiences in ways hitherto unimaginable, liberated from the genetic and circumstantial wheel of fortune.

A third source of transhumanist ideas, and the one of greatest interest to me, is the tradition of humanism. When Cicero used the word “humanus” to symbolize the noblest aspects of our species’ character, he showed that he believed something fundamental separated human beings from all other types of beings — the inculcation of our rational faculties and our ability to apply those faculties over time to the development and preservation of our civilization.

Today, we often hear that truth is a construct and nothing but a reflection of power. Values are relative. But humanism and the idea of progress stand as rejoinders, and transhumanism falls squarely in line with this tradition. How can we best harness the power of progress? Not by seeking to control and exploit people different from us, a transhumanist might say, but by attempting to alleviate suffering and build bridges between imaginations. A willingness to empower more people than ever before to be born healthy, intelligent and able to devote long and meaningful lives to love, leisure and lifelong education is, to me, transhumanism at its best — an antidote to postmodern malaise.

On Simulism: A New Perspective


Arguments in favor of simulism date to the dawn of philosophy, when thinkers like Parmenides of Elea insisted that the world of appearances was an illusion. Though the suggestion that you exist in a simulation may seem incredible, consider the arguments in this paper with an open mind.

Before I present my own thoughts on the subject, I first want to consider Nick Bostrom’s influential contentions in favor of simulism, which can be summarized as follows:

“A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one… If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3). Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.”[1]

While the argument is intriguing and even in line with recent suggestions that the universe seems to be a projected “hologram,” scholars might disagree with Bostrom for several reasons. The first two have been mentioned by others; the third is my own reflection.

  1. Information which we gather from within a simulation might not be an accurate reflection of the limits and possibilities in the world beyond the simulation, just as Mario’s knowledge of what happens when you eat a mushroom in his universe tells him nothing about what eating one in our world would do to him. In other words, even if our universe seems to hold the capacity for the creation of simulations containing conscious beings, why should we assume the same thing about the world beyond our universe which gave rise to us? This argument can be mitigated at least somewhat, however, by suggesting that given knowledge of our own existences and of the nature of our universe and the possibilities within it, we can make meaningful assumptions about what might have given rise to it.
  2. The mathematics in question is based on pure conjecture. Bostrom suggests: “While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 – 10^36 operations as a rough estimate… We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second.”[2] The entire premise of his argument is predicated on this idea, though in reality, we know almost nothing about what it would take to create a simulation of the entire mental history of humankind. We must buy into his math, however, to believe that there are likely vastly more “posthuman computerized consciousnesses” than everyday, mundane consciousnesses derived from an original universe.
  3. Because Bostrom believes there are likelier to be more simulated consciousnesses than actual consciousnesses due to the enormous theoretical processing power of computers the size of planets, he suggests that either we are already in such a simulation, or almost no beings in the universe will ever reach the level of being able to create such simulations. However, from the perspective of our giant universe as a whole, there seems to be evidence (if Earth is not atypical) of a great deal of animal consciousness, and of life (if not consciousness) down to the bacterial level. The upshot of all this is that the chances of being born a living organism somewhere in the enormous universe might well be higher than the chances of being born a conscious being trapped in a consciously designed simulation created by sophisticated posthuman beings.
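
Bostrom’s arithmetic in the second point above is at least easy to check on its own terms. The sketch below (Python, purely illustrative; the figures are Bostrom’s conjectural estimates, not established facts) reproduces his claim that an ancestor-simulation would need only a tiny fraction of a planetary-mass computer’s capacity:

```python
# Bostrom's rough figures: simulating the entire mental history of
# humankind costs ~1e33 to 1e36 operations, while a planetary-mass
# computer performs ~1e42 operations per second.
ops_needed = 1e36          # upper estimate for a full ancestor-simulation
planet_ops_per_sec = 1e42  # throughput of one planetary-mass computer

# Seconds of the computer's full capacity required -- equivalently,
# one millionth of its processing power applied for a single second.
fraction = ops_needed / planet_ops_per_sec
print(fraction)  # on the order of 1e-06
```

Even granting the most pessimistic end of his range, the conclusion follows mechanically from the two assumed magnitudes, which is precisely why everything hinges on whether those magnitudes are credible.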

At first glance, then, Bostrom’s formulation seems neither sound (because he makes assumptions about over-simulations on the basis of a random under-simulation) nor valid (because his justification for the high number of sims relative to base realities is unfounded). Yet despite some disagreement with Bostrom, I am ultimately a simulist, but partly for independent reasons. (Note that Bostrom himself is not a simulist—he says that we are either in a simulation or likely to never create them.)

I shall use the example of Mario throughout the discussion. Our world and Mario’s world aren’t as different as you might first imagine. To begin poetically: is life possible with no consciousness, so that something can be alive but not even realize it? The existence of plants proves that this is so. Is rationality possible without sense perception, so that something can make accurate calculations but possess no conscious will of its own? The existence of robots and computers proves that this is so. Is experience possible without three-dimensional consciousness? The existence of dreams proves that this is so. Is consciousness possible after total oblivion? Our own existences as human beings prove that this is so. After all, we were all effectively dead before we were born. We know that plants can be very much alive in our three-dimensional world with no awareness of this fact—and so we, too, can potentially exist within a world of meaning that is all around us and yet beyond us.

  1. Mario might imagine that he is completely unique in the universe and randomly came into being by a process of pixels spontaneously assembling (that is, he imagines that he is the one and only Mario on the one and only television set on the one and only game device in the entire universe, and all these devices came into being by random chance). Or, he might guess that he is one of a large number of similar beings conveyed on a large number of things called game cartridges that are deliberately designed. The latter is likelier than the former. But why? Consider this scenario. If, in the future, an archeologist discovers a single book from the lost civilization of 2015—and no other books survived—on which would you place a bet? That the book would be a popular one like the Bible or Harry Potter, or that the book would be someone’s single copy of a lost doctoral thesis? The former is likelier. For analogous reasons, Mario is right to grow suspicious of the idea that he is a unique thing rather than a common one. That is, he is right to guess that he is likelier to be merely one version of himself than the one and only version. And if he came into being by some process that worked over time, it is likely that the process would operate more than once and not only in his unique case, since something must itself have given birth to the process and organized it. And the same is true of you—if you won the chance to be yourself, you are likelier to be one of many such winners than the one and only winner, because in any game of chance, the existence of more winners implies more chances to win. Remember, your possession of your conscious will in the form of an individuated consciousness is a separate and distinct fact from your mere existence.
David Vincent Kimel might exist somewhere in time and space because a sperm and egg came together, but this fact is distinct from my actually being the one experiencing his consciousness as a singular rational entity and writing this blog post. The inevitable implication of “I think therefore I am” is that the very existence of the “I” requires a separate explanation from the existential implications of its thoughts. It could be that your consciousness’ possession of your specific body was random. But it would seem more rational to assume that you exist as you for a reason, which would give rise to and raise the probability of your existence. If chaos alone governed the universe, the odds would be highly stacked against life in general, let alone the evolution of your individual consciousness—I’ve read that the chance of even a single protein forming at random would be on the order of 2 × 10^-32. We need to begin imagining some kind of process that could give rise to the experience of individuated conscious wills.
  2. Now, think about Mario again. Even if he realizes he is more likely to be one of many Marios than the only version of himself, this would still not explain why he acts as he does—for example, why he leaps over a pit rather than into it. Of course, the answer to why he jumps over the pit is that he is simulated to do it; someone is playing him. Simulism, or the seemingly ridiculous idea that Mario is basically a video-game character, doesn’t just explain why something called a “Mario” exists and why there are likely very many versions of him, but also shows why Mario is this particular Mario; why, for example, he grabs the coins that he does. The same is true of you. It could be the case that we are all born in a random and infinitely complex universe governed by no designer. But even if a sperm and egg came together to make you, this does not explain why you are experiencing your consciousness and not somebody else’s, or infinite other ones. Only simulism gives the answer. The only alternative, and an implausible one, is that everything else in the world is caused, with the sole exception of your possession of your own conscious will. The only answer to why you are yourself is either “this is the only thing in the universe that has no reason” or “I’m probably one of many versions of my consciousness, and I am being simulated to act in one way and not another, in the same way that Mario decides whether to grab a coin or not.”
  3. The question arises: even if we realize that our possession of our individuated consciousness is a fact distinct from the facticity of my being (that is, the fact that David Vincent Kimel exists somewhere in the universe is distinct from “my” (the author’s) actually being the individuated consciousness writing this document), and even if we concede that it is likelier than not that this fact exists for a specific regulated reason which increased the odds of my coming into existence as a member of a non-unique class rather than as the unique result of random chance (for the same reason that Mario should suspect he is a non-unique thing, and that a random book from the world of 2015 is likelier to be the Bible than a random thesis), why should we suspect an intelligent designer is behind it all and not merely some natural process we don’t understand (karma, etc.)? Now, imagine this possibility. What if, in the whole history of the vast original universe, at least one original civilization existed that was so advanced that it began deliberately constructing simulations, even of its own past—what Bostrom calls ancestor-simulations? Why would it do so? Perhaps to cure boredom. Perhaps to figure out all the secrets of the past. And perhaps even to download the conscious minds of all the unfortunate individuals who lived in history before simulations allowed people to live out their dreams. Regardless of the reason, imagine it happened even once in the whole history of the universe. What would happen when the simulation of history reached the point at which the simulation itself was created? The answer is that it would create a simulation of itself. Then it, in turn, would create a simulation of itself, and there would be infinite simulated identical realities. Bostrom is on the right track when he says: “It may be possible for simulated civilizations to become posthuman.
They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration.”[3] What he misses is that an ancestor-simulation that truly replicated its own history would create a situation in which the “stacked simulations” were in fact exact replicas of each other. And even if the creation of an ancestor-simulation may seem to face insurmountable odds, it may also be the case that a simulated civilization of some kind became sophisticated enough to tap into its own coding and examine its underpinnings in fine enough grain to recreate the conscious life of the past. The end result would be the same.
  4. If Bostrom’s formulation of the Doomsday Hypothesis is apt, and if we consider the present moment not as “a year of human civilization” but as “a year in the existence of the universe,” then, assuming the universe is finite and that we find ourselves in a random year of its existence, that year is likelier to fall nearer the end of the series than the beginning. This would only be true if the entire universe were in danger of being shut off altogether, which could only be the case if we are in a simulation.

Now, what is likelier? That you are a single random combination of atoms that came together by chance and that you experience your specific life equally randomly, or that you are one of an infinite number of versions of yourself created when a single very unlikely simulation in an original universe (or within a simulation) simulated itself? The upshot is that it is likelier that we exist in a universe designed by a rational will than that we are alone in random space, and that life after death might really be possible, since it should be theoretically possible to upload consciousnesses once the simulation ends.
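
The “more winners means more chances to win” intuition behind this conclusion can be made concrete with a toy self-sampling model. The sketch below (Python; the function name and numbers are hypothetical, chosen only for illustration) samples a random observer from a pool containing one original consciousness and many simulated copies of it:

```python
import random

def simulated_probability(n_copies: int, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate the chance that a randomly sampled observer is a simulated
    copy rather than the single original, given n_copies simulations of
    that one original consciousness. The exact value is n / (n + 1)."""
    rng = random.Random(seed)
    pool = ["original"] + ["simulated"] * n_copies
    hits = sum(rng.choice(pool) == "simulated" for _ in range(trials))
    return hits / trials

# With 999 simulated copies the exact probability is 999/1000,
# and the Monte Carlo estimate lands close to 0.999.
print(simulated_probability(999))
```

The point of the toy model is only this: if copies vastly outnumber originals, then under uniform self-sampling almost every observer is a copy, which is the probabilistic skeleton of the argument above.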

(Note that the argument that a perfect ancestor-simulation might have been created, or that a simulation might have simulated its own coding, mitigates the problem raised in the opening critique of Bostrom: namely, that we cannot make assumptions about the environment beyond our universe on the basis of conditions within our universe. We can indeed make meaningful assumptions about that environment if we posit that it exists as a copy of our own. In that case, however, we should posit that Bostrom’s logic applies only if we are in an ancestor-simulation specifically, and not merely in a simulation of some kind, however many simulations of various kinds might be produced by civilizations within our solar system.)



[1] Nick Bostrom, “Are You Living in a Computer Simulation?”, Philosophical Quarterly 53, no. 211 (2003): 243–255.

[2] Ibid.

[3] Ibid.


On Rights and the Right to Be Genetically Engineered: A Transhumanist Perspective


Imagine a scenario in which a mother could be given medicine to ensure that her child would be born without debilitating congenital illnesses. In such a world, should access to this medicine be considered a human right?

Before you jump to any conclusions, consider the question reformulated in this way:

Imagine a scenario in which the technology existed to genetically engineer embryos to ensure that they would be born without debilitating congenital illnesses. In such a world, should being genetically engineered be considered a human right?

Intuitions may differ with regard to these two questions. I am interested in the distinction between them, and in potential answers to both of them.

How are the questions distinct? One striking difference, of course, is that “medicine” seems a more pleasant turn of phrase than “being genetically engineered.” The former evokes Florence Nightingale, while the latter calls to mind Frankenstein’s monster. The rhetorical distinction is loaded with terrible baggage grounded in the tragic history of the 19th and 20th centuries. Anxiety over the very concept of genetic engineering is at least part of the reason that documents like the Council of Europe’s Convention on Human Rights and Biomedicine have historically prohibited altering the gene pool as a crime against “human dignity,” as if the matter were totally non-contentious.[1] By the same token, the US National Institutes of Health refuses to fund gene-editing research on embryos.[2] This terror at the very prospect of genetic engineering stems at least in part from awareness of the evils historically committed in the name of pseudo-scientific eugenics. The aims of transhumanists, however, are not those of the racist eugenicists: the latter murderously attempted to destroy human difference, while the former at their best seek to level the playing field between individuals in a welcoming, non-judgmental, and racially neutral context, offering new medicines to as many people as possible as non-invasively as possible.

Genetic engineering is a form of medicine presaged by current forms of treatment. Even now, for example, there exist tests empowering couples to choose between embryos before they are implanted into the womb on the basis of how statistically likely they are to develop there.[3] At the same time, fetuses are routinely screened for developmental disorders. Though the associated technologies are in their infant stages (pardon the pun), the ability to select between embryos on the basis of their complete genetic profiles, and even to begin editing those profiles through methods like CRISPR-based gene editing, may eventually become a widespread social norm. This adds a sense of immediacy to the first form of the question and raises a variety of further questions. For example, to what degree should embryonic selection be subsidized by insurance? If national laws forbid parents from selecting between embryos on the basis of certain genetic qualities, would they in fact be justified in doing so? How can we edit an embryo’s DNA when the embryo itself cannot give consent? Is consent a matter of concern for an entity which, according to many intuitions, is little more sophisticated than an individual sperm or egg? If consent is a meaningful concept for an embryo, how can we justify compelling an embryo to be born in the first place? At the same time, the second form of the question differs from the first insofar as it seems to me to imply that, if such a right existed, parents would have an active obligation to engineer their children, a contentious prospect, particularly given the precarious current state of the associated technologies.

It is my conviction that social progress at its best evolves in such a manner that people of all creeds, kinds, and classes are increasingly empowered to harness the transformative power of technology and medicine to enhance their lives and protect themselves from random accidents of fortune. This is why I believe in transhumanism. Its central arguments are bound, for me, to two notions. First, sensitivity to unwanted pain should inform institutional policy, which in turn ought to aim to minimize citizens’ agony and maximize their happiness by expanding their potential to make meaningful contributions to society at large through the expression of their “rights.” Second, the most effective means of doing so is the promotion of education and the development of new medicines and other beneficial technologies at the most efficient rate possible, through synergy among the independent institutions of economics, politics, and academics, a kind of collaboration hitherto confined to times of total warfare, which are non-coincidentally conducive to rapid technical progress. These ideas are investigated both in the first part of the essay, where I explore the characteristics of human rights, and in the conclusion, where I reflect on future policy.

In the middle part of the essay, I suggest that access to effective genetic engineering in the form of screened in vitro embryonic selection should likely be considered a human right even at the present juncture, and certainly when gene-editing becomes a cultural norm. Yet I also conclude that the right to be genetically engineered cannot currently be understood according to the traditional thematics of human rights for a variety of reasons, not least the fact that, given contemporary levels of technology, it is not possible to genetically modify an embryo in the womb itself, and even the most cautious in vitro modification is hugely controversial. Embryos destined to be born might be argued to have the “burgeoning right” at least to be born in possession of a sound mind and all five senses in working order if there exists medicine to ensure this, though the human right of parents to give birth to children completely naturally might trump such burgeoning rights according to the intuitions of different cultures, depending on the degree of invasiveness of the associated technologies. Though a future age may think my intuition quaint or even prejudiced, if being engineered were any kind of right in 2016, it would suggest that the results of natural intercourse, which are always hazardous and random in the status quo, would somehow be declared morally off-limits: a conclusion too advanced for the current century, and too intolerant for any century.

If genetic engineering becomes cheap, effective, and efficient enough, however, I imagine that the vast majority of parents will surely adopt it in short order to maximize their offspring’s chances for health and happiness. Those who choose not to do so will likely be in such a small minority that the creation of a coercive apparatus compelling them to do so would only sully the futurist cause and the interests of freedom and diversity in general. For this reason, whether or not it is a human right to be genetically engineered to enjoy certain baselines of existence (a question depending to a degree on the state of the available technology), the rights of parents to prenatal genetic healthcare should always be considered paramount.

I. On Human Rights and Access to Genetic Engineering for One’s Children


I have seldom encountered a persuasive argument that anything could be considered a “human right” beyond appeals to authority, intuition, or social utility. For example, consider the right to freedom of expression. Why does it exist? One might say that the right is enshrined in documents like the Universal Declaration of Human Rights and has been fundamental to the Western tradition since the days of Greece and Rome. But these would be appeals to authority. Simply asserting that something has long been considered a norm does not necessarily mean that the norm is just or universally applicable. Another might say that the ability to openly communicate feelings and opinions is fundamental to the operation of democratic forms of government, and that a free marketplace of ideas will lead in the long term to the best discourse and the most progress for society at large. But these would all be appeals to social utility. A right should be something fundamental to all individuals qua their humanity, and not necessarily something grounded in what is best for society as a whole—otherwise, for example, things like gross public torture could be justified for the sake of preserving national security through the promotion of deterrence. A final person might say that we are all fundamentally free to express ourselves in a state of nature, and society exists to defend our liberty rather than to infringe upon it. But these would be appeals to our intuitions about what constitutes the state of nature and what the goals of an ideal society should be in relation to that fanciful construct.

If the core arguments of cultural and moral relativism should be taken seriously, it is difficult to imagine how the existence of human rights can be universally defended through appeals to logic alone. Even in the relatively straightforward case of freedom of expression, it is clear that there exist gross differences between cultures with respect to intuitions about what constitutes the limits of acceptability. While most societies would agree that the freedom to yell “fire!” in a crowded theater in hopes of inciting a panic should be curtailed by force of law, the question of whether incendiary hate speech or religious slander should be tolerated is a matter of hotly contested opinion. In a postmodern sense, asserting that something is a right is tantamount to coercively imposing your “truth” onto others, and with every assertion of moral authority comes discursive baggage and problematization associated with the expression of power. If everyone has a “right” to be genetically engineered, for example, what does that mean about the “rights” of parents to decide about what is best for their own developing embryos by deciding not to genetically engineer them? And what might the “right” of an embryo to be genetically engineered imply about the “right” to abort that embryo altogether? Or about the right of a child to sue parents for wrongful birth? Some “rights” are mutually exclusive with each other, and our intuitions about which rights to privilege will surely diverge depending on our respective philosophical, moral, and social perspectives. To assert the existence of a right is ultimately a deeply political action.

Defenders of universal human rights must inevitably grapple with these relativistic thinkers, who insist that “morality” is ultimately something manufactured, constructed to suit the intuitions of the culture which originates that morality, and thus the product of a kind of self-perpetuating cycle where doctrine recapitulates and reinforces custom, and custom recapitulates and reinforces power. It comes as no coincidence that that which is called right or wrong, good or evil, often parallels the values that would best prop up mighty individuals at the head of great social hierarchies. An example of this phenomenon is the appropriation of Christianity by Roman imperial authorities in the fourth century AD, when the government emphasized aspects of the doctrine most amenable to the ends of the increasingly authoritarian state. Injunctions to be obedient to slavemasters were accentuated at the expense of, say, calls for the rich to radically renounce their possessions. In any time or place, the powers that be will pick and choose which religious laws to emphasize and which to let slip to the wayside. Yet beyond asserting that one’s moral practice is in line with God’s law, philosophers, politicians, and prophets have all had great difficulty proposing a set of concrete principles universally compelling to all rational members of every human culture.

Of course, the great achievements of Locke and the eighteenth century revolutionaries who followed him cemented the idea of universal human rights to life, liberty, and property in the popular imagination. But on close examination, where do these rights really come from, and what are their limits? Are they in fact universal in any meaningful sense, or are they constructed to suit the exigencies of specific geopolitical situations?

The difficulties inherent in these kinds of questions inspire many down the road of moral relativism. But I cannot find it in my heart to follow in their footsteps, even if I agree with them that “rights” are ultimately constructed, partly on the basis of appeals to authority, partly on the basis of intuitions about fairness, and partly on the basis of social utility given the current level of technological development. Setting aside appeals to authority, let’s examine intuitions about fairness and the relationship between rights and social utility.

The concept of the veil of ignorance might be employed to suggest why rights might be afforded to others in a just society from a purely rational perspective, on the assumption that without a set social identity, an individual is a kind of “pure subject” whose intuitions are not self-interested. It stands to reason that if we were to design a just society without foreknowledge of our social identity, it would be one which would ensure a level playing field for all members from at least certain vantage points. This idea might be employed to defend, for example, the promotion of certain social welfare programs, since the threat that you might be born indigent suggests that, blinded by the veil of ignorance, you would prefer that such programs exist rather than not. However, from an anonymous rational perspective, one might equally well value being born into a society that taxes its people less and encourages scientific innovation more, under the theory that technological progress ultimately alleviates more burdens than any well-meaning lawmaker, and that the more entrepreneurs are taxed, the less capable they may be of taking risks and investing in new technologies. So the veil of ignorance does not necessarily illuminate how an ideal society should be constructed free of the perils of subjectivity, for no pure subjectivity can exist.

How then can the specter of individualistic intuition be escaped? It seems to me that intuition can be narrow, grounded in one very iconoclastic viewpoint, or broad, found across many cultures. Most broadly of all, from the perspective of all mortals with hopes and dreams who are capable of feeling pain, nature left to its own devices is often more beautiful than it is good. Human societies join together to promote the ends of frail human beings, since the law of the jungle favors only the strong, and there is no justice but the victory of the sharpest jaws. The laws and moral principles crafted by different civilizations are in some sense a reflection of local geography (for example, we might expect to find values associated with hospitality in harsh terrains), the customs of neighboring cultures (for example, ideas about strategies to appease the gods might be borrowed from a nearby civilization), and spontaneous indigenous invention (for example, a unique creed might arise within a particular culture).

Yet it further seems to me that given the randomness of birth and the fundamental equality of all mortals with respect to their frailty, most major world religions, legal systems, and moral philosophies emphasize the imperative of not treating others in ways that you yourself would not want to be treated under the same circumstances. This was Hillel’s silver rule; Jesus expanded upon it by actively urging people to treat others as they themselves would wish to be treated. (But how one would like to be treated is not necessarily how others would like to be treated, and in this ambiguity there often proved to be much room for oppression and consternation as one culture tried to foist its beliefs onto another with threats of gunfire and hellfire.)

As a human capable of feeling pain, my intuition is that given the choice between two paths, the road associated with the least amount of unwanted pain for the smallest number of humans and the greatest amount of happiness for the most people should be chosen if there is no other real difference between the branching roads. Of course, the fantasy of the Aztecs was that their gory sacrifices appeased the gods and upheld the peace and prosperity of the state and cosmos alike; in the case of the ancient Roman games, the message was sent to the plebs that social outcasts would not be tolerated, leading, reasoned the Caesars, to less pain in the long run by deterring others from emulating the victims of the lions. This was a dark spin on traditional utilitarian arguments that emphasize the importance of maximizing benefit. Unfortunately, what constitutes “benefit” or utility is obviously subject to debate.

But perhaps we can all concur as rational and cooperative human beings capable of feeling pain that disutility in the form of unwanted pain should be minimized if at all possible, and particularly when there is no rationally proven difference between two paths except that one contains more unwanted pain than the other for the greater number of people for no greater purpose whatsoever than upholding brute hierarchies of power in a terroristic context. When considering the existence of a social institution associated with the oppression of a victimized minority group which claims to be in pain, ask yourself what supposedly virtuous ends that oppression serves—if the answer is chiefly “supporting the powers that be by instilling obedience to convention through terror,” the institution is likely an oppressive rather than a progressive one, particularly when its victims are random people who broke no law. This perspective begins to make the Aztec pyramids and the Roman Colosseum seem like reprehensible institutions regardless of one’s cultural perspective, though the Aztecs claimed their rites upheld a sacred cosmic balance, and the Romans emphasized the importance of their spectacles as a social leveler. Ultimately, though, the institutions inculcated more vice and dehumanization than virtue and love, and hence led to a path of greater pain and misery than more humane and joyous alternatives. (Indeed, Christianity’s elimination of both institutions stands among its greatest achievements.)

Insofar as all of this is true, what could be more painful, more brutal, or more bound to terror than a path entangled in the tendrils of the circumstantial genetic jungle? A world without genetic engineering is one in which we are all effectively sacrificial victims to the romantic notion of an unchanging, single, sacred Human Nature, and in which we are all gladiators armed for the battle of existence unevenly by an indifferent mob of sperm and eggs. Inaction in the face of our slavery is horrifying, but it persists because we take the horrors of life for granted as necessities that have accompanied civilization from time immemorial, and because dystopian science fiction has made us fear a better future.

The injunction “do unto your neighbor as you would have your neighbor do unto you” becomes less problematized from the perspective of affording each other rights and freedoms than from the vantage point of trying to hoist specific religions upon one another. Behind a veil of ignorance, for all of the difficulty of finding common ground beyond individual subjectivity, one thing that most people would likely agree upon is this: regardless of who they are destined to become, since they are all compelled to live in an unjust world, they should at least be afforded fundamental freedoms in line with the technological progress of their age insofar as those freedoms do not infringe upon the freedoms of others, in addition to certain safeguards against unwanted pain. The principle is that they should be compensated for being forced to exist in the first place, and that they will be happiest the more readily they are empowered to grapple with the vagaries of fortune and the more protected they are from the brutality of illness. If this is the case, most would seem to prefer a society in which genetic engineering were at least a possibility, since anyone who is going to be compelled to be born would likely hope for a healthy genetic profile. A world in which such engineering were forbidden outright would lead to more random misery, and hence a path of greater pain for more people. And for what? To uphold an unjust status quo in which random minorities monopolize healthy genetic profiles at the expense of the majority, who are obedient to the supposed necessity of being slaves to the genotypes bestowed by nature because it is “dignified” to be the dupe of chance.

Transhumanism accentuates the inherent benefits of new medicine ensuring a baseline of existence free of gross genetic illness. In the future, perhaps an enhanced human imagination itself will lead to less human suffering in the long term, since more medicines produced by more geniuses will mean that we will be increasingly liberated from the genetic and circumstantial wheel of fortune, our leaders channeling their energies into technological research bound to life-affirming constructive technologies rather than those leading to existential destruction. The more inclusive of potentially meaningful contributions the academy, government, and market become, the more rapidly all of this will be achieved. In the future, robotics and computing will transform the landscape of what it even means to be human. We must embrace the idea that it implies more than the brute fact of our animalistic existence. The true “human” is the “human imagination” in an age in which our species’ spirit is empowered to transcend and transform its very form. From the bounty of technological progress and automation can come human liberation and transcendence if the transition into the new epoch is handled with mercy and a sense of equity. The more happy, healthy, and intelligent humans are born, the more meaningful contributions to the arts and sciences can be made, and the less disutility there will be for everyone on earth.

II. On the Right To Be Engineered

In recognition of all of this, I want to call conscious attention to the fact that because I am a transhumanist, I want to privilege a perspective on this question of genetic engineering that would do the most to further a movement which, as I understand it, calls for increasing access to pioneering medicines for all people. At the same time, however, in addition to considering this “political” dimension to the question, I also want my opinion to be grounded as much as possible in the use of logic that would be meaningful for all rational, compassionate individuals. As I have said, beyond appeals to authority, I think that while “rights” are partially socially constructed, they can still be grounded in fundamental and perhaps even universal human intuitions about fairness. Ultimately, they are a form of compensation. No one asked to be born, but we are all compelled to exist on a planet in which gross inequities exist and physical and emotional pain run rampant and many of our dreams do not come true. So long as we live, we agree to play a game whose rules we did not write, and in the midst of this struggle we feel some sense of empathy and commiseration for other sailors, who are all in the same leaky boat as we are.

Humans are mostly self-interested, but also quite social and cooperative, which can sometimes run against self-interest. I believe that this is the real reason we value an abstract universal “right” to things like freedom of expression. Our intuitions as rational humans capable of feeling pain tell us that it is wrong for the strong to randomly oppress the weak and for the majority to silence all dissent, and our social structures mandate that such a state of affairs cannot long endure in harmony with progress and willful cooperation. With respect to society at large, the community could not function without legal and political institutions to ensure that might alone did not determine justice, or the many who are weak would eventually overthrow the few who are strong. With respect to our rational individuality, we would all like to be treated as dignified autonomous agents whose perspectives are worthy of respect, and so we value the rights of others to the same freedoms that we value for ourselves, providing a model for friendly reciprocity and preemptively defending ourselves from the retribution of agents who would be right to resist dehumanization.

With all this being said, we can consider the case of genetic engineering along analogous lines. People who believe that being genetically engineered is a right might present the following arguments. No one asked to be born, but not only do we compel our children to exist in the first place on earth as it is, we also compel them to be members of society with all of its inequities, and to follow its laws, which are often unjust. In return for the twin sacrifice of existing at all and functioning in a community that might constantly disappoint and pigeonhole them, individuals are repaid by society by being legally assured of certain rights—freedom of speech, the right to hold property, etc., privileges which would in fact be undermined by the brutality of sheer force and randomness in a world without laws and strong community life.

The right to be genetically engineered would be directly in line with these other kinds of rights. By genetically engineering embryos, we would compensate future humans for forcing them into existences that they did not choose. We would ensure that they would grow up to be individuals best equipped to pursue their happiness through the unhindered use of their five senses in bodies that are free from physical pain. Our community’s body of scientific knowledge could liberate them from the brutality of the circumstantial and genetic wheel of fortune with all of its inequities and empower them to begin life on a level playing field. Behind a veil of ignorance, it would seem reasonable that most individuals would rather be born into a world which ensured that they were healthy and in possession of all five senses than one in which the matter was left to random chance. For all of these reasons, the right to be genetically engineered could be understood as something fundamental. Indeed, it might be especially important that genetic engineering be articulated as a right, or the fruits of genetic engineering might only be enjoyed by a select few instead of guaranteed to all members of society by the government. This is particularly true since in the deeper future, in the thematic shadow of increasing automation, higher degrees of intelligence than ever before may be needed to secure employment, and the restriction of such abilities to the wealthy could set the stage for revolution.

Beyond these arguments, we could also bring up reasons related to social utility in the form of fewer people born in need of constant expensive medical attention; the creation of a larger population of hardy and ingenious agents able to make meaningful contributions to the arts and sciences in the long run; and even appeals to authority in the form of the traditional relationship between the development of new technologies and the extension of rights to new groups. (In fact, for those who consider embryos fundamentally human-like in nature, the idea of their right to medicine before birth seems especially compelling, in contradiction to the idea that certain religious perspectives might, on its face, reject genetic engineering. One would rather safely engineer a single embryo if possible than choose between several on the basis of their genetic profiles and abort the rest.)

However, while these are good reasons why parents should have the right to engineer their children, there are nevertheless other reasons to believe that being genetically engineered should not be articulated as a universal human right just yet. We cannot assume that the intuition that it is best to transcend nature is universally valid from all perspectives. Even behind a veil of ignorance, a rational person might choose to be born into a society which radically valued parental rights rather than into a world which might coercively mandate forms of genetic engineering without sensitivity to the long-term health risks in the form of, say, the consequences of meddling with linked genes. If the right to be genetically engineered were taken seriously and enforced by the government, the loss to individual parental rights would be severe. Certain disorders along the autism spectrum and illnesses such as bipolar disorder are often associated with great ingeniousness—the automatic elimination of all genes demonized as pathogenic might result in a less imaginative, diverse community in the long term, to say nothing of leading to a slippery slope where parents will increasingly deliver genetically similar children, more prone to be wiped out by random circumstance in the form of disease.

At the same time, insofar as even the mandatory use of inoculations is controversial in our present age, trust in the transhumanist movement would likely be greatly undermined in a world in which its adherents began clamoring for the rights of all embryos to be engineered given the primitive current state of technology; in fact, there would likely be acute and active resistance to its measures among parents, slowing the movement to bring medicine to more people in the long run. To make matters worse, the idea that being genetically engineered is a right carries prejudiced assumptions about the inherent value of one form of rational conscious life over another, suggesting that those who are engineered are so superior that all beings have an inherent right to be just like them. Anyone with mentally handicapped loved ones knows that in the most important ways, all people are equal. At the same time, implying that the embryo has a right to anything at all might imply value judgments about “personhood” that would touch upon the abortion debate (though one could make an argument that there is a distinction between embryos who will be brought to full term and embryos in general).

Despite all of this, however, I am deeply persuaded by arguments that in a world in which people did not ask to be born, society owes its children the possibility of access to medicine that could help to level the playing field for them and ensure maximum chances for a happy and healthy adult life regardless of the wealth of their parents. In the future, were it possible for embryos to be given cheap medicine ensuring at least a bare minimum of physical well-being, I might be persuaded that those embryos destined to be born have a right to at least certain baselines, and I would certainly engineer my own children this way. But given the current state of technology and my commitment to parental freedom, I think that while it might not be best to articulate being engineered in itself as a human right at the present juncture, access to genetic engineering in the form of prenatal care and optional screening for one’s embryos before implantation should definitely be deemed one.

III. Quo Vadimus?


I am deeply concerned that in the status quo, the rich will in short order have access to technologies that will ensure their children will be born with a lower likelihood of random genetic illness than those born to parents without access to the same kind of wealth who conceived their progeny the old fashioned way; remember, we are on the cusp of being able to hand pick embryos based on their genetic profiles, from which it is but a short step to overt gene editing. It is worrying that in a world which does not discuss genetic engineering using the language of human rights, access to effective genetic engineering in the form of strategies like the selection of embryos based on their genetic profiles will increasingly be left to the whims of the free market, mapping genetic health on top of socio-economic differences. This worrying fact is not a reason to ban such practices altogether, however, but to subsidize them for all people and to fund research to perfect them. Indeed, they could never be effectively banned across all cultures. Societies that forbid them would be fighting a losing battle.

Of course, whether I should be allowed to genetically engineer my children to be born with their five senses is different from whether I should be able to engineer them to have features like blond hair, though by choosing my mate, I am effectively crudely genetically engineering my children anyway. Questions about acceptable limits to genetic engineering (for example, parents who might deliberately choose to engineer their children to be deaf) should not distract us from recognizing that in fundamental ways, the ability to choose what our children will look like and how they will be raised is the most fundamental natural “right” of all from the perspective of the individual, and the right to genetically engineer is only an extension of this prerogative. Until there are great advances in medicine, to insist that all parents should be compelled to engineer their children would be just as unjust and counterproductive as insisting that all parents should be compelled not to engineer their children—as if the potential of a slippery slope should stop us from taking cautious first steps on a great and meaningful journey.

Unfortunately, we are now living in an age when genetic engineering is in its infancy, and risky procedures are only just beginning to be performed on embryos. We will not know the long-term health effects of some of these operations until the children are grown, and for now, genetically engineered embryos are destroyed as a matter of course. Scientists in China have made waves by beginning to “edit” the DNA of embryos despite the misgivings of their peers in the West, leading to calls to ban even pioneering, inexpensive genome-editing techniques.[4] While the risks of the technique are real, I think that such a ban is in fact misguided and even smacks of cultural imperialism (one scientist in the New York Times even wrote of the “moral authority” of the scientific community in the US to determine the course of the research).[5] At the same time, there have been other recent developments which are more promising, with Britain beginning to permit cautious exploration along new frontiers.[6] The future is anyone’s guess, but policies will likely vary by nation. Any sort of transnational moratorium would be hugely unjust.

As we have seen, in the near future, effective genetic engineering in the form of the selection of the fittest embryos suggests that the progeny of those who have sex the old-fashioned way, without access to thousands of dollars’ worth of counseling in fertilization clinics, will be markedly disadvantaged unless they are subsidized for similar treatment. In the long term, one could well imagine a scenario in which women could choose for embryos both to be brought to term outside of their bodies and to be engineered according to a variety of potential prerogatives likely to be determined in part by the democratic process. Such technology would do a great deal for the promotion of transhumanism, to say nothing of the rights of embryos destined to be brought to term, divorcing us from the necessity of aborting the other embryos. Feminism would also be advanced by women’s liberation from the necessity of bearing a baby inside their own bodies. But the development of an artificial womb, or at least of technology to engineer a child within a womb without harming the mother, is a long way off. One wonders whether, had the United States pursued such goals with as much passion as it did the journey to the moon or ever more lethal weaponry, such technologies might already exist. Were there advocacy for the creation of such technologies, and for the alteration of language banning them outright in major world documents on the nature of human rights, true progress would begin to come about.

Ultimately, if societies can justify going to war and killing millions of people and spending millions of dollars in the name of higher causes, societies can also justify empowering a small number of parents to genetically engineer their embryos to maximize their prospects for health and happiness in the long term despite the risks of the attendant procedures, particularly given parents’ right to abort a child in the status quo or give birth to it at random when it did not ask to be born, subject to every cruel genetic shift of fortune. (Years of technological refinement may still be required, however, before the techniques become safe enough for regular use.) Some day, if the genetic profiles of a variety of individuals are examined, a library of genotypes probabilistically likely to be physically and intellectually rigorous could be formed, and we could build a greater generation than the current one without succumbing to intolerance for difference or limiting the fruits of the technology to the wombs of the few. The time will soon come to empower parents to take cautious risks by genetically engineering their progeny to ensure the possibility of a better life for their children and the possibility of a better future for all of us in the form of more meaningful contributions from the gifted. The only alternative is delay, inequity, and eventual social instability as the medicines are unevenly distributed across classes and countries.






[6] Britain’s Human Fertilisation and Embryology Authority recently permitted scientists at the Francis Crick Institute to use the CRISPR-Cas9 editing technique on human embryos. For a report on the contentious decision, see

Bring on the Cyborgs: Redefining the Singularity

Here is the final version of my speech “Bring on the Cyborgs: Redefining the Singularity.” I presented it as a TED talk at Yale. The audition video can be seen in my posts from earlier this year.


Stephen Hawking, Bill Gates, and Elon Musk are afraid. Afraid of our computers turning on us. Afraid that Siri will go from botching directions to taking over and crashing our cars. This is what they call the singularity.

The smartest and most powerful men on Earth are right to be concerned about the future. But in this speech, I’m going to propose a solution to save our species. It involves rethinking the concept of the singularity and reimagining our destiny as human beings. Without exaggeration, this topic might be the single most important one on Earth.

I’m a doctoral student at Yale in Roman history and the founder of Yale Students and Scholars for the Study of Transhumanism. It might seem strange that an ancient historian has an interest in studying the future. But don’t be so surprised.

Ancient historians are interested in the beginning of things like drama, democracy, and the idea of equality before the law. I’m interested in the singularity—and transhumanism—because today we are once again at the beginning of something new. And new beginnings are when we need to pay the most attention to the lessons of the past.

Historians know that technology has not always advanced in a straight line forward. At the Great Library of Alexandria, two thousand years ago, a scientist appropriately named Hero invented the first steam engine. The first computer in history, the Antikythera Mechanism, was developed over a century earlier. Both, however, were toys for the wealthy instead of tools to improve the lives of the masses.

Everyone asks me why Rome fell. I ask a different question. I ask, what could have saved Rome? And then I remember the steam engine and the computer, and I say: technology.

What is the singularity? Technically, it refers to the point inside a black hole where space and time don’t exist as we know them. But the word’s meaning has been expanded over the years.

In the 1950s, the mathematician John von Neumann applied the term to the history of human civilization. The singularity, he thought, was a point in history after which human affairs themselves would become fundamentally unrecognizable. Then, around the time I was born in Israel in 1983, the mathematician Vernor Vinge defined the singularity as the point when artificial intelligence would create a world “far beyond our understanding.”

Von Neumann and Vinge had something in common. They imagined human progress escalating and accelerating as we approached the singularity. Today we have a parallel concept: Moore’s Law. Moore’s Law states that the number of transistors on integrated circuits doubles every two years.

This means that computers are becoming more powerful, exponentially, and data since the 1960s has backed this up. Now, some question whether Moore’s Law will continue to hold true in the future, and I’ll get to that critique in a moment.
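(An illustrative sketch of my own, not part of the talk as delivered: the arithmetic of a clean two-year doubling can be made concrete. The starting figures below, roughly 2,300 transistors on the Intel 4004 in 1971, are used only as a hypothetical baseline.)

```python
# Sketch of Moore's Law arithmetic, assuming an idealized two-year doubling
# period; real fabrication history only approximates this schedule.

def transistors(start_count: int, start_year: int, year: int,
                doubling_years: int = 2) -> int:
    """Projected transistor count after (year - start_year) years of doubling."""
    doublings = (year - start_year) // doubling_years
    return start_count * 2 ** doublings

# Starting from roughly 2,300 transistors in 1971, twenty years means
# ten doublings, i.e. a factor of 1,024:
print(transistors(2300, 1971, 1991))  # 2300 * 1024 = 2355200
```

Under this idealized schedule, each two decades multiplies the count by about a thousand, which is why exponential growth so quickly outruns intuition.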

But if it does hold true, you may understand why so many brilliant people might be scared. One can easily imagine computers becoming so powerful – so fast – that they take control over their own programming and come to overpower us.

Mankind has often feared the conscious wills which it enslaves. As a classicist, I’m reminded of Aristotle’s “natural slaves.” The idea was that those who were able to apprehend rational principles well enough to follow basic orders but who simultaneously possessed no rational strategic faculties of their own were essentially slaves by nature.

Classicists argue about the people that Aristotle might have had in mind—a professor once even told me that he was really talking about the mentally handicapped: people like my brother Dinh. Today, I’d argue, it sounds like we’re talking about Siri. Siri can understand my directions and execute them, but has no strategic ends of her own. Computers like Siri understand us, but they don’t really comprehend us.

But what happens when a computer is sophisticated enough to form independent values? It certainly wouldn’t be Aristotle’s “natural slave” anymore. But here’s why people like Hawking are worried: its values might not be our values; its goals might not be our goals. Over the course of human events, slaves have tended to resent their former masters.

And if the conquest of the New World and the fall of the Qing Dynasty are any indication, where contention exists amid technological and material inequality, the wholesale destruction and capitulation of one side of the struggle tends to follow.

But I have hope for a different future. A future of which we can be proud. A future toward which we can work together. A future in which humans and machines are not enemies at war, but are one. This is where Transhumanism comes into the picture.

Transhumanism means using technology to enhance human capabilities. People already have pacemakers, hearing aids, and artificial limbs. This is just an elaboration. But why is the idea of transhumanism important to the singularity?

I’ll tell you. Transhumanism holds out the possibility that we will heal not only our hearts and our bodies, but also our minds. In the future, it may be possible to replace parts of the brain with computers—curing diseases like my late grandmother’s Alzheimer’s, and radically empowering us to shape our own dreams, metaphorically and literally.

If Moore’s Law continues to apply, we need the enhancements of transhumanism to stay one step ahead of our machines before they become smart enough to take control over their own programming and become more powerful than we can even imagine.

Machines may not share our passion for the preservation of civilization. But enhanced human beings will still have human experiences like that of membership in a community and feelings of pleasure, pain, and love.

If Moore’s Law does not hold true, however, as many computer scientists have argued, the need for transhumanism will be even greater. Our ability to create smaller and smaller microchips will eventually run into intractable barriers at the frontiers of our knowledge of quantum mechanics.

At that point, which could be no more than a decade away, new ideas will be needed. The time will come when we will need better materials than silicon, and the best alternative will be genetically engineered cyborgs.

The advantage seems clear: why reinvent the wheel when the human brain is itself a great technology shaped by millions of years of evolution? Why reverse engineer what a human brain can do when it can be enhanced by robots?

Transhumanist technology can cure diseases, enhance intelligence, allow us to shape our dreams, and empower us to control our destiny as a species. But it must be available to all, and not only a chosen few. Its free choice or rejection must be a human right.

When access to the technologies associated with Transhumanism becomes a human right, our hopes and dreams will be transformed. When the brain is augmented by technology, and we understand the electrochemical foundations of consciousness, barriers to communication and understanding will come crashing down.

We will have the power to decide the content of our nightly dreams—anyone can feel like an NBA All-Star, the world’s most attractive movie star, or literally one of the stars of the Milky Way. Without the need to fight over resources, our ecological crisis will be solved and our Earth will be protected and healed, halting the destructive race to the bottom of industrialization.

As a historian, I can even imagine accessing the lives of our ancestors as experienced through their eyes. Life will be a blank canvas and a paintbrush for all of us. And we will all be equals in a fellowship of artists.

Given all that is true about transhumanism and the singularity, we are all obligated to bring that future closer. Each moment of delay means untold pain, suffering, and death. But each step of progress brings us one day closer to the dream and the promise of Transhumanism.

What must be done to bring that future closer? First, we must deal with the panic about the singularity. Fear of the singularity stems from its old definition. I want to redefine singularity to mean the point in technological progress when our relationship with machines becomes a seamless, shared consciousness.

The singularity will occur when we have the power to jump out of our bodies and into a cloud of pure imagination. The singularity will allow our imaginations to reach the boundaries of the universe.

This speech is a challenge: A challenge to all people who share my hope for humanity’s future. To bring humans and computers together, we as humans must come together and agree upon our shared purposes. As human beings, we are all enslaved to the genetic and circumstantial wheel of fortune. On Earth as it is, where you are born, and who you are born to, matter more than the content of your character. This must change.

To believe in transhumanism we need to believe in human progress again. Since the horrors of the twentieth century, we have retreated from such confidence. But Transhumanism is not tied to any single culture or broken ideology of the past. It is bound to our essential attributes—what makes us human—our imaginations, our feelings, our hopes, our dreams.

As a student of ancient history, I see the traces of transhumanism in the earliest records of human thought. When Cicero used the word humanitas to symbolize the noblest aspects of our species’ character, he showed that he believed something fundamental separated human beings from all other types of beings—the inculcation of our rational faculties and our ability to apply those faculties over time to the development and preservation of our civilization.

The only thing that we should fear is delay. We need more than a transhumanist society. We need transhumanist departments at every university. We need interdisciplinary study—in the humanities and the sciences—in order to probe the nature of our own natures in unprecedented ways. We need the courage and the legitimacy and the vision to undertake the research that must be done.

The most powerful men in the world are afraid of the future. But I am ready to face it. Are you?

On Beauty and Taste: A Refutation of Kant’s Aesthetics


Is there a fundamental relationship between art and beauty, and is there a universal standard of good critical taste in art? After some reflection, I’ve come to the conclusion that the answer to both questions must be yes. But neither Winckelmann nor Fry has satisfactorily supplied the standard to which I’m referring, and while my position speciously seems to evoke Kant’s concepts of “pure taste” and “free beauty,” my model in fact necessitates the very abrogation of these categories. I realize that I am tussling with giants here—the greatest art critics of all time, and one of the greatest philosophers. I also understand that attempts to do what I’m doing here have historically been associated with the application of rigidly judgmental regulations meant to limit what constituted “good art.” These stipulations were more often than not associated with hegemonic discourse that neglected and underrated the profundity of non-traditional arts, to say nothing of masterpieces from diverse cultural traditions. I am interested in accomplishing nothing of the sort here.

Instead, I want to examine the way that autonomous subjects experience beauty, exploring why they seem to derive pleasure from the mere contemplation of proportions, which is not obvious by any means. That done, in a future essay I’ll go on to investigate what the implications of my model suggest about whether or not the ability to recognize beauty is the fundamental feature of effective art criticism, and whether this suggests that there are certain universally applicable standards of good taste in art as I understand it. But the first thing to do is to define “beauty” and “taste” in terms that are useful to the discussion at hand, and this task will form an appropriate preamble to my forthcoming diatribe on the contemporary state of popular criticism.

We’ve heard that “beauty is that which inspires strong positive sentiment of its own accord thanks to an object’s proportions[1] in themselves rather than any appeal to rationality; beauty acts as a sort of unmoved mover.” This definition provides us with an interesting starting point. But perhaps we can be more precise. Beauty is a good in itself because the sensual experience of the proportions associated with beauty necessarily results in strong positive sentiment. But how can this pleasure derived from the sensual experience of proportions in themselves be described? And why should an object’s mere form inspire such stirrings in the first place? What constitutes “beauty” in the simplest terms? Is it some magnificent objective property that miraculously graces certain objects but not others? Or is it an imaginary phenomenon in the mind of a subject that only exists when it is perceived?

As I understand it, a subject experiences beauty as a kind of deeply satisfying imaginary symmetry between their unconscious preexisting idea of the good (shaped partly by biology, partly by past memories, and partly by internalized cultural discourse) and their conscious perception of the immediate object in question; the more closely that the object’s features conform to preconceptions[2] of the good, the more congruous the association between unconscious exemplary expectations and conscious perception becomes, and the more beautiful the object consequently appears to the viewer. It strikes me that if this schematization is useful, beauty can be philosophically understood as a kind of pleasing tripartite association between an object, our conscious perception of it, and our unconscious preconceptions about the standards that make it “good.”

It’s worth dwelling on this model for a moment. It suggests that the pleasure associated with beauty is essentially a sense of deep satisfaction connected to the recognition and contemplation of associations related to the good; in other words, a beautiful object reminds us of our preexisting standards of the good, and we derive pleasure from the recognition of their actualization in nature. In fact, our pleasure subsequently seems to validate and confirm the original standards. After all, the contemplation of their flourishing embodiment in the form of the object under observation has just made us happy as if by magic, and the ability to inspire such spontaneous pleasure is the sole criterion of beauty. The true source of our pleasure, however, is hidden in our unconscious reasons for our personal taste. In our blindness to them, we ascribe the “beauty” to the object itself, and do not realize that it exists only as a relationship in our own minds between the object, our conscious perception of it, and our unconscious associations involving what it reminds us of.

The derivation of continued pleasure in an object’s fulfillment of unconscious aesthetic standards thus becomes a kind of self-fulfilling prophecy—we consider certain features to be beautiful because they meet our preconceptions about what should make us happy, and their very congruity with our preconceptions when actualized in nature is enough to actually make us so. This is why beauty is an inexhaustible and endless source of pleasure in itself. The sense of free play in the mind as we dwell upon the pleasing relationship between the sum and its parts and all of the positive associations that it calls to mind produces a sense of fun and excitement. The object appears uncannily familiar to us because its constituent building blocks and the relationship between them call to mind the fulfillment of what we already yearned to see.[3]

Now, since every subject has access to a divergent store of memories and interprets and reacts to cultural discourse in radically different ways, a truly impartial or disinterested critical appraisal of beauty seems impossible to me, and here I must begin my quarrel with Kant. To make a Kantian judgment of “pure taste” is to be completely indifferent to all preconception, bias, and cultural discourse. Kant believed that such a judgment would be universally valid for all subjects. Yet if what I have said so far is correct, preconception, bias, and cultural discourse are in fact the very determinants of that which we find to be “good” in the first place, and consequently, what we consider to be beautiful. I could uncharitably compare Kantian “pure taste” to the experience of beauty from the perspective of a child, with access to few memories and little understanding of culture or history. The Kantian youth was never taught to be a sensitive judge of beauty. Their opinion is based on whims and first impressions, and they often render that which is gratifying to the sensual desires of the moment synonymous with that which is transcendentally beautiful. Indeed, as an adult, the child’s ultimate judgment of beauty will be partly informed by these infantile whims and first impressions along with a nexus of memories associated with individualized pleasures and pains engrained deep in the unconscious, structuring aesthetic taste. But the adult’s opinions will be moderated by the wisdom born of the old Confucian triad of experience, reflection, and imitation. Unfortunately for the Kantian model of a universalizing judgment of pure taste, however, this wisdom necessarily makes the child less disinterested and impartial, because prejudices and expectations are the attendant consequences of its acquisition.

To be fair, Kant distinguishes between free beauty and contingent beauty; the former can supposedly be understood through “pure taste,” and the latter is dependent upon cultural discourse and preconceptions that stand apart from the mere pleasure that the sensual experience of an object brings us in itself. It seems to me, however, that any subjective claim of beauty is necessarily contingent, insofar as the delight that we experience when we perceive an object is, as I have said before, derived from the recognition of an imaginary symmetry between our perception of an object and individualized standards of goodness grounded in highly personal memories and reactions to biological factors and/or cultural discourse brewing in our unconscious.

To explore whether or not this is the case, let’s look to some examples of objects that might be categorized as Kantian “free beauties” accessible to judgments of pure taste alone, examining how they might problematize or challenge the formula that I’ve just suggested. There are three potential categories of “free beauties” which might be thought to be universally appealing to all subjects capable of taste according to widely held human intuitions about beauty: geometric beauty, certain elegances associated with biological fertility, and the experience of the infinite, or sublime.[4] We will find that even in these special cases, drawn mostly from the world of nature, my descriptive model of beauty and taste will hold water.

I am indebted to Elizabeth Prettejohn’s book Art and Beauty for the idea that “free beauty” accessible to “pure taste” can perhaps most usefully be understood as the loveliness of geometric form in itself. There is a particular kind of beauty associated with simple shapes that might seem to be universally compelling to any subject capable of forming an aesthetic judgment. Human intuition seems to tell us that there is something transcendentally awesome about the grandeur of a perfect snowflake. Indeed, humans find the loveliness of fractals in general to be so intuitive that an aesthetic judgment of a snowflake as a beautiful object is completely uncontroversial in any earthly society. And the human imagination is stretched to the absolute limit by the idea of someone being able to find a pristine snowflake ugly; perhaps a terrible curmudgeon could call it banal, but never ugly. By its very nature, it exhibits a sense of delicate balance between rhetorical categories that are polar opposites to each other, paradoxically reconciling them in a single elegant unbroken shape. Fractals in general can be understood as visual representations of pattern on the brink of chaos. They exhibit impossibly delicate symmetries reinterpreted in infinite creative swirls. The mere contemplation of the complexity and evanescence of something like a snowflake inspires effervescence. We delight in its crystalline delicacy. Its very existence inspires wonder. The object is also completely uncontroversial; an appreciation for the elegance of its shapes threatens no discourse on power, and so discourse in general spares the snowflake from charges of ugliness. Even if the symbol of the Nazis were a snowflake, the shape itself wouldn’t be inherently offensive. It would still be beautiful in itself, though context could of course render it hideous by association.

Yet the matter is not so simple. It immediately strikes me that if an intelligent robot incapable of aesthetic judgment but eager to understand the concept of “free beauty” were to read my last paragraph, they might charge that I did nothing but describe an object like a snowflake in language that was itself “beautiful,” asserting its relationship to “the good” but trying to prove my point only by using elaborate words and analogies to describe the object in question, adding no new information about it. Regardless of how strong a writer I am, I employed sophisticated vocabulary, rhetorical devices like parallel structure and alliteration, and anthropomorphic terminology: verbs like “exhibit,” “reinterpret” and “create” and nouns like “delicacy” and “elegance.” If my readers were convinced by my description that the snowflake is transcendentally beautiful, it was only because they found my prose to be “beautiful.”

What about the snowflake wouldn’t the robots understand that seems so intuitive to humans? I would suggest that the fractal pattern reminds humans of the very structure of their unconscious associations between concepts and memories by way of visual analogy. To use a simpler shape than a fractal as an example, a subject might find an abstract painting of a triangle without a base to be beautiful because the three points of the figure can stand by symbolic analogy for three ideas, with the lack of the base representing the lack of a connection between two of the concepts—perhaps someone might think of her husband and her best friend from grammar school, two people who both loved her, but have no relationship to each other. The shape might also remind the observer of a sharp surface like the tip of a dagger, which might call to mind stories of romance or adventure, depending on one’s mood. You see the point. The geometric object’s beauty comes from the way that we humanize it by investing its qualities with symbolic overtones that are interesting to humans because they speak to our memories and experiences. But if this is true, then only subjects capable of reasoning by symbolic analogy are capable of finding a snowflake to be beautiful. Other animals care nothing for its symbolic overtones. For them, it is at best striking, or visually arresting. However, though the ant finds no snowflake to be beautiful, it does in fact experience beauty in other contexts in the form of the elegances of its mating games. In the same way, the ant lives in a society governed by rules, but would never be able to understand the concept of abstract justice. Humans are different from other animals because we were able to transfer our delight in the byzantine intricacies of our own mating games (the original biological locus of our idea of “beauty,” as we will see below) to a delight in abstract representations of geometrical complexity in general by means of analogy.
The word “elegant” can be used to describe a snowflake, but the terminology tellingly evokes concepts associated with sex and reproduction.

Thus, the contemplation of the snowflake only reveals beauty because the sight of the object leads to a free play in the mind as we personify its features and play about with psychosexual analogies inspired by its constituent parts. Our preexisting standards are nevertheless still structured by biology, our memories, and cultural discourse. Biology provides a tendency to associate the qualities of being intricate and symmetrical with healthfulness in the mating game. Memories associated with the close observation of shimmering, delicate, and harmless objects are likely to be positive or innocuous. And cultural discourse proverbially enshrines the idea that a snowflake is something beautiful; to be insensitive to its intricacies is to declare oneself barbarous and close-minded. But in fact, the pristine snowflake is no more inherently beautiful to all subjects than a filthy hailstone is. Consider that when it comes to mere form, any object can inspire a wealth of potential symbolic analogies, from a crack in the sidewalk to the Mona Lisa. What makes an object “beautiful” is our anchoring of those series of analogies in a sense that the object we are looking at is making us spontaneously happy, drawing us to continue looking at it. Something about it inspires us to linger and imagine. Beauty can be found in some measure in all things by a sensitive viewer, particularly when an object is viewed in close detail. Under a microscope, divorced of contaminating context, all things are beautiful. But only to a viewer capable of reasoning by sophisticated symbolic analogy, and one preprogrammed with standards of the good whose fulfillment results in a feeling of pleasure.[5]

So much for geometric beauty. Now, let’s consider the elegances associated with biological fertility more closely. Human intuition suggests that there is something inherently beautiful about a thriving rose in full bloom. Its vivid scent and colors were shaped by evolution to attract animals to spread its seed. So too the magnificent plumage of a peacock, or the intricate courtship songs of several different species of insects. Could Kantian “free beauty” be associated with an identity as a thriving member of a class exhibiting healthfulness rather than sickliness? To put it another way, that which is “flourishing” can be defined as the most likely of its class to reproduce in beauty. So if we recognize a flourishing state, do we inherently recognize the transcendentally beautiful? The strongest affirmative argument might be presented in the following way: All flourishing members of a class are necessarily beautiful, because the concept of “flourishing” necessarily involves the concept of the “good,” and if something is comprehended to be flourishing on the basis of its proportions alone, then the “good” must be evoked in the subject’s mind automatically, and the object is thus necessarily beautiful according to the Kimelian definition.

But the argument does not hold water. In the first place, we should remember that truth and beauty are distinct categories: they can both be conceptualized as inherently “good” in themselves, but the truth is not necessarily universally beautiful. We must not mistake the comprehension of the truth (such as the recognition of the fact that something is flourishing and exhibiting traits associated with being healthy) for a universalizing aesthetic judgment of beauty. We can take intellectual pleasure in our awareness of the truth, which is a good in itself, without reveling in the proportions of the object that we are scrutinizing. An appreciation of beauty is something deeper than mere understanding—we not only recognize the truth about an object, but associate that truth with pre-programmed ideas about what is good for us individually on the basis of taste. The elegances of a flourishing cockroach might be beautiful to other cockroaches but are not inherently so to human subjects, even if they recognize that the beast is flourishing according to the aesthetic standards of other monsters. Moreover, that which constitutes a flourishing state is very much shaped by context. A white coat might make it difficult for a certain species of rabbit to stand out in the mating game, but when climate change brings about colder winters, their brightly colored rivals will appear no different, but no longer be flourishing.

However, even if the elegances of the mating game do not redeem “free beauty” and “pure taste,” they are still of great importance to my conceptual model of aesthetic judgment. The first and murkiest experience of beauty must have come into existence among animals who preferred sensory displays in their mates that were associated with healthfulness (symmetrical features, a powerful voice, etc.) to sensory displays that were associated with sickliness and weakness (the original form of “ugliness”). Tellingly, we did not evolve in such a way that we automatically associate all sources of pain with the ugly. Fire is inherently harmful, and so is looking at the sun, but neither the sun nor a flame are at all ugly to human perception, though they are both dangerous. At the same time, the most lethal plants can be vividly colored; the wing of the butterfly, one of the great masterpieces of nature, evolved to advertise toxicity. The brilliant colors did not delight other animals; they only startled them. The upshot of all of this is clear. Animals did not evolve to find the dangerous to be ugly, or the striking to be beautiful. We evolved to find the sickly and that which leads to contamination through direct contact to be ugly. And we evolved to find those proportions and characteristics associated with the attributes of flourishing and healthy members of our own kind to be beautiful.

Thus, it seems to me that only the evolution of mating rituals distinguishing between the healthy and the weak provided animals with the possibility of experiencing beauty, though preferences for different kinds of foods might have been an earlier antecedent of aesthetic taste. Before these rituals came into existence and were abstracted by intelligent analogy to other dramatic and elaborate displays in nature, no animal could possibly find the wing of a butterfly to be beautiful, except for another butterfly. At best, it was visually striking. That which is striking often constitutes beauty, but is not necessarily synonymous with it. To the fly, a corpse dies in beauty—the aroma is intoxicating, and the greens and blues and purples of the rotting flesh teem with new life in the form of maggots. But to us, the condition of a corpse essentially delineates ugliness.

There is a final category to consider: the sublime. The lone wolf howls at the moon and feels the wind against its snout as it peers over a valley, hungering for something indescribable. Objects or images called sublime are diverse, but often have this unifying element in common: they involve the juxtaposition of grand opposing categories, such as the very large with the very small. The sublime is the feeling of a man staring out over the ocean on a snowy cliff as he contemplates his microscopic place in a mysterious universe. Perhaps the vista ennobles him because it reminds him that he is a part of something grander than himself. But lovely as the image might be to human intuitions, the idea that the grandeur of such scenes should necessarily be synonymous with Kant’s “free beauty” is not at all compelling. There exists inherent conceptual tension when the finite meets the infinite, it is true. But to put it more bluntly and less romantically, small animals feel a sense of awe and intimidation in the presence of things that are larger than they are. That which we call the sublime seems to me to be nothing more than this same feeling masquerading under a more highfalutin moniker. The vista overlooking the misty ocean is associated with the concepts of the infinite and the large, but not necessarily with the good. The lone wolf might well have looked over the valley and felt a sense of deep repugnance rooted in being trapped alone, without its pack, in a terrifying dark void; dens and closed spaces are more comforting than punishing, dramatic mountaintops.

I imagine that humans only found the visual experience of the sublime to be beautiful around the same time that we first found fire to be beautiful. Rather than cowering from the flame, we looked into its shimmering movements and took delight in them because our ability to reason by analogy had grown to be so powerful that we could read the best things about our own world into the moving inferno—in the flame’s flickering, we could perceive the elegant movements of a writhing, flamboyant dance that we could not join, but could at least control so that we could gaze at it at our leisure. It brought us warmth and light. And when it washed over our food, it made it more delicious. It added beauty to our lives thanks to its proportions in themselves. It became worthwhile to care for it and nurture it at the hearth, like a child. The bestial values of physical attractiveness and the pleasurable satiation of hungers had been transferred onto nature by means of analogy. For the first time, fire, that embodiment of danger, was in fact perceived as a beautiful object to be tamed and nurtured. And human nature would never be the same again. The will to conquer the sublime in the same way that we capture mates and control our children (and for the same psychosexual reasons) became foundational to the progress of the species. Fire was the first pet, the first slave, and the first tool of civilization. The ability to find it beautiful by analogy to the human experience transformed the human experience.


[1] By “proportions,” I mean relationships between wholes and their constituent parts—whether spatial, thematic, etc.

[2] The constellation of these preconceptions brewing in the unconscious constitute individual taste.

[3] It is a fascinating philosophical question, whether the most dramatic alien landscape in outer space can be described as beautiful even if it is never beheld. Regardless of whether the vista would contain features that humans would unanimously find beautiful, it seems to me that beauty only exists when it is perceived, and that all things are potentially beautiful depending on the viewer’s perspective and proximity. The fact remains, though, that a beautiful object provides pleasure principally because it reminds us of what brought us pleasure in the past, or because cultural discourse tells us that certain standards should be held valuable, or because our genes have programmed us to find certain features automatically attractive.

[4] Kant himself does not mention these examples, but they are the closest things that I can imagine to objects that might speciously seem to be universally beautiful to all subjects.

[5] An intelligent robot might find the snowflake interesting because it inspires visual and conceptual analogies. But it only finds it beautiful when it is preprogrammed with aesthetic standards whose fulfillment is synonymous with the “good,” leading it to prefer the object over others. Otherwise any abstract shape could have served just as well to inspire analogies. There is nothing special about the snowflake, except the intuition that delicate ordered existence in the face of the enormous indifference of the universe is something precious and inherently beautiful. This is an important intuition, and the origin of much good in history. But according to this standard, all things that exist are beautiful, and the snowflake is no better. A robot would find a diamond no more inherently beautiful than a pebble.

On the Singularity, Original Preamble


I wrote this speech for a competition at Yale; the winners will get to deliver a TED talk in public later this year, which will also be filmed. The final third remains to be completed, but it’s a good start.


Is civilization as we know it doomed to extinction within the next hundred years?

The question seems simultaneously so hyperbolic and unfathomable that at first glance, it might be impossible to take it completely seriously. It appears to be fodder for street-corner prophets of doom and crackpots on late night television rather than the subject of serious academic inquiry.

But Stephen Hawking, who is without exaggeration one of the smartest men on Earth, believes that it’s a question worth asking. He warns that the human species is on the verge of a singular and irreversible change, and unfortunately for us, there is strong reason to believe that it might be for the worse.

The culprit isn’t global warming, or nuclear war between superpowers, or the evolution of a deadly airborne virus, though these are all admittedly grave threats to the species. Hawking was in fact speaking about the advent of strong artificial intelligence—that is, computers and robots smarter than human beings. Though it sounds like science fiction, the idea is that such robots might come to dominate us in the wake of the so-called singularity. Hawking elaborates upon this idea at length. He says:

“One can imagine…technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking isn’t alone in his concerns. Elon Musk, for one, shares the scientist’s apprehensions. Musk is one of the founders of PayPal, the CEO of Tesla Motors and SpaceX, a multi-billionaire, and a prominent futurist. He said in September of 2014 that artificial intelligence is perhaps “our biggest existential threat.” In fact, he has even said that in pursuing it, we are “summoning the demon.”

If what Hawking and Musk are saying is accurate and machinery is about to become inhabited by independent anthropomorphic wills, we are perhaps talking about nothing less than the most significant paradigm shift in the history of civilization since the advent of the concept of civilization itself. But what exactly is this “singularity” that Hawking and Musk are talking about? Is there actually reason to believe that computers and robots armed with artificial intelligence might try to enslave or destroy humankind? And finally, what should we as a species do about this simultaneously absurd yet horrific prospect? Today, I’m going to explore potential answers to these questions with you. But before I do, I want to tell you a little bit more about myself, and why I became fascinated by these kinds of issues.

I’m a fifth-year doctoral student at Yale and the coach of the debate team there. I’m also the founder and president of the Yale Transhumanist Society, which is a group of people interested in exploring the answers to questions about the future intersection of technology and society. You may or may not agree with my conclusions in this talk; my peers on the YTS are certainly far from unanimous when it comes to the answers to these questions. We have drastically different perspectives because we come from very different walks of life: we are undergraduates and graduates, professional students and artists, engineers and philosophers. But what unites us is our belief that the kinds of issues raised in today’s talk are worth exploring now, before it is too late. According to some of the most authoritative voices on the planet, the future of humanity could literally be at stake.

In my case, my field of expertise is ancient history, which at first glance seems like a dubious distinction for someone claiming insight into the nature of the future. But I’m particularly interested in certain themes that are universal in human history, like the idea of decline and fall. When most people talk about the fall of the Roman Empire, they assert that it was a matter of over-extended frontiers, or barbarian invasions, or in the case of Gibbon, even the coming of Christianity. But I think that José Ortega y Gasset was closer to the mark when he suggested that the ultimate failure of Roman civilization was one of technique. The Romans had no concrete notion of human progress, and their world never industrialized. Hero of Alexandria invented a steam engine in the first century AD, but no one seems to have seriously considered the technology’s potentially transformative effect on transportation and manufacturing. As far as we know, no one even imagined the possibilities. Ultimately, the steam engine was put to use opening and closing temple doors for the creation of a magical effect in pagan ceremonies.

Instead of investing in the creation of new machines, the Romans relied on slave labor. So the civilization remained trapped in a pre-industrial state, and eventually succumbed to internal and external pressures. But the intriguing fact remains that attitudes toward slavery and technology might have saved the Roman Empire when it was still at its height, or at least radically altered its history for the better. It struck me that there was a lesson to be learned here for modernity. And at the same time, it fascinated me that Vegetius, writing at the end of the empire, warned that technological progress was all that could save the Romans from destruction. These days, the precise opposite is implicitly argued: that technological progress itself may destroy us. I wanted to decide for myself whether there was good reason for this shift.

So much for the past. Let’s return our attention to the future. As I said before, we’ll be looking at three issues. What is the singularity, should we be afraid of it, and what should we do about it? Let’s begin with the first question.

Actually, the history of “singularity” as a concept is a bit complicated. The word technically refers to a phenomenon associated with the physics of black holes, where space and time don’t exist as we know them under the influence of an infinite gravitational pull. In the mid-1950s, Stanislaw Ulam, one of the people who worked on the Manhattan Project, applied the term to the history of human civilization itself. He said in a conversation with another mathematician that modernity was characterized by an “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” So, initially, the word spoke to the idea that given the rapid rate of technological progress in the modern world, a seminal event beyond which the subsequent history of humanity would seem almost incomprehensible was on the horizon, and the concepts that define life as we know it would lose meaning. But what would the event be?

In the mid-1960s, the mathematician Irving Good began to elaborate on the rising intelligence and sophistication of computers. He was a colleague of Alan Turing, and shared his interest in the tangled relationship between computer science and consciousness. Good said that if machines could be created with superhuman intelligence, they would theoretically be able to take control of their own programming and improve their own design continuously until they became so sophisticated that humanity would seem insignificant in comparison.

In 1983, the year I was born, a mathematician named Vernor Vinge became the first person to explicitly associate the word singularity with the creation of machines of superhuman intelligence. He said that when strong AI was created, “human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”

In recent years, the widespread applicability of Moore’s Law has added a sense of urgency to the issue and propelled Vinge’s definition to the forefront of discourse on the future of human progress. Moore’s Law states that the number of transistors on integrated circuits doubles approximately every two years. What this means is that the general sophistication of electronics, expressed in things like processing speed and memory, is increasing exponentially. At this rate, it seems almost inevitable that a threshold will some day be crossed and computers will surpass human intelligence, by some estimates within just a few decades from now. (Some question whether Moore’s Law will continue to hold true in the future, but we’ll get to that in a moment.)
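The doubling described by Moore’s Law is simple arithmetic, and a minimal sketch makes the exponential character of the trend concrete. This is my own illustration, not part of the talk; the function name and the example chip are hypothetical, chosen only to show the calculation.

```python
def projected_transistors(base_count: int, base_year: int, target_year: int) -> float:
    """Project a transistor count forward, assuming one doubling every two years."""
    doublings = (target_year - base_year) / 2
    return base_count * 2 ** doublings

# Illustrative only: a hypothetical 1-billion-transistor chip in 2010 would,
# on this trend, grow to about 32 billion transistors by 2020 (five doublings).
print(projected_transistors(1_000_000_000, 2010, 2020))
```

The point of the sketch is that the growth compounds: each two-year step multiplies the total, so a modest-sounding rule produces a thirty-two-fold increase within a single decade.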

This is what the word singularity has come to mean as Hawking and Musk understand it. So much for the first question. Now, on to the second. Should we be afraid of the singularity as we’ve just defined it? As a classicist, when I think about the current state of artificial intelligence, I’m reminded of Aristotle’s description of slavery in the fourth century BC. In contrast to the ideas of some sophists that slavery was merely conventional or an accident of circumstance, Aristotle argued something else—that in some cases, slavery was in fact natural. The philosopher believed that hierarchies emerge spontaneously in nature—humans are superior to animals, for example, and the mind rules the limbs. The idea was that those who were able to apprehend rational principles well enough to follow basic orders but who simultaneously had no rational strategic faculties of their own were essentially slaves by nature. Classicists argue endlessly about exactly what Aristotle meant by this. For example, some say he was referring to the mentally handicapped, and there are those who claim that he was talking about barbarian peoples, who were said to lack the logical impulses of the free Greeks. Today, though, it seems to me that the term “natural slave” could well be applied to computer programs like Siri, which are able to understand instructions well enough to do our bidding, but which have no rational will or ability to engage in individual strategic decision making according to their own spontaneous ends. They understand, but they do not comprehend.

When it comes to the evolution of an independent rational will, though, things become very different. A computer system sophisticated enough to be able to form independent values and create strategies to actualize them is no longer a natural slave at all. It will be a living being, and one deserving of rights at the point that it becomes sophisticated enough to comprehend and demand them. This hypothetical strong AI would have limited time to pursue its interests and meet its goals, and it might not choose to employ its hours slavishly doing our bidding. There’s no reason to be confident that its goals will be our goals. If you’ll pardon another classical allusion, the philosopher Seneca once wrote of human nature that nothing was milder and kinder and more inclined to be helpful and merciful if one were only in a proper state of mind; in fact, Seneca went so far as to say that the very concept of anger was something foreign to human nature. There is, however, nothing to guarantee that a superhuman will would share this same kind of humane impulse, if it even exists in our own species at all. In fact, if the history of human civilization is any barometer, slaves tend to be resentful of their former masters once they have won their freedom. And if the experience of the conquest of the New World or the fall of the Qing Dynasty is any indication, when two sides meet in contention under conditions of technological and material inequality, the wholesale capitulation and destruction of the weaker side tends to follow. The history of the world constantly warns us of the threat of misunderstandings and violent interactions when two cultures meet for the first time, let alone two rational species.

A consciousness able to independently strategize for its own ends and navigate the Internet could be poised to wreak incredible destruction on society, especially in an integrated and wired world with power, water, and heat all controlled electronically, to say nothing of the existence of weapons of mass destruction bound to computerized communication networks. All of this suggests that we should indeed be very afraid of the singularity as it is commonly understood. Yet to retard technological progress or to place restrictions on the development of AI seems premature given the ambiguity of the future threat, and of course, there are those who question whether Moore’s Law will hold true at all in the future. So, this leads me to my third and final question: what are we to do about the existential crisis facing the species?

A Debate Judged By Hume Between Kant, Winckelmann, Fry, and Kimel On Art and Beauty, Part 2


Kant: Oh no you didn’t, Fry. Get ready to be schooled. I have the final word on aesthetics.

You began with a string of ad hominem attacks on Winckelmann, if memory serves. All that I’ll say on this score is that we don’t have to psychoanalyze Michelangelo to appreciate the beauty of the muscular forms of the Sistine Chapel. You shouldn’t attack Winckelmann’s theory so hastily just because you think that you’ve contextualized his reasons for holding it.

With that being said, for all of the arrogance of your speech, it seems like you didn’t actually listen to Winckelmann at all. He acknowledges that art inspires aesthetic ideas. The difference is, he insists that these aesthetic ideas are separate and distinct in nature from the work of art itself, to say nothing of our evaluation of it on a gut level. You mentioned Picasso, didn’t you? Well, there’s good reason to find many paintings by Picasso perfectly hideous. In fact, I dare say that the artist deliberately employs ugliness to inspire aesthetic ideas in his viewers. But this doesn’t change the fact that his paintings are ugly. Your theory of formalism purports to provide a revolutionary new mechanism by which to evaluate modern art. In the end, though, you’re just like Winckelmann. You analyze the piece closely and describe in exhaustive prose the way it makes you feel. The only difference is, you don’t dismiss works that are ugly on their face, because even they can inspire rapturous prose if something about them excites your intellect. Perhaps the uglier the better–in your eyes, your worth as a critic increases the more you can persuasively convince others to be blind to what they seem to see.

But at some point, your theory devolves into absurdity. For consider this–over yonder is the piece of shit that you inadvertently stomped upon when we were on the way here. We can all agree that it’s hardly art. And yet, I can describe it as art according to your theory of formalism in perfectly serious terms. “The pungent odor is meant to represent the horrors of the modern condition. At the same time, the spontaneous way that the coarse, brown material is strewn and smudged left and right symbolizes the diffuse nature of post-modern man.” Clearly, something has gone wrong here. Your theory, purporting to dismiss beauty, instead renders all objects equally valid as art if they can be rhetorically interpreted according to some sort of aesthetic standard. Your philosophy led directly to a world in which museums came to exhibit trash and call it treasure, duping the gullible populace with hype and shock value.

Now, let me enlighten you about the true relationship between art and beauty, and explain why your theory is really an inferior corruption of my original argument. I contend that the greatest critic of art should necessarily be the most disinterested one–a lack of bias should be the universal standard that grounds taste. When we make a judgment of pure taste, taste alone is involved in the process. Rational notions–aesthetic ideas and all–must not come into the matter. After all, the only reasons that our past experience might influence our perception of beauty have nothing to do with our inherent faculties of sense perception. We react to beauty differently, of course, but we all recognize it universally. If I make a judgment of pure taste, all of the secondary ideas extrinsic to the object itself should be set aside. I want to approach the work from as disinterested a vantage point as possible.  By which I mean, if I am to be a pure and unbiased subject, I must set aside all the quirks which individuate me, and approach the object as an impartial viewer from a neutral vantage point. Any judgment of beauty according to this standard is necessarily universalizing–after all, if others approach an object from a truly impartial perspective, as I did, they must reach the same conclusions, because I reached them first, and I was similarly completely disinterested. And so I contend that it is the critic with the least bias who is closest to an understanding of the true and catholic beauty to which all great artists aspire.

Ultimately, what I find beautiful is beautiful for everyone, or else what you call “aesthetic ideas” have come into the picture, and we are no longer dealing with a judgment of pure taste at all. And ultimately, because we cannot help but react first and foremost to beauty or its absence when we view a work of art, the nature of beauty is fundamental to the nature of art itself. Indeed, all other considerations are secondary, and mired in critical bias.

Ultimately, Fry, there’s no salvaging your case. You begin by approaching a work of art from a disinterested perspective, as I did, and then consider it in terms of its geometry alone, on the hunt for “significant form.” And by “significant form,” you really mean “the beauty of the aesthetic ideas that this inspires in my imagination.” Consequently, no matter what you do, you are evaluating the piece according to the presence or absence of beauty. But instead of considering the beauty of the thing itself, you vainly deify the beauty of your own imagination as you react to the piece. Yet this soon devolves into absurdity, since according to this standard, anything can be interpreted according to aesthetic standards, and art loses meaning; its greatness exists only in the mind of the critic describing it. To make matters worse, your judgment is not one of pure taste at all, since it is completely contingent on your secondary impressions. And so, I rest my case.

At this, Kant was silent, and the three men turned to me in anticipation of my speech on the matter.

“Gentlemen,” I said, “what do I know? I’m only an anthropomorphic lawnmower!”