Shaping Postbiological Cultural Evolution


Now that supersmart computers are no longer a distant fantasy, how do we keep our transhuman future from becoming a nightmare?

by James N. Gardner
 

In the opening chapter of The Crooked Timber of Humanity, British intellectual historian Isaiah Berlin famously observed that two factors, above all others, shaped human history in the twentieth century. The first was the flourishing of the natural sciences and technology, which Berlin celebrated as “the greatest success story of our time.” The second factor consisted of “the great ideological storms that have altered the lives of virtually all mankind: the Russian Revolution and its aftermath—totalitarian tyrannies of both right and left and the explosions of nationalism, racism, and, in places, of religious bigotry.”

Both of these great movements began, Berlin reminded us, “with ideas in people’s heads: ideas about what relations between men have been, are, might be and should be.” It was for this reason, Berlin believed, that “we cannot confine our attention to the great impersonal forces, natural and man-made, which act upon us.” Rather, we desperately need to launch a kind of Manhattan Project in cultural anthropology. “The goals and motives that guide human action,” he wrote, “must be looked at in the light of all that we know and understand; their roots and growth, their essence, and above all their validity, must be critically examined with every intellectual resource that we have.”

Postbiological Cultural Evolution

The urgency of such an effort has grown since The Crooked Timber of Humanity was published in 1990, in large part because of the very success of the historical factor Berlin lauded: the exponentially increasing capabilities of science and technology. Many analysts have noted that most of our powerful technologies can be put to evil as well as beneficial uses. Nuclear science, for example, can light a city with electricity or destroy it with an explosion. Genetic engineering can cure dreadful maladies or create unstoppable plagues.

Some thoughtful observers are beginning to focus on an even more portentous possibility: that we may be approaching a kind of cultural tipping point—what futurist Ray Kurzweil calls a looming singularity—after which human history as we currently know it will be superseded by hypervelocity cultural evolution driven by transhuman computer intelligence. If this prospect is realistic, then a key task may be not only to comprehend the ideas that are currently driving historical trends (Berlin’s charge to his fellow intellectual historians) but also to attempt to actually shape them so as to ensure that the better angels of our nature prevail in the strange new transhuman cultural environment that may lie just over history’s frontier.


Samuel Butler: Darwin’s Forgotten Contemporary

Just four years after the publication of Charles Darwin’s The Origin of Species, Samuel Butler offered a prescient insight into the potential of artificial life to supersede the squishy biological processes that constitute the only kind of life with which humanity is familiar. In an 1863 letter entitled “Darwin among the Machines,” Butler set out this startling vision of the future of terrestrial evolution:

What would happen if technology continued to evolve so much more rapidly than the animal and vegetable kingdoms? Would it displace us in the supremacy of earth? Just as the vegetable kingdom was slowly developed from the mineral, and as in like manner the animal supervened upon the vegetable, so now in these last few ages an entirely new kingdom has sprung up, of which we as yet have only seen what will one day be considered the antediluvian prototypes of the race. . . . We are daily giving [machines] greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.

Only now, nearly a century and a half after Butler articulated this disconcerting prophecy, are the implications of his revolutionary insights finally beginning to sink in. With the publication of path-breaking books about the future of computer-based artificial intelligence, such as Ray Kurzweil’s The Singularity Is Near, we are witnessing an intellectual awakening that is unique in the history of mankind. A handful of cutting-edge opinion leaders are starting to focus seriously on the possible economic, cultural, and philosophical consequences of what may turn out to be the most profound evolutionary development since the Cambrian explosion: the emergence of a radically new form of life and intelligence on our planet that stands poised to inherit a future that will be shaped by hypervelocity cultural evolution and self-directed intelligent design.

The daunting challenge that humanity faces—let’s call it the Butler Challenge in honor of Darwin’s forgotten contemporary—is to understand and attempt to shape the powerful, perhaps irresistible, cultural forces that are propelling the biosphere toward a transhuman and postbiological future.


Strategies for Shaping Tomorrow

The California-based Singularity Institute for Artificial Intelligence is one of a handful of think tanks and research centers around the world that have seriously embarked on the study of ways to avoid the emergence of unfriendly artificial intelligence. Outside of this tiny community of dedicated researchers, the topic of prophylaxis against unfriendly AI seems premature at best—why should we worry about the potential appearance of hostile AI when we have not yet succeeded in creating general AI? The short answer from Eliezer Yudkowsky, a leading researcher affiliated with the Singularity Institute, is that if we wait until an AI acquires transhuman intelligence, it will be too late to retrofit that particular AI with human-tolerant sensibilities or instincts.

For Yudkowsky, the key strategy for avoiding an existential catastrophe for humanity is to figure out a way to build an AI that is benignly motivated toward human beings from its inception. No one has the slightest notion of how to program innate human friendliness into an artificial intelligence, but it is certainly an approach worth pursuing.

An alternative approach may be to design a set of cultural attractors that could conceivably steer the cultural environment in which AI will emerge toward human-friendly sensibilities and outcomes. This would be an exercise in a possible future scientific discipline—what I call memetic engineering.

What particular cultural attractor might serve as an appropriate tool for memetic engineers embarking on this daunting endeavor? Perhaps a new cosmology that embraces both human and transhuman artificial intelligence. Indeed, a novel scientific worldview that places life and intelligence at the center of the vast, seemingly impersonal physical processes of the cosmos may conceivably offer the best hope for meeting this challenge.

The essence of this worldview would be the idea that we inhabit a universe custom-made for the purpose of yielding life and ever-ascending intelligence, and that every creature and intelligent entity—great and small, biological and postbiological—plays some indefinable role in an awesome process by which intelligence gains hegemony over inanimate nature. This notion implies that every living thing and postbiological form of intelligence is linked together in a joint endeavor of vast scope and indefinable duration. We soldier on—bacteria, people, extraterrestrials (if they exist), and hyperintelligent computers—pressing forward, against the implacable foe that is entropy, toward a distant future we can only faintly imagine. But it is together, in a spirit of cooperation and kinship, that we journey toward our distant destination.

This vision—the concept of an intelligent universe populated by a cosmic community encompassing both biological and postbiological forms of intelligence—may turn out to be the key tool with which memetic engineers can build the cultural foundation for a benign cosmic future in which human beings no longer play the dominant role.

James Gardner is the author, most recently, of The Intelligent Universe: AI, ET, and the Emerging Mind of the Cosmos. This essay is adapted from a chapter in Cosmos and Culture, a forthcoming book to be published by NASA.



 

This article is from Welcome to EnlightenNext, December 2008–February 2009.