Wednesday, August 24, 2011
Imagine you know everything on Wikipedia, in the Oxford English Dictionary, and the contents of every book in digital form. When someone asks what you did twenty years ago, you recall on demand, with perfect accuracy, every sensation and thought from that moment. Sifting and parsing all of this information is effortless and unconscious. Any fact, instant of time, skill, technique, or data point that you’ve experienced or can access on the internet is in your mind.
Cybernetic brains might make that possible. As computing power and storage continue to plod along their 18-month doubling cycle, there is no reason to believe we won’t at least have cybernetic sub-brains within the coming century. We already offload a tremendous amount of information and communication to our computers and smartphones. Why not make the process more integrated? Of course, what I’m engaging in right now is rampant speculation. But a neuro-computer interface is a possibility. More than that: cyber-brains may be necessary.
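As a back-of-the-envelope check on that doubling cycle, here is a minimal sketch of the arithmetic (the function names are mine, and the 18-month figure is the one the essay quotes, not a law of nature):

```python
# Illustrative arithmetic only: project capacity under an assumed
# 18-month doubling cycle.
def doublings(years: float, cycle_months: float = 18.0) -> float:
    """Number of doublings that fit in the given span of years."""
    return years * 12.0 / cycle_months

def projected_capacity(base: float, years: float) -> float:
    """Capacity after `years`, starting from `base` units."""
    return base * 2.0 ** doublings(years)

# A century of 18-month doublings is about 66.7 doublings --
# a growth factor on the order of 10^20.
print(round(doublings(100), 1))           # 66.7
print(f"{projected_capacity(1, 100):.2e}")
```

Even if the cycle slows, the point stands: sustained exponential growth makes "a neural external hard drive" a question of integration, not of raw capacity.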
The idea of a cyber-brain is pretty simple. Our brains are all-in-one systems that store, process, organize, and collect data. A cybernetic brain would augment one, many, or all parts of that system. The processing and organization part, not to mention analysis and synthesis, would require something resembling artificial intelligence. People would probably be wary of jacking themselves into an A.I. helper brain. So, based on current trends and my rudimentary knowledge of computer progress, my guess is that cybernetic collection, storage, and retrieval of information will be the easiest pieces to integrate into our biological brains: a neural external hard drive. We’ve externalized the storage process for ages – the written word, anyone? But what if we could internalize it again?
That’s what cyber-brains could allow. Ever since we started writing things down, we’ve been trying to make it faster and easier to write, to read what others write, and to remember what we read. A cyber-brain takes the externalization potential of computers (massive amounts of stable and inexpensive data storage with rapid and accurate recall) and removes the lag time. Instead of sitting at your computer or pulling out your phone, opening the file, and taking in the contents, the information is already in your cyber-sub-brain. Anything you store on your cyber-brain, from a song to a novel to the contents of Wikipedia, would be as easily and rapidly accessible as your most vivid memories currently are. Speaking of memories, yours would be stored more accurately and permanently than regular ol’ neurons can allow. Almost any piece of information you might need, whether experienced or downloaded, would be at your mental fingertips.
We face a spectacular information glut. It is impossible for any one person to, say, watch every good movie on Netflix, read every informative entry on Wikipedia, and follow every worthy news story. There just isn’t enough time to absorb and process all that content. But what if I didn’t have to actually watch or play or read the item in question to grok its quality and content? Cyber-brains might allow you, a la Neo and Trinity in The Matrix, to download huge data sets and immediately utilize them. The major advantage is that the time-cost of gathering information becomes nearly zero. Thus, the extra time is freed up for information to be analyzed, synthesized, and, more importantly, utilized.
In the coming years, we may need a form of externalized cybernetic memory to compensate for the overwhelming influx of data. The ability to take digital files and put that content within direct, immediate access of the mind would at least give the average person a fighting chance. The possible benefits are almost unimaginable. Instead of the current information crisis, where the wealth of the world’s knowledge is available at a mouse-click but there is literally not enough time to absorb it all, we would be faced with a world of ultra-informed individuals. What would that world look like?
The optimistic part of me wants to believe all of that data would become knowledge that would lead to happier relationships, more logical decisions (e.g. voting, finances), and a better world would result. The pessimistic part of me fears a world of cynics and nihilists, simultaneously overwhelmed by and indifferent to the wealth of information they possess. The world would continue as it is, just a bit more jaded by what we all know.
The realistic part of me suspects something in between. In a world of cyber-brains, everyone would have nearly the same degree of information. However, information is just information until a mind processes and understands it. Thinking would still take a lot of work, and sometimes letting someone else do the thinking for you is still easier. “Education” would be all practice and application. Granted, your basic intelligence would limit your processing power. Even though an infant with a cyber-brain might “know” calculus, she wouldn’t be able to understand calculus. Epistemology aside, the takeaway point is that a cyber-brain would eliminate the need for lectures, textbooks, and rote memorization. Critical thinking and creative utilization would become the main priorities of education. Perhaps social stratification due to pure intelligence would be more noticeable, or maybe it’ll be willpower and determination that draw the lines.
My hope is that people would at least be more skeptical and the most egregious liars (coughGlennBeckcough) would have much less flexibility in spinning the facts their way. The first step towards understanding is raw data. The more people who have data, the more people will have real knowledge. What they do with that knowledge is still their prerogative. So I suspect the more things change, the more they will stay the same.
Sadly, cyber-brains are still a long, long way away. Until then, I guess we just won’t know.
Link : http://blogs.discovermagazine.com/sciencenotfiction/2011/05/05/know-and-remember-everything-always-and-instantly/
Tuesday, August 23, 2011
Viewed as a Computation
Speech
Moving from gestures to speech gives people a higher bandwidth channel for communicating their thoughts. Society becomes able to perform more complicated computations.
Hunting and fishing
Knowing where to look for game means mentally simulating animal behavior, that is, it means emulating a computation. Using bait means influencing an animal’s computation by applying the proper inputs.
Agriculture
Knowing that seeds compute plants involves insight into the process of wetware computation. Plowing is a form of soil randomization. Irrigation is a way to program the analog flow of water. Crop rotation is an algorithm to optimize yields.
Animal husbandry
Caring for animals requires insight into their computational homeostasis. Selecting optimal individuals for further breeding is genetic engineering on the hoof.
The wheel
Wheeled carts allow long-range glider-like transferal of embodied information, making society’s computation more complex.
Law
A legal code is a program for social interactions. Enforcing the code produces high-level determinism which makes the system easier to manipulate.
Surveying
Surveys allow a society to determine simple address codes for physical locations. Space becomes digital.
Calendars
Noting the solar system’s cycles marks coordinates in time. Time becomes digital.
Sailing
Sailors learn to simulate and tweak the analog computation of airflow effects. Course planning involves higher-level simulation.
Pottery
The clay and the brushed-on glazes are the input, the kiln is the computer, the pot is the output.
Brewing and fermentation
The vat is a biocomputer, sensitive to the input variables of malt, sugar, and yeast. Over time, the best yeast strains are sought out by tasting and comparing; this is hill-climbing in a gustatory fitness landscape.
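The brewer's search can be sketched as literal hill-climbing: keep the current best strain, try a nearby variant, and keep it only if it tastes better. This is a minimal sketch; the "gustatory fitness" function is an invented stand-in, not anything from the essay:

```python
import random

def taste(strain: float) -> float:
    """Hypothetical gustatory fitness: peaks at strain == 3.0."""
    return -(strain - 3.0) ** 2

def hill_climb(start: float, steps: int = 1000, seed: int = 0) -> float:
    """Greedy local search: accept a nearby variant only if it improves."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = best + rng.uniform(-0.1, 0.1)  # try a nearby variant
        if taste(candidate) > taste(best):         # compare by tasting
            best = candidate
    return best

best = hill_climb(start=0.0)
print(round(best, 2))  # converges near the peak at 3.0
```

Like the brewers, the algorithm never sees the whole landscape; it only compares neighbors, which is why it can get stuck on a local peak when the landscape has several.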
Spinning and weaving
The yarn is computed from the fibers. Weaving digitizes a surface into warp/woof coordinates. The loom is the first programmable mechanical computer.
Mining, smelting, and metallurgy
Mining is a form of data retrieval. The blast furnace transforms ore inputs into slag and metal outputs. Metallurgy and chemistry concern the computational rules by which matter combines and transforms.
Writing
Writing translates speech into a format portable across space and time. A written text promotes long-distance information exchange and long-term memory storage.
The alphabet
Using a limited number of symbols digitizes writing. Use of the alphabet also simplifies the algorithm for writing. The democratization of writing allows people to write things they wouldn’t be allowed to say.
Printing
The type letters act as primitive symbols that are assembled into a kind of program--- which prints a page. Printing multiple copies of a text enhances class four communication.
Books
The book amasses large amounts of text into portable form. The book is the precursor of the hard drive.
Universities
A university provides a node where adults can exchange very large amounts of information. Given that the students go out and affect the society as a whole, the university is in some sense a central processing unit for the social hive mind, drawing together and processing society’s thoughts.
Water wheel and windmill
These devices convert chaotic fluid motions into regular periodic form. The excess information is returned to the fluid as turbulence.
Guns
Bullets are high-speed gliders. Shooting someone allows an individual to do a remote erase. Reckless, catastrophic killing enhances interest in long-term information storage.
Machine tools
By creating precise mechanical tools for making machines, we model the biological process of self-reproduction. The machines come alive and begin evolving towards greater complexity.
Clocks
A finer-scale calendar, a zoom into the time dimension. Clocks use class two systems of gears that do the same thing over and over. Clocks are a tabletop model of determinism.
Steam engine
The steam engine is an artificially alive device that eats coal and transforms it into motion. The chaos of fire is converted into the reliable class two oscillation of the pistons.
Railroads
When placed upon wheels, the steam engine becomes an autonomous glider. The country-to-city diffusion rate is changed, which in turn alters the Zhabotinsky scrolls of population movement.
Internal combustion engine
An evolutionary advance above the steam engine, and an early example of compressing the size of computational hardware.
Factory assembly line
The factory represents a computing system that codifies the procedures of a given craft. The possibility of mass production allows us to view physical objects as information, as abstract procedures to be implemented as many times as we please. Three dimensional objects can now be reproduced and disseminated as readily as books. Mass-produced devices become plug-ins for the computations embodied in people’s homes.
Movies
A temporal sequence is modeled by a series of discrete frames. An early form of virtual reality.
Automobiles
The personal vehicle allows individuals to control transportation. A formerly centralized technology is now in the hands of the people. Meetings and markets can be freely arranged, making the economy’s computation more class four.
Electrical generators and motors
Electricity collapses the length of society’s computation cycles. The system clock speeds up. Electrical lights disrupt the cycle of day and night; computation becomes continuous. There is now less of a border between the media and the human nervous system. People begin to view themselves as components plugged into the hive mind.
Telegraph
Writing is transmitted as a digital binary code. Society begins to grow its electrical network.
Telephone
Unlike the telegraph, the telephone is a peer-to-peer medium--- you can make a phone call from your home without having to deal with a telegrapher. People are free to exchange “unimportant” information, that is, to talk about their moods and emotions, thus in fact exchanging a much higher-level kind of information than before.
Plastics
By designing new materials, chemists begin to program brute matter. Deformable and moldable, plastics can take on arbitrarily computed shapes. Objects are now programmable.
Radio
While books could broadcast digitized thoughts, radio broadcasts analog emotion. The hive mind gains power, as listeners form realtime virtual crowds.
Air travel
When riding in a plane, one can look out the window and see a landscape as an undivided whole, gaining a notion of a nation as a unit. With familiarity, people stop looking out the airplane windows, and air travel becomes a hyperlink, a teleportation device. In the United States, the “flyover” states become invisible to the cultural powers, promoting a schism in the hive mind.
Television
Since moving objects are important, our eyes have evolved to stare at flickering things; therefore we find TV hypnotic. Watching TV is work, our minds labor to fill in the missing parts of the virtual reality. Society gains a stronger hive mind than ever before. But at the same time, the hive mind is debased by ever more centralized control.
Atomic physics
The physicists complete the chemists’ work, and even atoms become programmable. We see the most fundamental units of matter as information to be manipulated.
Computers
Billed as the universal machine, the computer is brittle and hard to use. The digitization of essentially everything begins, in most cases degrading and corrupting the information.
Email
Email spreads the workplace into the home. The upside is that you don’t have to commute; the downside is that you can’t leave the office. Email is addictive, and people become ever more plugged in. Yet email provides an alternative to the centralized news network, and many smaller hive minds take form.
The Web
The hive mind expands its consciousness. And at the same time the subhives’ minds gain further definition. The web page does for publication what the automobile did for transport --- the gatekeepers lose importance. The Web becomes the ultimate global information resource, the universal database. Social computation becomes nearly frictionless; people can interact at a distance ever more effortlessly.
Biotechnology
Biologists begin to program life. Society tries to apply legal codes to life, with unpleasant and confusing results. Real biological life continues anyway, still managing to avoid control.
Cell phones
A tight, personal, peer-to-peer medium that approaches telepathy. As people coordinate activities in real time, short-lived spontaneous mini-hive minds emerge.
by Rudy Rucker
Copyright © Rudy Rucker, 2011
[In honor of Marshall McLuhan, this essay is adapted from Rudy Rucker, The Lifebox, the Seashell and the Soul (Basic Books, New York 2005), and published online on July 24, 2011]
What does “being human” mean? Whatever you want it to. But since you’re human, I’ll bet that it includes some or all of life, love, family, beauty, dignity, sexuality, creativity, freedom, learning, and the like. There are also other, less important values: To scarf a boxful of PopTarts, to see our enemies suffer, or to watch soap-opera reruns all day.
All these desires, the elevated and the base, were sculpted by evolution, hundreds of thousands of years ago on the African savanna. And in the millennia since we developed intelligence, we have also been shaping new values on top of this ancestral set. The elevated ones, the “true” values, are those which we want to keep when we reflect on our evolution-driven and societally-built urges and select the best.
Transhumanism is usually associated with technophilia. This is because technology consists of new ways to help people achieve their goals. But achieving some values for some people can infringe on other values. Technology can dehumanize, as Jean-Jacques Rousseau and other nostalgists have pointed out. A factory worker who runs an automated loom 12 hours a day makes clothes that keep people warm and decorated. At the same time, he is deprived of most opportunities to do what our ancestors enjoyed doing, what hunter-gatherers do today, and what we enjoy doing and wish we could do more of: Flirting, playing, exploring, hunting, socializing, and resting, with no pressures other than the needs of the moment.
Even the freely-chosen technological luxuries of today’s wealthy societies can be dehumanizing. They may satisfy our baser needs, but interfere with the more elevated values. Television entertains at will, but also provides passive entertainment in place of interaction and storytelling. Modern medicine, even as it saves lives, gives us anonymous carers and white-walled institutions, replacing empathic nursing by loved ones. Grains keep more people per square kilometer alive than do hunting and gathering, and people enjoy eating sugars, but carbohydrate-based nutrition brings obesity and diabetes–like some dystopian factory-farm human feedlot.
Even imperfect technology can humanize. When the Roman empire brought eight thousand Sarmatian soldiers from today’s Ukraine to Britain in the second century CE, they never saw their families again, nor did they speak to them or write to them. This had always been the fate of those who left home. When Jewish immigrants moved to their ancestral homeland a century ago, most never again spoke with or saw the loved ones they left behind, but they at least had the benefit of postal service. Today, globetrotters, even poor job-seekers, can talk to their family members at will, for free, with video; many can afford to fly home for a visit. True, ink on paper is not as good as a conversation, and video-calls aren’t as warm as a face-to-face chat with a hug, but those weren’t the alternatives that the technologies replaced. The technologies preserved family ties that could not otherwise be preserved, and what is more human than that?
The complaint goes: “Make real friends, not Facebook friends.” But for me at least, online social networks are not pushing aside face-to-face friendships; they are connecting me to old and new friends to whom I would never otherwise maintain my ties. Sometimes a real-world meeting results. This browser-based interaction is helping me be more human.
Cell phones can annoy, but they have also eliminated a specific kind of social misery: the unpleasantness of missing a rendezvous with a friend, checking your watch and walking around the block, wondering if you set the right time and place. We’ve lost some good storylines–Dr. Zhivago would have caught up to Lara right away–but cellphones ensure that meetings with friends happen as they should, and that’s wonderfully human.
Other technologies have nearly eliminated other forms of suffering, at least in the richer parts of the world. Start with the big ones: we rarely have to see our children die of disease; with heating and air conditioning, we rarely have to feel too cold or too hot if we don’t want to. Now some small ones: we never have to search in frustration to hear favorite songs; we never have to continue wondering who acted that bit part in the movie we saw. These forms of unpleasantness have mostly disappeared.
These are the technologies of the past. We’ve gotten used to them. But transhumanists always look forward. The past is behind us, but we can change the future.
Of future technologies, those which modify the human body get the most attention in transhumanist contexts: brain implants, blood-stream nanobots, genetic enhancements. Anti-transhumanists like Leon Kass and Francis Fukuyama condemn transhumanism mostly for the yuck factor of these technologies–a deep-seated disgust with modifying the human body. Indeed, to the extent that the perfection of the human body at its unblemished best is one of those human values which matter to us, an extension of our ancestral desire to preserve health, Kass and Fukuyama are right.
Today’s medical technologies are easy to accept because almost all modify the body to counter pathologies: vaccines, surgery, or cochlear implants. Even Botox counters a pathology, if like transhumanists, you consider aging a pathology–though Botox does no more than remove aging’s shallower signs. But the future may go one step better, bringing us technologies which actually improve the body’s optimum above today’s baseline.
Depending on how we define our values, we might feel that improving our abilities beyond the baseline can still preserve the best of what we consider human, particularly if the visible form is left untouched. Or we might decide that physical features are not the essence of humanity. Considering that people have been dying their hair, tattooing their skin, and piercing their bodies for thousands of years, we might accept even visible enhancements as human and not monstrous.
Still, there is no need to get under our skin to make us more human. Eyeglasses can improve vision almost as well as laser surgery, and our eyes feed Wikipedia’s knowledge into our brains pretty well, if not as well as through direct brain interfaces.
Regardless of whether we assimilate future technologies or just use them externally, they can make us more human, more able to achieve our values than we are today. If poverty were eliminated through well-distributed innovation-driven wealth, then a major cause of dehumanizing indignity would disappear. Today, most people live in awareness and joy for a tiny fraction of their lives. If we could stimulate our brains to clear-minded alertness with smart-drugs or electrical implants, what could be more human? If we could, at will, connect ourselves to loved ones with a telepathic link, we could strengthen our empathy. What greater spiritual fulfillment could we hope for? If leisure time and greatly improved transportation let us visit other planets or the bottom of the sea inexpensively and safely, each one of us could satisfy our desire to explore the universe in a way that we cannot today.
Dangers remain. Future technologies, like those of the past, could dehumanize by satisfying base needs, while preventing achievement of more elevated values. Direct currents to the pleasure centers of the brain could meet the human desire for happiness while removing the incentive to work towards other values. Immersive virtual reality could entertain but distract from other important personal goals. And in the end, the ultimate dehumanizer would be a weapon which satisfies one not-so-nice human value–the desire for revenge–while extinguishing humanity and all human values with it.
Our technologies today have changed so much, yet sometimes it seems that nothing has changed. Achilles’ mourning for his battle-slain friend Patroclus nearly three thousand years ago and Daisy Miller’s flirtations as she tours Europe 150 years ago seem the same, in essence, as our experiences of the twenty-first century. This is because we remain human as we always have. Little has changed in our basic motivations, like love and conquest.
Staying human is good. We should let technologies increase our abilities, but not change our fundamental preferences. More accurately, we can allow changes in some of our less desirable desires, as for ingesting sugars to the point of diabetes, or for fleeing shame through suicide. But we must avoid changing those preferences which, on reflection, we want to keep.
I don’t want a technology to change any of my most important goals. I don’t want anything to take away my love for newness and learning so that I’m satisfied watching TV all day; or to take away my compassion so that I turn into a psychopathic killer; or to take away my sense of beauty so that a mountain-top panorama means nothing to me. I want technologies to help me, and everyone else, better achieve our goals.
Transhumanism is all about being human. Becoming transhuman might mean changing our humanity by removing preferences which on reflection we don’t want, or by adding abilities beyond what we have today. But becoming transhuman means becoming more human–treasuring the human values which matter most to us.
By: Joshua Fox
Published: June 7, 2011
Joshua Fox works at IBM, where he co-founded the Guardium Data Redaction product and now manages its development. He has served as a software architect in various Israeli start-ups and growth companies. On the transhumanism side, he is a long-time supporter of the Singularity Institute for Artificial Intelligence. Links to his talks and articles are available at his website and blog.
Our modern biologically and genetically-defined sub-species, Homo sapiens sapiens, has been around for 100,000 to 200,000 years. There’s some plausibility in Ihde’s suggestion that the modern concept of human formed only in the last 3 or 4 centuries: the Cartesian-Lockean human. The emphasis on the rational capacities of human beings, however, lies further back with Plato and Aristotle (in their two quite differing ways). Aristotle didn’t have the Lockean notion of individual rights, but they weren’t a big stretch from the Great Greek’s view of the individual good as personal flourishing through the development of potential—development that would need a protected space. The Cartesian-Lockean human was crucially followed by the Darwinian and Freudian human, which took human beings out from the center of creation and some distance away from the transparently rational human of the old philosophers. Even so, I heartily agree that reassessing our interpretation of the ‘human’ is timely and important.
The biologists’ conception of what it is to be a member of the human species so far remains useful: Our species is a group of “interbreeding natural populations that are reproductively isolated from other such groups.”1 Although useful, that species-based definition and the related genetically-delimited identification of “human” is becoming increasingly inadequate as our further evolution depends more on the scientific and technological products of our minds. The transhumans or posthumans we may become as individuals (if we live long enough) or as a species may quite possibly share our current DNA, but implants, regenerative medicine, medical nanotechnology, neural-computer interfaces, and other technologies and cultural practices are likely to gradually render our chromosomes almost vestigial components of our individual and species identity.
While I agree with Ihde on the need for (further) discussion of the concepts and significance of human, transhuman, and posthuman, I find many of his comments to be directed at transhumanists who barely exist (if at all). I resonate with the project of understanding potentially obfuscating “idols” such as Bacon described. But Ihde’s discussion of his own four idols seems to be more of a straw man than an accurate critique of contemporary transhumanist views. I find this to be true especially of his Idol of Paradise and Idol of Prediction. The other two idols—of Intelligent Design and the Cyborg—contain relatively little critical commentary, and so I find less in them to object to.
A few years ago, I received a telephone call from researchers from the Oxford English Dictionary who were looking into the possibility of adding “transhumanism” to that authoritative bible of word usage. That addition has just now happened—a little behind the widespread adoption of the term around the world. Although Dante and Huxley used the term earlier, I first (and independently) coined the modern sense of the term around two decades ago in my essay “Transhumanism: Toward a Futurist Philosophy.” My currently preferred definition, shared by other transhumanists, is as follows:
Transhumanism is both a reason-based philosophy and a cultural movement that affirms the possibility and desirability of fundamentally improving the human condition by means of science and technology. Transhumanists seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.
Since I will argue that most of Ihde’s critical comments and Idols succeed in damaging only views that few or no transhumanists actually hold, it makes sense for me to establish my knowledge of those views. Apart from first defining and explaining the philosophical framework of transhumanism, I wrote the Principles of Extropy and co-founded Extropy Institute to explore it and to spur the development of a movement (for want of a better term) based on transhumanism. That movement has grown from numerous sources in addition to my own work and become a global philosophy attracting a remarkable amount of commentary, both pro and con. In some minds (certainly in that of Francis Fukuyama) it has become “the most dangerous idea in the world.”
Ihde’s own four idols of thought refer more to straw positions than to real views held by most contemporary transhumanists. That doesn’t mean that he went astray in choosing Francis Bacon and his four idols from his 1620 work Novum Organum2 as an inspiration. Around the same time that I defined “transhumanism” I also suggested that transhumanists consider dropping the Western traditional but terribly outdated Christian calendar for a new one in which year zero would be the year in which Novum Organum was published (so that we would now be entering 389 PNO, or Post Novum Organum, rather than 2009). Despite Aristotle’s remarkable work on the foundations of logic and his unprecedented study “On the Parts of Animals”, Bacon’s work first set out the essence of the scientific method. That conceptual framework is, of course, utterly central to the goals of transhumanism—as well as the key to seeing where Ihde’s Idols (especially that of Paradise) fail accurately to get to grips with real, existing transhumanist thought.
Bacon’s own four idols still have much to recommend them. His Idols of the Tribe and of the Cave could plausibly be seen as the core of important ideas from today’s cognitive and social psychology. These idols could comfortably encompass the work on biases and heuristics by Kahneman and Tversky and other psychologists and behavioral finance and economics researchers. The Idols of the Cave are deceptive thoughts that arise within the mind of the individual. These deceptive thoughts come in many differing forms. In the case of Don Ihde’s comments on transhumanist thinking, we might define a sub-species of Bacon’s Idol and call it the Idol of Non-Situated Criticism. (A close cousin of The Idol of the Straw Man.)
Many of Ihde’s comments sound quite sensible and reasonable, but to whom do they apply? The only transhumanists Ihde mentions (without actually referencing any specific works of theirs) are Hans Moravec, Marvin Minsky, and Ray Kurzweil. In “The Idol of Prediction,” Ihde says “In the same narratives concerning the human, the posthuman and the transhuman…” but never tells us just which narratives he’s talking about. The lack of referents will leave most readers with a distorted view of true transhumanism. There are silly transhumanists of course, just as silly thinkers can be found in any other school of thought. I take my job here to be distinguishing the various forms of transhumanism held by most transhumanists from the easy but caricatured target created by Ihde (and many other critics).
Critics’ misconceptions are legion, but here I will focus on those found in Ihde’s paper. I declare that:
- Transhumanism is about continual improvement, not perfection or paradise.
- Transhumanism is about improving nature’s mindless “design”, not guaranteeing perfect technological solutions.
- Transhumanism is about morphological freedom, not mechanizing the body.
- Transhumanism is about trying to shape fundamentally better futures, not predicting specific futures.
- Transhumanism is about critical rationalism, not omniscient reason.
From Utopia to Extropia
According to Ihde, “technofantasy hype is the current code for magic.” As an example, he picks on the poor, foolish fellow (Lewis L. Strauss) who fantasized that nuclear fission would provide a limitless supply of energy “too cheap to meter.” Technofantasy is magical thinking because magic produces outcomes that are completely free of trade-offs and of unclear or unintended consequences. Magical technologies simply “make it so.” In these technofantasies, “only the paradisical [sic] results are desired.” It might have been better if Ihde had talked of “divine thinking” rather than “magical thinking” since, in a great many fables and other stories, the use of magic does bring unintended consequences (perhaps most famously in the various genie-in-a-bottle tales). Still, the point is clear. But does it apply to actual transhumanist thinkers? After all, Ihde’s well-worn example is not from a transhumanist, but from an excessively enthusiastic promoter of nuclear fission as an energy source.
It is easy to throw around a term like “technofantasy,” but what exactly is it? What appears to be fantasy, what appears to be a magical technology, depends on the time frame you adopt. Clearly many of today’s technologies would appear magical to people from a few centuries ago. That point was stated memorably in Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”3 Take someone from, let’s say, the 15th century, and expose them to air travel, television, or Google, and they would probably ask what powerful demon or mage created them.
Of course there is such a thing as technofantasy: it’s imaginary technology that ignores the laws of physics as we currently understand them. Any remarkable technology, so long as it is not physically impossible, cannot reasonably be described as magical thinking. Projecting technological developments within the limits of science is projection or “exploratory engineering,” not fantasy—a distinction crucial to separating the genres of “hard science fiction” from “soft” SF and outright fantasy. Seamless and “magical” operation remains a worthy goal for real technologies, however difficult it may be to achieve (as in “transparent computing”). Hence the ring of truth from Gehm’s Corollary to Clarke's Third Law: “Any technology distinguishable from magic is insufficiently advanced.”
Although seamless and reliable technologies deserve a place as a goal for transhumanists, the ideas of perfection and paradise do not. We find those concepts in religious thinking but not in transhumanism. There are one or two possible exceptions: Some Singularitarians may be more prone to a kind of magical thinking in the sense that they see the arrival of greater than human intelligence almost instantly transforming the world beyond recognition. But even they are acutely aware of the dangers of super-intelligent AI. In contrast to Ihde’s straw man characterization, most transhumanists—and certainly those who resonate with the transhumanist philosophy of extropy—do not see utopia or perfection as even a goal, let alone an expected future posthuman world. Rather, transhumanism, like Enlightenment humanism, is a meliorist view. Transhumanists reject all forms of apologism—the view that it is wrong for humans to attempt to alter the conditions of life for the better.
The Idol of Paradise, with its idea of a Platonically perfect, static utopia, is so antithetical to true transhumanism that I coined the term “extropia” to label a conceptual alternative. Transhumanists seek neither utopia nor dystopia. They seek perpetual progress—a never-ending movement toward the ever-distant goal of extropia. One of the Principles of Extropy (the first systematic formulation of transhumanist philosophy, which I wrote two decades ago) is Perpetual Progress. This states that transhumanists “seek continual improvement in ourselves, our cultures, and our environments. We seek to improve ourselves physically, intellectually, and psychologically. We value the perpetual pursuit of knowledge and understanding.” This principle captures the way transhumanists challenge traditional assertions that we should leave human nature fundamentally unchanged in order to conform to “God’s will” or to what is considered “natural.”
Transhumanists go beyond most of our traditional humanist predecessors in proposing fundamental alterations in human nature in pursuit of these improvements. We question traditional, biological, genetic, and intellectual constraints on our progress and possibility. The unique conceptual abilities of our species give us the opportunity to advance nature’s evolution to new peaks. Rather than accepting the undesirable aspects of the human condition, transhumanists of all stripes challenge natural and traditional limitations on our possibilities. We champion the use of science and technology to eradicate constraints on lifespan, intelligence, personal vitality, and freedom.
Or, as I put it in a “Letter to Mother Nature”: “We have decided that it is time to amend the human constitution. We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence. We intend to make you proud of us. Over the coming decades we will pursue a series of changes to our own constitution…”
Ihde’s positioning of transhumanist thinking as paradisiacal is particularly odd and frustrating given the rather heavy emphasis on risks in modern transhumanist writing. Personally, I think that emphasis has gone too far. Reading Ihde and many other transhumanist-unfriendly critics, you get the impression that transhumanists are careening into a fantastically imagined future, worshipping before the idols of Technology and Progress while giving the finger to caution, risk, trade-offs, and side-effects. These critics cannot have actually read much transhumanist writing—certainly not anything written in the last decade. If they had, they would have immediately run into innumerable papers on and discussions of advanced artificial intelligence, of runaway nanotechnology, of “existential risk.” They would have come across risk-focused worries by organizations such as the Foresight Institute and the Council on Responsible Nanotechnology. They would have come across my own Proactionary Principle, with its explicit and thorough consideration of risks, side-effects and remote, unforeseen outcomes, and the need to use the best available methods for making decisions and forecasts about technological outcomes.
Intelligent Design and Intelligent Technology
In what seems to me like something of a tangent to his discussion of magical thinking, Ihde says that “Desire-fantasy, with respect to technologies, harbor [sic] an internal contradiction.” He sees a contradiction in wanting to have a technological enhancement and in having that enhancement become (a part of) us. On one hand, if we define the terms just right, it has to be a contradiction to simultaneously have an enhancement and to be enhanced.
But there is no contradiction in the idea that a technology can develop so that it enhances us and eventually becomes part of us. I explored this idea in detail in my doctoral dissertation, The Diachronic Self: Identity, Continuity, Transformation.4 If we absorb a technology, integrating it into ourselves, we can both have and be the technology in the relevant senses. This is much like taking a vaccine now—it’s an externally devised technology that alters our immune system, but it alters and becomes part of us. Or consider how an externally developed technology like gene therapy or artificial neurons can become integrated into who we are.
Ihde refers to the Idol of Intelligent Design as “a kind of arrogance connected to an overestimation of our own design abilities, also embedded in these discussions.” Again, he provides no referents for “these discussions.” He contrasts this idol with a “human-material or human-technology set of interactions which through experience and over time yield to emergent trajectories with often unexpected results.” This idol is indeed a problem. But Ihde’s discussion implies that it’s a problem among transhumanist thinkers. Given the absence of actual examples, it’s hard to evaluate this implicit claim. His loaded term “arrogance” doesn’t help. When does confidence become arrogance? Were the Wright brothers arrogant in their belief that they could achieve flight?
What really distinguishes transhumanist views of technology is expressed by what I called “Intelligent Technology” in the Philosophy of Extropy. I declared that “Technology is a natural extension and expression of human intellect and will, of creativity, curiosity, and imagination.” I expressed the transhumanist project of encouraging the development of ever more flexible, smart, responsive technology. I spoke for practically all transhumanists in suggesting that “We will co-evolve with the products of our minds, integrating with them, finally integrating our intelligent technology into ourselves in a posthuman synthesis, amplifying our abilities and extending our freedom.” As bold and unapologetic a statement as this is (befitting a transhumanist declaration) it says nothing about expecting perfectly reliable technologies that have no unintended consequences or outcomes that may trouble us.
Along with an overall (practical or active) optimism regarding technology, there’s a strong strain among transhumanists (and especially in the Principles of Extropy) of critical rationalism and spontaneous order. It’s true that older technophiles—especially those who might reasonably be labeled “technocrats”—have sought to impose on society a technologically mediated vision of a better future. Transhumanists have far more often challenged this approach (what Hayek called “constructivist rationalism”), preferring a self-critical rationalism (or pancritical rationalism5). Critical rationalism distinguishes us from Bacon who, like Descartes, believed that the path to genuine knowledge lay in first making a comprehensive survey of what is reliably known rather than merely believed.
Adding to the limits to confidence imposed by critical rationalism as opposed to constructivist rationalism, many transhumanists show a great appreciation for spontaneous order and its attendant unintended consequences, as outlined in my “Order Without Orderers.”6 Outcomes of people using technologies will never be quite as we might expect. Technology-in-use can differ drastically from technology-as-designed. When particle physicists started using Tim Berners-Lee’s hypertextual Web at the start of the 1990s, they had no idea what would quickly develop out of it. But these unexpected outcomes and spontaneous developments don’t mean that we should stop trying to design better technologies and to improve our abilities at foreseeing ways in which they could go wrong.
The Body in Transhumanism
Ihde is right that the cyborg can be an idol. In his discussion of this idol, however, he never explicitly suggests that transhumanists idolize the cyborg. That’s just as well, since transhumanists generally look down on the cyborg concept as primitive and unhelpful. It is the critics who try to force the square peg of transhumanist views of the body into the round hole of the “cyborg.” This most often takes the form of accusing us of seeking to mechanize the human body, or of fearing, hating, or despising our fleshiness, the fallacies of which I discussed in “Beyond the Machine: Technology and Posthuman Freedom.”7 A classic example of this straw man construction can be found in Erik Davis’s Techgnosis. Thankfully, Ihde does not repeat this error.
True transhumanism doesn’t find the biological human body disgusting or frightening. It does find it to be a marvelous yet flawed piece of engineering, as expressed in Primo Posthuman.8 It could hardly be otherwise, given that it was designed by a blind watchmaker, as Richard Dawkins put it. True transhumanism does seek to enable each of us to alter and improve (by our own standards) the human body. It champions what I called morphological freedom in my 1993 paper, “Technological Self-Transformation.”
The Role of Forecasting
“Idolatrous technofantasies” arise again, according to Ihde, “in the same narratives concerning the human, the posthuman and the transhuman.” Which narratives are these? Again, we are left without a referent. The point of his discussion of prediction is to repeat his point about unintended consequences and difficulties in knowing how technologies will turn out. In this section, Ihde does finally mention two people who might be called transhumanists—Hans Moravec and Ray Kurzweil—although Kurzweil definitely resists the label. Ihde calls them “worshippers of the idol of prediction” and asks if they have any credibility. Instead of addressing that, he makes some comments on unintended consequences that might arise from downloading the human mind into a computer.
Both Moravec’s and Kurzweil’s forecasts of specific technological trends have turned out rather well so far. Of course it is easy to find lists of predictions from earlier forecasters that now, with hindsight, sound silly, and Ihde treats us to a few of them. Even there, and even with the assumption that accurate predicting is what matters in the whole transhuman/posthuman discussion, he fails to make a strong case for the futility or foolishness of predicting. He mentions an in-depth survey of predicted technologies from 1890 to 1940, noting that less than one-third of the 1500 predictions worked out well. He adds: “Chiding me for pointing this out in Nature and claiming these are pretty good odds, my response is that 50% odds are normal for a penny toss, and these are less than that!?”
The critics who chided Ihde for this are perfectly justified. He just digs himself deeper into the hole of error by bringing up the coin toss analogy. A coin has two sides, yielding two possibilities, so that the chance of a random prediction coming true is 50%. But technologies can develop in innumerable possible ways, not only because of future discoveries about that technology, but because of interactions with other technologies and because how technologies turn out usually depends heavily on how they are used. This error is especially odd considering how frequently Ihde flogs the dead horse of trade-offs and unintended consequences.
More importantly for these discussions of the transhuman and posthuman, it seems to me that Ihde doesn’t understand futurology or forecasting. The purpose of thinking about the future is not to make impossibly accurate pinpoint predictions. It’s to forecast possible futures so that we can prepare as well as possible for the upsides and downsides—so we can try to anticipate and improve on some of the trade-offs and side-effects and develop resilient responses, policies, and organizations. Rather than throwing up our hands in the face of an uncertain future, transhumanists and other futurists seek to better understand our options.
Ultimate skepticism concerning forecasting is not tenable; otherwise no one would ever venture to cross the road or save any money. Should we look at the uncertainty inherent in the future as an impenetrable black box? No. We need to distinguish different levels of uncertainty and then use the best available tools, while developing better ones, to make sense of possible outcomes. At the lowest level of uncertainty, there is only one possible outcome. In those situations, businesses use tools such as net present value.
Raise the level of uncertainty a bit and you’re in a situation where there are several distinct possible futures, one of which will occur. In these situations, you can make good use of tools such as scenario planning, game theory, and decision-tree real-options valuation. At a higher level of uncertainty, we face a range of futures and must use additional tools such as system dynamics models. When uncertainty is at its highest and the range of possible outcomes is unbounded, we can only look to analogies and reference cases and try to devise resilient strategies and designs.9
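The first two levels of uncertainty described above can be sketched numerically: a single projected future valued with net present value, and a handful of distinct possible futures weighed by probability. The functions and figures below are illustrative assumptions of my own, not examples drawn from Courtney or the text:

```python
def npv(rate, cashflows):
    """Net present value: discount a single projected stream of
    cashflows (one per year, starting at year 0) at a fixed rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def expected_value(scenarios):
    """Discrete-futures reasoning: weigh each distinct possible
    outcome's payoff by its assumed probability."""
    return sum(p * payoff for p, payoff in scenarios)

# Lowest uncertainty: one assumed future, a $1000 outlay returning
# $400 a year for three years, discounted at 8%.
project = npv(0.08, [-1000, 400, 400, 400])

# Several distinct futures: a hypothetical 60% chance of success
# (+500) against a 40% chance of failure (-200).
bet = expected_value([(0.6, 500), (0.4, -200)])
```

The point of the sketch is the structural difference: `npv` presumes we know the one future that will occur, while `expected_value` only presumes we can enumerate the candidates and their odds. Courtney's higher levels of uncertainty, where even the candidates cannot be enumerated, resist this kind of calculation entirely, which is why analogies and resilient strategies take over.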
Transhumanists are far from being dummies when it comes to looking ahead. But it’s true that many transhumanists are far from perfect in their approach to forecasting and foresight. My biggest complaint with many of my colleagues is that their vision is overly technocentric. Rather than “The Idol of Prediction,” a better critical construct would have been “The Idol of Technocentrism.” Not surprisingly, many transhumanists have a heavily technical background, especially in the computer and information sciences and the physical sciences. With my own background in economics, politics, philosophy, and psychology, I see a paucity of the social sciences among even sophisticated seers such as Ray Kurzweil, a gap I debated with him in 2002.10
None of Ihde’s Idols apply to true transhumanism. But they do add up to a simple message: People’s actions have unintended consequences, people are clueless about possible futures, and it is arrogant and hubristic to pursue fundamental improvements to the human condition. This ultimately pessimistic and existentially conservative message does indeed conflict directly with true transhumanism. Transhumanists do in fact understand unintended consequences and limits to our understanding, but they continue to strive for fundamental advances. I am wary of all “isms,” but these kinds of critiques of transhumanism spur me to renew my identification with that label even as I engage more deeply in cleaning up such misconceptions.
1. Mayr, 1963, p.12.
2. Bacon, 1620.
3. Clarke, 1973.
4. More, 1995.
5. More, 1994b.
6. More, 1991.
7. More, 1997.
8. Vita-More. 1997, 2004.
9. Courtney, 2001.
10. Kurzweil and More, 2002.
Bacon, Francis, 1620, Novum Organum.
Clarke, Arthur C., “Hazards of Prophecy: The Failure of Imagination” in Profiles of the Future (revised edition, 1973).
Courtney, Hugh, 2001, 20/20 Foresight: Crafting Strategy in an Uncertain World. Harvard Business School Press.
Davis, Erik, 2005, Techgnosis: Myth, Magic & Mysticism in the Age of Information. Five Star.
Ihde, Don, 2008, “Of Which Human Are We Post?” The Global Spiral.
Kurzweil, Ray, 2006, The Singularity is Near: When Humans Transcend Biology. Penguin.
Kurzweil, Ray and Max More, 2002, “Max More and Ray Kurzweil on the Singularity.” KurzweilAI.net.
Mayr, Ernst, 1963, 1970, Population, Species, and Evolution. Harvard University Press, Cambridge, Massachusetts.
More, Max, 1990, 1992, 1993, 1998, “Principles of Extropy.”
—— 1990, 1994, 1996, “Transhumanism: Toward a Futurist Philosophy.” Extropy #6.
—— 1991, “Order Without Orderers”, Extropy #7.
—— 1993, “Technological Self-Transformation: Expanding Personal Extropy.” Extropy #10, vol. 4, no. 2, pp. 15-24.
—— 1994a, “On Becoming Posthuman.” Free Inquiry.
—— 1994b, “Pancritical Rationalism: An Extropic Metacontext for Memetic Progress.”
—— 1995, The Diachronic Self: Identity, Continuity, Transformation.
—— 1997, “Beyond the Machine: Technology and Posthuman Freedom.” Paper in proceedings of Ars Electronica. (FleshFactor: Informationmaschine Mensch), Ars Electronica Center, Springer, Wien, New York, 1997.
—— 1998, “Virtue and Virtuality” (Von erweiterten Sinnen zu Erfahrungsmaschinen) in Der Sinn der Sinne (Kunst und Austellungshalle der Bundesrepublik Deutschland, Gottingen.)
—— 1999, “Letter to Mother Nature” (part of “The Ultrahuman Revolution: Amendments to the Human Constitution.”) Biotech Futures Conference, U.C. Berkeley.
—— 2004a, The Proactionary Principle.
—— 2004b, “Superlongevity without Overpopulation”, chapter in The Scientific Conquest of Death. (Immortality Institute.)
—— 2005, “How to Choose a Forecasting Method”, ManyWorlds.
Vita-More, 1997, “Primo Posthuman Future Body Prototype.” http://www.natasha.cc/primo.htm and http://www.kurzweilai.net/meme/frame.html?main=/articles/art0405.html
Vita-More, 2004, “The New [human] Genre — Primo Posthuman.” Delivered at Ciber@RT Conference, Bilbao, Spain, April 2004.