Read the introduction to Holding On to Reality: The Nature of Information at the Turn of the Millennium.

Read the prologue to How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics.

An interview/dialogue with Albert Borgmann and N. Katherine Hayles on humans and machines


Question: This email message, like most of the email found in the inbox of your computer's email program, was written and sent by a person, and not by some disembodied intelligent machine. However, these days, it's possible to imagine that this message was machine-generated. In your books, Holding On to Reality and How We Became Posthuman, you both discuss how we got to this point. Could you summarize briefly, as a place to begin?

Albert Borgmann: Your scenario shows that today we are dealing with a new kind of information we may call technological information. It was preceded first by natural information—tracks, smoke, fire rings. Such information (it still is all about us) can leave us uncertain as to who the person was that left tracks or built a fire in the distance. Natural information was followed by cultural information, best represented by writing—a story, for instance. Such a story may give us the picture of a fictional person. But here we are actively engaged in bringing the person to life and hardly confused about whether or not there is an actual person.

Technological information is so much more massive than natural information and, unlike cultural information, takes on a life of its own, so that we may be deceived or uncertain about whether a real person has addressed us from within cyberspace. It would take artificial intelligence and much more advanced virtual reality to give that uncertainty real oomph. Neither is feasible as far as I am concerned. But interestingly, people, when entering cyberspace, sometimes reduce themselves to the shallow, disjointed, and cliché-ridden persona that can be mimicked by information technology and so become co-conspirators in their own confusion about who is who. If we keep our mooring in reality and conduct ourselves thoughtfully in cyberspace, we will not fall prey to deception.

N. Katherine Hayles: In How We Became Posthuman, I tell three interrelated stories: how information lost its body, that is, how it was conceptualized as an entity that can flow between substrates but is not identical with its material bases; how the cyborg emerged as a technological and cultural construction in the post-World War II period; and the transformation from the human to the posthuman. All three stories are relevant to seeing an email message and not knowing if it was human or machine-generated.

For now, however, let me concentrate on the transformation from the human to the posthuman. Recent research programs in computer science, cognitive sciences, artificial life and artificial intelligence have argued for a view of the human so different from that which emerged from the Enlightenment that it can appropriately be called "posthuman." Whereas the human has traditionally been associated with consciousness, rationality, free will, autonomous agency, and the right of the subject to possess himself, the posthuman sees human behavior as the result of a number of autonomous agents running their programs more or less independently of one another. Complex behavior in this view is an emergent property that arises when these programs, each fairly simple in itself, begin reacting with one another. Consciousness, long regarded as the seat of identity, in this model is relegated to an "epiphenomenon." Agency still exists, but it is distributed and largely unconscious, or at least a-conscious.
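As a toy illustration (not drawn from either book), a few lines of Python convey the flavor of this agent-based picture: each agent follows only a simple local rule, yet a coordinated global pattern emerges from their interactions with no central controller, much as the posthuman model treats consciousness as an epiphenomenon riding on distributed, a-conscious processes.

```python
# Toy sketch of distributed agency: many simple agents, each running a
# trivial local rule, settle into a global pattern no single agent "intends."
import random

class Agent:
    """A minimal agent: it knows only its own state and one update rule."""
    def __init__(self):
        self.state = random.choice([0, 1])

    def update(self, neighbors):
        # Local rule: drift toward the majority state of a few randomly
        # encountered neighbors.
        self.state = 1 if sum(n.state for n in neighbors) > len(neighbors) / 2 else 0

def run(num_agents=100, steps=50):
    agents = [Agent() for _ in range(num_agents)]
    for _ in range(steps):
        for agent in agents:
            agent.update(random.sample(agents, 5))
    # Count how many agents ended up in state 1.
    return sum(a.state for a in agents)

if __name__ == "__main__":
    # The population usually converges toward consensus (near 0 or near 100),
    # an emergent outcome of purely local interactions.
    print(run())
```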

The effect of these changed views is to envision the human in terms that make it much more like an intelligent machine, which allows the human to be more easily spliced into distributed cognitive systems where part of the intelligence resides in the human, part in a variety of intelligent machines, and part in the interfaces through which they interact. At the same time, intelligent agent programs are being developed using "emotional computing" techniques that allow these artificial systems to respond to unexpected situations in ways that more closely resemble human responses.

The upshot, then, is that both artificial and human intelligences are being reconceptualized in ways that facilitate their interactions with one another. Although I have written this summary, it could easily have been produced by a system such as "Amalthaea," the intelligent agent system being developed at the MIT Media Lab by Pattie Maes and Alexandros Moukas. Are you sure I did write this message?

Q: It sounds like you two disagree about the extent to which artificial intelligence could mimic human intelligence. But you both seem to be saying that's not the central issue anyway. The real issue is not whether a machine will be built that can replicate human behavior, but whether humans will begin (or continue) to think of themselves as machines. Is that right?

Borgmann: One thing Katherine and I agree on is that humans are essentially embodied and therefore cannot escape their bodies no matter how or what they think of themselves. Of course being mistaken about one's bodily existence can have strong cultural and moral consequences. But the crucial error these days is not to think of oneself as a machine but to shift one's moral center of gravity into a machine of sorts—cyberspace.

By cyberspace I mean the realm of electronically and digitally mediated information (soon to include television). Some regions of cyberspace are indisputably sober and beneficial and require highly skilled engagement, viz., the areas where computers are used for research and design. In the realm of leisure and consumption, however, cyberspace will very much resemble television, except that cyberspace is much more diverse and allows for (increasingly easy) interaction. The temptation to entrust one's curiosity and desires primarily to cyberspace will be even greater than it is now. To do so is not to commit a cognitive error but to become an accomplice in the diminishment of one's person and one's world. Just as you cannot escape your body, you cannot really and finally escape reality. But you can degrade to utilities what should be celebrated as the splendor of tangible presence.

Hayles: Humans thinking of themselves as machines has a long history, dating back to the classical era. Since World War II and the development of intelligent machines, this tendency has greatly increased, as the work of Sherry Turkle, among others, has shown. Think of all the everyday expressions that now equate human thought with computers: "That doesn't compute for me"; "my memory is overloaded"; and my favorite, drawn from Turkle's account of the world of hackers: "Reality is not my best window."

There are important limitations to the human-computer equation. It is by no means clear that human thought does operate in the same way as computer calculation, and computers can never experience emotions in anything like the same way that humans do. In my view, the easy equation between humans and computers needs to be challenged, especially when it leads to important social and cultural consequences. My vision of how computers and humans can enter into productive partnerships, however, is rather different from Albert's. I don't think the idea that humans will "live" in cyberspace will last very long. It's clear to most people, I think, that they have real lives in the real world, and that the illusion one can live in virtual reality is mostly a fantasy of technofreaks and science fiction writers.

What will happen, and is already happening, is the development of distributed cognitive environments in which humans and computers interact in hundreds of ways daily, often unobtrusively. Think of how often you use computers now, often without knowing it. When you heat your coffee in the microwave, the settings are controlled by a computer chip. When you glance at your watch to see if you have time to drink the coffee, you are probably relying on the computer chip that makes your watch intelligent. When you go out and start your car to drive off to work, the ignition system and probably many other systems as well rely on computer chips. More computers control the recognition system that makes the electronic doors swing open as you approach. As you run for the elevator, sensors connected with yet more chips make the doors spring back as you touch them. And on and on.

Computers aren't just in boxes anymore; they have moved out into the world and become distributed throughout the environment. "Eversion," my colleague Marcos Novak has called this phenomenon, in contrast to the "immersion" of the much more limited and localized virtual reality environments. The effect of moving in these distributed cognitive environments is often to enhance human functioning, as the ordinary examples above illustrate. Of course, there is also a downside. As cognition becomes distributed, humans no longer control all the parameters, and in some situations, they don't control the crucial ones, for example in automated weapon systems.

Should we therefore hit the panic button and start building big bonfires into which we will toss all the computers? One way to avoid looking at this situation apocalyptically (which may be titillating but in my view always risks serious distortions) is to think about distributed cognition in historical terms, as something that began happening as soon as the earliest humans began developing technology. External memory storage, for example, isn't limited to computers; it began as early as humans drawing animals and figures on cave walls to convey information about hunting and ritual activities. Putting contemporary developments in these kinds of contexts will help us, in my view, get away from scare scenarios and begin to think in more sophisticated ways about how human-computer interactions can be fruitful and richly articulated.

Q: You can't escape your body and no one really can live in cyberspace. But can't the possibilities for disembodied communication and exploration presented by cyberspace actually be liberating, for instance, to those terrified of face-to-face contact or negatively objectified by a "real" culture that idealizes the young, the thin, etc.? Being in the tangible presence of reality is not always so splendid. Those who speak positively of cyberspace say the existence of that network empowers individuals. Is that illusory, misguided?

On the other hand, as Katherine points out, there might be some very real dangers lurking in the fantastically convenient world of computer "eversion." Consider 2001's HAL 9000, for instance, a computer programmed perhaps a bit too closely after the human cognitive model. Assuming you don't want to end up there, where do you draw the line? HMOs are considering programming their computers to make medical diagnoses and recommend treatments based on probability distributions. That's a timesaver, to be sure, but has it crossed the line between calculation and moral judgment? Can even the finest "emotional computing" techniques ever transform a computer into an independent moral actor?

Borgmann: The claim that cyberspace liberates people from the accidents of gender, race, class, and bodily appearance is often made by advocates of electronically distributed education. But to conceal a problem is not to solve it. We have to learn to respect and encourage people as they actually exist. The "liberated" students or citizens of cyberspace, moreover, have to bleach out their presence to that of a person who is without gender, social background, and racial heritage. Otherwise they betray what is supposed to remain hidden. And it turns out that there are loudmouths and bullies in cyberspace as often as in reality. The fuzzed identities of cyberspace, moreover, lend themselves to their own kind of mischief.

The insertion of microchips in the appliances and gadgets of everyday life is for the most part the continuation of another kind of liberation, from the claims of things rather than persons. It is a disburdenment that is at the center of the technological culture. We are concerned, as we should be, that some of the disburdening devices are not going to work correctly or safely, and we are particularly worried about automated systems and, again, properly so. Information technology is much more fallible and fragile than most people realize.

But there is an issue that should concern us precisely when automated devices work well. The instances Katherine mentions present relatively trivial benefits and marginal improvements over their less sophisticated predecessors. But when such sophistication reaches a critical mass, as it does in a so-called smart house where every last and least domestic chore and burden is anticipated and taken over by an automatic device, inhabitants become the passive content of their sophisticated container. The vision of such an environment often carries the implied promise that people will use their disburdened condition creatively and inventively. But assuming that in the smart house the blandishments of cyberspace will present themselves with even greater diversity and glamour, most people will likely do what they now do in their relatively engaging homes with their relatively primitive access to cyberspace, viz., television—they will immerse themselves in the warm bath of electronic entertainment.

There are, thank God, indications of a hunger for reality and of a growing desire to seek the engagement of real people and real things. Whether one supports this resolution of the ambiguities of cyberspace or not, one should certainly agree with Katherine that widening and deepening the context of the notions that keep us enthralled (something she does so well in her book) will give us the leeway to consider our predicament more resourcefully.

Hayles: More than two decades ago, Joseph Weizenbaum in "Computer Power and Human Reason" made the argument that judgment should be a uniquely human capacity—that computers can only calculate, not engage in moral reasoning. However, new programming techniques based on recursive feedback, parallel processing, and neural nets are making it possible for computers to engage in more sophisticated decision-making than in Weizenbaum's day. It isn't so clear now that computers can't engage in "moral reasoning." The line, it seems to me, can't be drawn in an a priori way, which is what Weizenbaum was proposing. Instead, it seems to me more a pragmatic or practical question: what can computers do, and how reliably can they do it? Bear in mind that humans are not perfect decision-makers, either, so the comparison ought not to be between perfection and computers, but between computers and normal human judgment, with all of its fallibility.

There are already many instances in which humans depend for their lives on computer decisions. Consider the X-29 fighter jet, which has forward-swept wings and is aerodynamically unstable—so unstable, in fact, it cannot be successfully flown by a human alone. There are three computers on board all running the same software, and they "vote" on what actions to take. If two of the three agree, the plane is flown according to that decision. (The triple redundancy is to minimize the possibility of a fatal computer malfunction.) This is an example of how agency and decision-making have become a distributed function involving both human and non-human actors. I think we will see more and more situations like this in the decades to come. Whatever line one draws, it will necessarily change as computers continue to develop and evolve.
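To make the voting mechanism concrete, here is a minimal sketch of 2-of-3 majority voting in Python (an illustration only, not the X-29's actual flight software; the command strings are invented):

```python
# Minimal sketch of triple redundancy: three controllers compute a command,
# and the action taken is whichever command at least two of them agree on.
from collections import Counter

def majority_vote(commands):
    """Return the command issued by at least two of the three controllers.

    If all three disagree there is no majority; a real system would drop
    into some safety fallback -- here we simply return None to flag it.
    """
    command, count = Counter(commands).most_common(1)[0]
    return command if count >= 2 else None

# Example: the second controller has glitched, but the voted output is
# still the correct command.
print(majority_vote(["pitch_up", "pitch_down", "pitch_up"]))  # -> pitch_up
```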

Should we regard this with alarm? More properly, I think, with caution. I can imagine a similar argument made when cavemen tamed fire—some arguing that fire is a dangerous force that can easily get out of control and destroy those who would make use of it. Well, yes, this does happen occasionally, but who would now think of life without "domesticated" fire? Technology always implies interdependence, and in many cases, interdependence so woven into the fabric of society that it cannot be renounced without catastrophic loss. So now with computers.

Q: Albert speaks of "a hunger for reality and of a growing desire to seek the engagement of real people and real things." What are some examples of that hunger and desire? Does this represent a step beyond the posthuman—to be conscious both of the interdependence of human life with machines and the differences between humans and machines? Perhaps, to be engaged with machines but not enthralled?

Borgmann: Getting a reading of contemporary culture is a fine and difficult art. You have to begin with observations and hunches. You see a park being recovered from neglect and danger, a theater being restored to its former glory, old apartment buildings being rehabilitated. You see people returning to the streets, entertained by street musicians at the corner or an opera singer on a stage in the park. You see people working for the preservation of a mountain range or a stand of trees for no other reason than that these things should be celebrated rather than turned into something that is useful and has a market price.

It is the luminous and consoling reality, of course, that people try to retrieve. Spending sleepless nights at the bedside of children mortally sick with diphtheria or scarlet fever was very real once, but it is not a reality we want to have back. (How to define reality more precisely is a complex issue to which Katherine has made notable contributions.) On occasion your intuitions about the growing thirst for reality are unexpectedly confirmed by asides of perceptive authors who in writing about something else entirely cannot help noticing how insubstantial and unreal our world has become. I am thinking of writers like Joe Klein and Sven Birkerts.

At length, however, social and cultural theorists have to test and temper their observations against the findings of social scientists. The splendor of reality and people's response to it are not exactly social science categories. The Census Bureau in fact often aggregates categories in a way that makes a distinction between engagement in reality and indulgence in consumption impossible. But the Census does provide evidence that people want out of their technological and mediated cocoons. So do the writings of Juliet B. Schor, Robert Wuthnow, John P. Robinson and Geoffrey Godbey, among others. The revival of urbanism and the vigor of environmentalism are the best indications that people are seeking the engagement of real persons and the commanding presence of reality.

Hayles: In my view, machines are "real things," so I don't see an engagement with machines as in any way antithetical to contemporary reality. I do think it is important not to elide the very real differences that exist between humans and machines, especially the different embodiments that humans and machines have. Certainly I think that Albert is correct in insisting that virtual reality will never displace the three-dimensional world in which our perceptual systems evolved; the richness, diversity, and spontaneity of this immensely complex environment make even the most sophisticated computer simulation look like a stick world by comparison. Where I differ, perhaps, is in seeing the situation not as a dichotomy between the real and the virtual but rather as a space in which the natural and the artificial are increasingly entwined. I foresee a proliferation of what Bruno Latour calls "quasi-objects," hybrid objects produced by a collaboration between nature and culture—genetically engineered plants and animals, humans who have had gene therapy, humans with cybernetic implants and explants, intelligent agent systems with evolutionary programs who have evolved to the point where they can converse in a convincing fashion with humans, and so forth. But then, this is nothing so very new, except for the techniques involved, for humans have been producing hybridized environments for a very long time. Our challenge now, it seems to me, is to think carefully about how these technologies can be used to enhance human well-being and the fullness and richness of human-being-in-the-world, which can never be reduced merely to information processing or information machines.



Copyright notice: ©1999 The University of Chicago. All rights reserved. This text may be used and shared in accordance with the fair-use provisions of U.S. copyright law, and it may be archived and redistributed in electronic form, provided that this entire notice, including copyright information, is carried and provided that the University of Chicago Press is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires the consent of the University of Chicago Press.


Albert Borgmann
Holding On to Reality: The Nature of Information at the Turn of the Millennium
©1999, 288 pages, 18 figures
Cloth $22.00 ISBN: 0-226-06625-8
Paper $14.00 ISBN: 0-226-06623-1

N. Katherine Hayles
How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics
©1999, 338 pages, 5 line drawings
Cloth $49.00 ISBN: 0-226-32145-2
Paper $19.00 ISBN: 0-226-32146-0

For information on purchasing these books—from bookstores or here online—please go to the webpages for Holding On to Reality or How We Became Posthuman.

