Why the Future Doesn’t Need Us

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

Ray and I were both speakers at George Gilder’s Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.

I had missed Ray’s talk and the subsequent panel that Ray and John had been on, and they now picked right up where they’d left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn’t happen, because the robots couldn’t be conscious.

While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray’s proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

It’s easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw — one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite — just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.1

In the book, you don’t discover until you turn the page that the author of this passage is Theodore Kaczynski — the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber’s next target.

Kaczynski’s actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.

Kaczynski’s dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy’s law — “Anything that can go wrong, will.” (Actually, this is Finagle’s law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.2

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

I started showing friends the Kaczynski quote from The Age of Spiritual Machines; I would hand them Kurzweil’s book, let them read the quote, and then watch their reaction as they discovered who had written it. At around the same time, I found Hans Moravec’s book Robot: Mere Machine to Transcendent Mind. Moravec is one of the leaders in robotics research, and was a founder of the world’s largest robotics research program, at Carnegie Mellon University. Robot gave me more material to try out on my friends — material surprisingly supportive of Kaczynski’s argument. For example:

The Short Run (Early 2000s)

Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.

In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.

There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while.

A textbook dystopia — and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be “ensuring continued cooperation from the robot industries” by passing laws decreeing that they be “nice,”3 and to describe how seriously dangerous a human can be “once transformed into an unbounded superintelligent robot.” Moravec’s view is that the robots will eventually succeed us — that humans clearly face extinction.

I decided it was time to talk to my friend Danny Hillis. Danny became famous as the cofounder of Thinking Machines Corporation, which built a very powerful parallel supercomputer. Despite my current job title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist, and I respect Danny’s knowledge of the information and physical sciences more than that of any other single person I know. Danny is also a highly regarded futurist who thinks long-term — four years ago he started the Long Now Foundation, which is building a clock designed to last 10,000 years, in an attempt to draw attention to the pitifully short attention span of our society. (See “Test of Time.”)

So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. I went through my now-familiar routine, trotting out the ideas and passages that I found so disturbing. Danny’s answer — directed specifically at Kurzweil’s scenario of humans merging with robots — came swiftly, and quite surprised me. He said, simply, that the changes would come gradually, and that we would get used to them.

But I guess I wasn’t totally surprised. I had seen a quote from Danny in Kurzweil’s book in which he said, “I’m as fond of my body as anyone, but if I can be 200 with a body of silicon, I’ll take it.” It seemed that he was at peace with this process and its attendant risks, while I was not.

While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read almost 20 years ago — The White Plague, by Frank Herbert — in which a molecular biologist is driven insane by the senseless murder of his family. To seek revenge he constructs and disseminates a new and highly contagious plague that kills widely but selectively. (We’re lucky Kaczynski was a mathematician, not a molecular biologist.) I was also reminded of the Borg of Star Trek, a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn’t I been more concerned about such robotic dystopias earlier? Why weren’t other people more concerned about these nightmarish scenarios?

Part of the answer certainly lies in our attitude toward the new — in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies — robotics, genetic engineering, and nanotechnology — pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once — but one bot can become many, and quickly get out of control. The arithmetic of that amplification is worth making concrete; here is a minimal sketch of my own (an illustration, not anything from Kurzweil or Moravec), assuming the simplest possible rule that each replicator makes one copy of itself per cycle:
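
```python
# A minimal sketch of exponential self-replication (illustrative only).
# Assumption: each replicator makes exactly one copy of itself per cycle.

def replicator_population(cycles: int) -> int:
    """Population after `cycles` generations, starting from one replicator."""
    population = 1
    for _ in range(cycles):
        population *= 2  # every existing unit produces one copy
    return population

for cycles in (10, 20, 30, 40):
    print(f"after {cycles} cycles: {replicator_population(cycles):,}")
# after 40 cycles a single replicator has become 1,099,511,627,776 copies --
# roughly a trillion, which is why replication, unlike a bomb, compounds.
```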

Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.

Each of these technologies also offers untold promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger.

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) — nuclear, biological, and chemical (NBC) — were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare — indeed, effectively unavailable — raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies — genetics, nanotechnology, and robotics (GNR) — are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

My life has been driven by a deep need to ask questions and find answers. When I was 3, I was already reading, so my father took me to the elementary school, where I sat on the principal’s lap and read him a story. I started school early, later skipped a grade, and escaped into books — I was incredibly motivated to learn. I asked lots of questions, often driving adults to distraction.

As a teenager I was very interested in science and technology. I wanted to be a ham radio operator but didn’t have the money to buy the equipment. Ham radio was the Internet of its time: very addictive, and quite solitary. Money issues aside, my mother put her foot down — I was not to be a ham; I was antisocial enough already.

I may not have had many close friends, but I was awash in ideas. By high school, I had discovered the great science fiction writers. I remember especially Heinlein’s Have Spacesuit Will Travel and Asimov’s I, Robot, with its Three Laws of Robotics. I was enchanted by the descriptions of space travel, and wanted to have a telescope to look at the stars; since I had no money to buy or make one, I checked books on telescope-making out of the library and read about making them instead. I soared in my imagination.

Thursday nights my parents went bowling, and we kids stayed home alone. It was the night of Gene Roddenberry’s original Star Trek, and the program made a big impression on me. I came to accept its notion that humans had a future in space, Western-style, with big heroes and adventures. Roddenberry’s vision of the centuries to come was one with strong moral values, embodied in codes like the Prime Directive: to not interfere in the development of less technologically advanced civilizations. This had an incredible appeal to me; ethical humans, not robots, dominated this future, and I took Roddenberry’s dream as part of my own.

I excelled in mathematics in high school, and when I went to the University of Michigan as an undergraduate engineering student I took the advanced curriculum of the mathematics majors. Solving math problems was an exciting challenge, but when I discovered computers I found something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive.

I was lucky enough to get a job programming early supercomputers and discovered the amazing power of large machines to numerically simulate advanced designs. When I went to graduate school at UC Berkeley in the mid-1970s, I started staying up late, often all night, inventing new worlds inside the machines. Solving problems. Writing the code that argued so strongly to be written.

In The Agony and the Ecstasy, Irving Stone’s biographical novel of Michelangelo, Stone described vividly how Michelangelo released the statues from the stone, “breaking the marble spell,” carving from the images in his mind.4 In my most ecstatic moments, the software in the computer emerged in the same way. Once I had imagined it in my mind I felt that it was already there in the machine, waiting to be released. Staying up all night seemed a small price to pay to free it — to give the ideas concrete form.

After a few years at Berkeley I started to send out some of the software I had written — an instructional Pascal system, Unix utilities, and a text editor called vi (which is still, to my surprise, widely used more than 20 years later) — to others who had similar small PDP-11 and VAX minicomputers. These adventures in software eventually turned into the Berkeley version of the Unix operating system, which became a personal “success disaster” — so many people wanted it that I never finished my PhD. Instead I got a job working for Darpa putting Berkeley Unix on the Internet and fixing it to be reliable and to run large research applications well. This was all great fun and very rewarding. And, frankly, I saw no robots here, or anywhere near.

Still, by the early 1980s, I was drowning. The Unix releases were very successful, and my little project of one soon had money and some staff, but the problem at Berkeley was always office space rather than money — there wasn’t room for the help the project needed, so when the other founders of Sun Microsystems showed up I jumped at the chance to join them. At Sun, the long hours continued into the early days of workstations and personal computers, and I have enjoyed participating in the creation of advanced microprocessor technologies and Internet technologies such as Java and Jini.

From all this, I trust it is clear that I am not a Luddite. I have always, rather, had a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress. The Industrial Revolution has immeasurably improved everyone’s life over the last couple hundred years, and I always expected my career to involve the building of worthwhile solutions to real problems, one problem at a time.

I have not been disappointed. My work has had more impact than I had ever hoped for and has been more widely used than I could have reasonably expected. I have spent the last 20 years still trying to figure out how to make computers as reliable as I want them to be (they are not nearly there yet) and how to make them simple to use (a goal that has met with even less relative success). Despite some progress, the problems that remain seem even more daunting.

But while I was aware of the moral dilemmas surrounding technology’s consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.

Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.

I have long realized that the big advances in information technology come not from the work of computer scientists, computer architects, or electrical engineers, but from that of physical scientists. The physicists Stephen Wolfram and Brosl Hasslacher introduced me, in the early 1980s, to chaos theory and nonlinear systems. In the 1990s, I learned about complex systems from conversations with Danny Hillis, the biologist Stuart Kauffman, the Nobel-laureate physicist Murray Gell-Mann, and others. Most recently, Hasslacher and the electrical engineer and device physicist Mark Reed have been giving me insight into the incredible possibilities of molecular electronics.

In my own work, as codesigner of three microprocessor architectures — SPARC, picoJava, and MAJC — and as the designer of several implementations thereof, I’ve been afforded a deep and firsthand acquaintance with Moore’s law. For decades, Moore’s law has correctly predicted the exponential rate of improvement of semiconductor technology. Until last year I believed that the rate of advances predicted by Moore’s law might continue only until roughly 2010, when some physical limits would begin to be reached. It was not obvious to me that a new technology would arrive in time to keep performance advancing smoothly.

But because of the recent rapid and radical progress in molecular electronics — where individual atoms and molecules replace lithographically drawn transistors — and related nanoscale technologies, we should be able to meet or exceed the Moore’s law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today — sufficient to implement the dreams of Kurzweil and Moravec. The million-fold figure is just the arithmetic of steady exponential growth; a back-of-the-envelope check, assuming the commonly cited doubling period of roughly 18 months (my assumption, not a number given above), is sketched below.
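
```python
# Back-of-the-envelope Moore's-law extrapolation (illustrative only).
# Assumption: performance doubles every 18 months (1.5 years).

DOUBLING_PERIOD_YEARS = 1.5

def improvement_factor(years: float) -> float:
    """Multiplicative performance gain after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# 30 years at one doubling per 1.5 years is 20 doublings: 2**20 = 1,048,576,
# consistent with "a million times as powerful" by 2030.
print(f"{improvement_factor(30):,.0f}")  # 1,048,576
```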

As this enormous computing power is combined with the manipulative advances of the physical sciences and the new, deep understandings in genetics, enormous transformative power is being unleashed. These combinations open up the opportunity to completely redesign the world, for better or worse: The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor.

In designing software and microprocessors, I have never had the feeling that I was designing an intelligent machine. The software and hardware are so fragile and the capabilities of the machine to “think” so clearly absent that, even as a possibility, this has always seemed very far in the future.

But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable. Having struggled my entire career to build reliable software systems, it seems to me more than likely that this future will not work out as well as some people may imagine. My personal experience suggests we tend to overestimate our design abilities.

Given the incredible power of these new technologies, shouldn’t we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?

The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden. Yet in his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” As we have seen, Moravec agrees, believing we may well not survive the encounter with the superior robot species.

How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species — to an intelligent robot that can make evolved copies of itself.

A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses; it is this process that Danny Hillis thinks we will gradually get used to and that Ray Kurzweil elegantly details in The Age of Spiritual Machines. (We are beginning to see intimations of this in the implantation of computer devices into the human body, as illustrated on the cover of Wired 8.02.)

But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.

Genetic engineering promises to revolutionize agriculture by increasing crop yields while reducing the use of pesticides; to create tens of thousands of novel species of bacteria, plants, viruses, and animals; to replace reproduction, or supplement it, with cloning; to create cures for many diseases, increasing our life span and our quality of life; and much, much more. We now know with certainty that these profound changes in the biological sciences are imminent and will challenge all our notions of what life is.

Technologies such as human cloning have in particular raised our awareness of the profound ethical and moral issues we face. If, for example, we were to reengineer ourselves into several separate and unequal species using the power of genetic engineering, then we would threaten the notion of equality that is the very cornerstone of our democracy.

Given the incredible power of genetic engineering, it’s no surprise that there are significant safety issues in its use. My friend Amory Lovins recently cowrote, along with Hunter Lovins, an editorial that provides an ecological view of some of these dangers. Among their concerns: that “the new botany aligns the development of plants with their economic, not evolutionary, success.” (See “A Tale of Two Botanies,” page 247.) Amory’s long career has been focused on energy and resource efficiency by taking a whole-system view of human-made systems; such a whole-system view often finds simple, smart solutions to otherwise seemingly difficult problems, and is usefully applied here as well.

After reading the Lovins’ editorial, I saw an op-ed by Gregg Easterbrook in The New York Times (November 19, 1999) about genetically engineered crops, under the headline: “Food for the Future: Someday, rice will have built-in vitamin A. Unless the Luddites win.”

Are Amory and Hunter Lovins Luddites? Certainly not. I believe we all would agree that golden rice, with its built-in vitamin A, is probably a good thing, if developed with proper care and respect for the likely dangers in moving genes across species boundaries.

Awareness of the dangers inherent in genetic engineering is beginning to grow, as reflected in the Lovins’ editorial. The general public is aware of, and uneasy about, genetically modified foods, and seems to be rejecting the notion that such foods should be permitted to be unlabeled.

But genetic engineering technology is already very far along. As the Lovins note, the USDA has already approved about 50 genetically engineered crops for unlimited release; more than half of the world’s soybeans and a third of its corn now contain genes spliced in from other forms of life.

While there are many important issues here, my own major concern with genetic engineering is narrower: that it gives the power — whether militarily, accidentally, or in a deliberate terrorist act — to create a White Plague.

The many wonders of nanotechnology were first imagined by the Nobel-laureate physicist Richard Feynman in a speech he gave in 1959, subsequently published under the title “There’s Plenty of Room at the Bottom.” The book that made a big impression on me, in the mid-’80s, was Eric Drexler’s Engines of Creation, in which he described beautifully how manipulation of matter at the atomic level could create a utopian future of abundance, where just about everything could be made cheaply, and almost any imaginable disease or physical problem could be solved using nanotechnology and artificial intelligences.

A subsequent book, Unbounding the Future: The Nanotechnology Revolution, which Drexler cowrote, imagines some of the changes that might take place in a world where we had molecular-level “assemblers.” Assemblers could make possible incredibly low-cost solar power, cures for cancer and the common cold by augmentation of the human immune system, essentially complete cleanup of the environment, incredibly inexpensive pocket supercomputers — in fact, any product would be manufacturable by assemblers at a cost no greater than that of wood — spaceflight more accessible than transoceanic travel today, and restoration of extinct species.

I remember feeling good about nanotechnology after reading Engines of Creation. As a technologist, it gave me a sense of calm — that is, nanotechnology showed us that incredible progress was possible, and indeed perhaps inevitable. If nanotechnology was our future, then I didn’t feel pressed to solve so many problems in the present. I would get to Drexler’s utopian future in due time; I might as well enjoy life more in the here and now. It didn’t make sense, given his vision, to stay up all night, all the time.

Drexler’s vision also led to a lot of good fun. I would occasionally get to describe the wonders of nanotechnology to others who had not heard of it. After teasing them with all the things Drexler described I would give a homework assignment of my own: “Use nanotechnology to create a vampire; for extra credit create an antidote.”

With these wonders came clear dangers, of which I was acutely aware. As I said at a nanotechnology conference in 1989, “We can’t simply do our science and not worry about these ethical issues.”5 But my subsequent conversations with physicists convinced me that nanotechnology might not even work — or, at least, it wouldn’t work anytime soon. Shortly thereafter I moved to Colorado, to a skunk works I had set up, and the focus of my work shifted to software for the Internet, specifically on ideas that became Java and Jini.

Then, last summer, Brosl Hasslacher told me that nanoscale molecular electronics was now practical. This was new news, at least to me, and I think to many people — and it radically changed my opinion about nanotechnology. It sent me back to Engines of Creation. Rereading Drexler’s work after more than 10 years, I was dismayed to realize how little I had remembered of its lengthy section called “Dangers and Hopes,” including a discussion of how nanotechnologies can become “engines of destruction.” Indeed, in my rereading of this cautionary material today, I am struck by how naive some of Drexler’s safeguard proposals seem, and how much greater I judge the dangers to be now than even he seemed to then. (Having anticipated and described many technical and political problems with nanotechnology, Drexler started the Foresight Institute in the late 1980s “to help prepare society for anticipated advanced technologies” — most important, nanotechnology.)

The enabling breakthrough to assemblers seems quite likely within the next 20 years. Molecular electronics — the new subfield of nanotechnology where individual molecules are circuit elements — should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies.

Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device — such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.

An immediate consequence of the Faustian bargain in obtaining the great power of nanotechnology is that we run a grave risk — the risk that we might destroy the biosphere on which all life depends.

As Drexler explained:

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.” Though masses of uncontrolled replicators need not be gray or gooey, the term “gray goo” emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.

The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.

Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident.6 Oops.

It is most of all the power of destructive self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause. Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to replicate its designs, and the prime danger underlying gray goo in nanotechnology. Stories of run-amok robots like the Borg, replicating or mutating to escape from the ethical constraints imposed on them by their creators, are well established in our science fiction books and movies. It is even possible that self-replication may be more fundamental than we thought, and hence harder — or even impossible — to control. A recent article by Stuart Kauffman in Nature titled “Self-Replication: Even Peptides Do It” discusses the discovery that a 32-amino-acid peptide can “autocatalyse its own synthesis.” We don’t know how widespread this ability is, but Kauffman notes that it may hint at “a route to self-reproducing molecular systems on a basis far wider than Watson-Crick base-pairing.”7

In truth, we have had in hand for years clear warnings of the dangers inherent in widespread knowledge of GNR technologies — of the possibility of knowledge alone enabling mass destruction. But these warnings haven’t been widely publicized; the public discussions have been clearly inadequate. There is no profit in publicizing the dangers.

The nuclear, biological, and chemical (NBC) technologies used in 20th-century weapons of mass destruction were and are largely military, developed in government laboratories. In sharp contrast, the 21st-century GNR technologies have clear commercial uses and are being developed almost exclusively by corporate enterprises. In this age of triumphant commercialism, technology — with science as its handmaiden — is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures.

This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself — as well as to vast numbers of others.

It might be a familiar progression, transpiring on many worlds — a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

That is Carl Sagan, writing in 1994, in Pale Blue Dot, a book describing his vision of the human future in space. I am only now realizing how deep his insight was, and how sorely I miss, and will miss, his voice. For all its eloquence, Sagan’s contribution was not least that of simple common sense — an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.

I remember from my childhood that my grandmother was strongly against the overuse of antibiotics. She had worked since before the First World War as a nurse and had a commonsense attitude that taking antibiotics, unless they were absolutely necessary, was bad for you.

It is not that she was an enemy of progress. She saw much progress in an almost 70-year nursing career; my grandfather, a diabetic, benefited greatly from the improved treatments that became available in his lifetime. But she, like many levelheaded people, would probably think it greatly arrogant for us, now, to be designing a robotic “replacement species,” when we obviously have so much trouble making relatively simple things work, and so much trouble managing — or even understanding — ourselves.

I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.

We should have learned a lesson from the making of the first atomic bomb and the resulting arms race. We didn’t do well then, and the parallels to our current situation are troubling.

The effort to build the first atomic bomb was led by the brilliant physicist J. Robert Oppenheimer. Oppenheimer was not naturally interested in politics but became painfully aware of what he perceived as the grave threat to Western civilization from the Third Reich, a threat surely grave because of the possibility that Hitler might obtain nuclear weapons. Energized by this concern, he brought his strong intellect, passion for physics, and charismatic leadership skills to Los Alamos and led a rapid and successful effort by an incredible collection of great minds to quickly invent the bomb.

What is striking is how this effort continued so naturally after the initial impetus was removed. In a meeting shortly after V-E Day with some physicists who felt that perhaps the effort should stop, Oppenheimer argued to continue. His stated reason seems a bit strange: not because of the fear of large casualties from an invasion of Japan, but because the United Nations, which was soon to be formed, should have foreknowledge of atomic weapons. A more likely reason the project continued is the momentum that had built up — the first atomic test, Trinity, was nearly at hand.

We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. And, of course, there was the clear danger of starting a nuclear arms race.

Within a month of that first, successful test, two atomic bombs destroyed Hiroshima and Nagasaki. Some scientists had suggested that the bomb simply be demonstrated, rather than dropped on Japanese cities — saying that this would greatly improve the chances for arms control after the war — but to no avail. With the tragedy of Pearl Harbor still fresh in Americans’ minds, it would have been very difficult for President Truman to order a demonstration of the weapons rather than use them as he did — the desire to quickly end the war and save the lives that would have been lost in any invasion of Japan was very strong. Yet the overriding truth was probably very simple: As the physicist Freeman Dyson later said, “The reason that it was dropped was just that nobody had the courage or the foresight to say no.”

It’s important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.

In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the scientific attitude, saying, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences.”

Oppenheimer went on to work, with others, on the Acheson-Lilienthal report, which, as Richard Rhodes says in his recent book Visions of Technology, “found a way to prevent a clandestine nuclear arms race without resorting to armed world government”; their suggestion was a form of relinquishment of nuclear weapons work by nation-states to an international agency.

This proposal led to the Baruch Plan, which was submitted to the United Nations in June 1946 but never adopted (perhaps because, as Rhodes suggests, Bernard Baruch had “insisted on burdening the plan with conventional sanctions,” thereby inevitably dooming it, even though it would “almost certainly have been rejected by Stalinist Russia anyway”). Other efforts to promote sensible steps toward internationalizing nuclear power to prevent an arms race ran afoul either of US politics and internal distrust, or distrust by the Soviets. The opportunity to avoid the arms race was lost, and very quickly.

Two years later, in 1948, Oppenheimer seemed to have reached another stage in his thinking, saying, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge they cannot lose.”

In 1949, the Soviets exploded an atom bomb. By 1955, both the US and the Soviet Union had tested hydrogen bombs suitable for delivery by aircraft. And so the nuclear arms race began.

Nearly 20 years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice:

I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles — this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.8

Now, as then, we are creators of new technologies and stars of the imagined future, driven — this time by great financial rewards and global competition — despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

In 1947, The Bulletin of the Atomic Scientists began putting a Doomsday Clock on its cover. For more than 50 years, it has shown an estimate of the relative nuclear danger we have faced, reflecting the changing international conditions. The hands on the clock have moved 15 times and today, standing at nine minutes to midnight, reflect continuing and real danger from nuclear weapons. The recent addition of India and Pakistan to the list of nuclear powers has increased the threat of failure of the nonproliferation goal, and this danger was reflected by moving the hands closer to midnight in 1998.

In our time, how much danger do we face, not just from nuclear weapons, but from all of these technologies? How high are the extinction risks?

The philosopher John Leslie has studied this question and concluded that the risk of human extinction is at least 30 percent,9 while Ray Kurzweil believes we have “a better than even chance of making it through,” with the caveat that he has “always been accused of being an optimist.” Not only are these estimates not encouraging, but they do not include the probability of many horrid outcomes that lie short of extinction.

Faced with such assessments, some serious people are already suggesting that we simply move beyond Earth as quickly as possible. We would colonize the galaxy using von Neumann probes, which hop from star system to star system, replicating as they go. This step will almost certainly be necessary 5 billion years from now (or sooner if our solar system is disastrously impacted by the impending collision of our galaxy with the Andromeda galaxy within the next 3 billion years), but if we take Kurzweil and Moravec at their word it might be necessary by the middle of this century.

What are the moral implications here? If we must move beyond Earth this quickly in order for the species to survive, who accepts the responsibility for the fate of those (most of us, after all) who are left behind? And even if we scatter to the stars, isn’t it likely that we may take our problems with us or find, later, that they have followed us? The fate of our species on Earth and our fate in the galaxy seem inextricably linked.

Another idea is to erect a series of shields to defend against each of the dangerous technologies. The Strategic Defense Initiative, proposed by the Reagan administration, was an attempt to design such a shield against the threat of a nuclear attack from the Soviet Union. But as Arthur C. Clarke, who was privy to discussions about the project, observed: “Though it might be possible, at vast expense, to construct local defense systems that would ‘only’ let through a few percent of ballistic missiles, the much touted idea of a national umbrella was nonsense. Luis Alvarez, perhaps the greatest experimental physicist of this century, remarked to me that the advocates of such schemes were ‘very bright guys with no common sense.’”

Clarke continued: “Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles.”10

In Engines of Creation, Eric Drexler proposed that we build an active nanotechnological shield — a form of immune system for the biosphere — to defend against dangerous replicators of all kinds that might escape from laboratories or otherwise be maliciously created. But the shield he proposed would itself be extremely dangerous — nothing could prevent it from developing autoimmune problems and attacking the biosphere itself.11

Similar difficulties apply to the construction of shields against robotics and genetic engineering. These technologies are too powerful to be shielded against in the time frame of interest; even if it were possible to implement defensive shields, the side effects of their development would be at least as dangerous as the technologies we are trying to protect against.

These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

Yes, I know, knowledge is good, as is the search for new truths. We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: “All men by nature desire to know.” We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge.

But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.

It was Nietzsche who warned us, at the end of the 19th century, not only that God is dead but that “faith in science, which after all exists undeniably, cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the ‘will to truth,’ of ‘truth at any price’ is proved to it constantly.” It is this further danger that we now fully face — the consequences of our truth-seeking. The truth that science seeks can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction.

If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous — then we might understand what we can and should relinquish. Otherwise, we can easily imagine an arms race developing over GNR technologies, as it did with the NBC technologies in the 20th century. This is perhaps the greatest risk, for once such a race begins, it’s very hard to end it. This time — unlike during the Manhattan Project — we aren’t in a war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know.

I believe that we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.

One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.

The new Pandora’s boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can’t be put back in a box; unlike uranium or plutonium, they don’t need to be mined and refined, and they can be freely copied. Once they are out, they are out. Churchill remarked, in a famous left-handed compliment, that the American people and their leaders “invariably do the right thing, after they have examined every other alternative.” In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all.

As Thoreau said, “We do not ride on the railroad; it rides upon us”; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies?

We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don’t believe so, but we aren’t trying yet, and the last chance to assert control — the fail-safe point — is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn’t necessarily the case — as happened in the Manhattan Project and the Trinity test — that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.

And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.

The clear conclusion was that we would create additional threats to ourselves by pursuing these weapons, and that we would be more secure if we did not pursue them. We have embodied our relinquishment of biological and chemical weapons in the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).12

As for the continuing sizable threat from nuclear weapons, which we have lived with now for more than 50 years, the US Senate's recent rejection of the Comprehensive Test Ban Treaty makes it clear that relinquishing nuclear weapons will not be politically easy. But we have a unique opportunity, with the end of the Cold War, to avert a multipolar arms race. Building on the BWC and CWC relinquishments, successful abolition of nuclear weapons could help us build toward a habit of relinquishing dangerous technologies. (Actually, by getting rid of all but 100 nuclear weapons worldwide — roughly the total destructive power of World War II and a considerably easier task — we could eliminate this extinction threat.13)

Ver­i­fy­ing relin­quish­ment will be a dif­fi­cult prob­lem, but not an unsolv­able one. We are for­tu­nate to have already done a lot of rel­e­vant work in the con­text of the BWC and other treaties. Our major task will be to apply this to tech­nolo­gies that are nat­u­rally much more com­mer­cial than mil­i­tary. The sub­stan­tial need here is for trans­parency, as dif­fi­culty of ver­i­fi­ca­tion is directly pro­por­tional to the dif­fi­culty of dis­tin­guish­ing relin­quished from legit­i­mate activities.

I frankly believe that the sit­u­a­tion in 1945 was sim­pler than the one we now face: The nuclear tech­nolo­gies were rea­son­ably sep­a­ra­ble into com­mer­cial and mil­i­tary uses, and mon­i­tor­ing was aided by the nature of atomic tests and the ease with which radioac­tiv­ity could be mea­sured. Research on mil­i­tary appli­ca­tions could be per­formed at national lab­o­ra­to­ries such as Los Alamos, with the results kept secret as long as possible.

The GNR tech­nolo­gies do not divide clearly into com­mer­cial and mil­i­tary uses; given their poten­tial in the mar­ket, it’s hard to imag­ine pur­su­ing them only in national lab­o­ra­to­ries. With their wide­spread com­mer­cial pur­suit, enforc­ing relin­quish­ment will require a ver­i­fi­ca­tion regime sim­i­lar to that for bio­log­i­cal weapons, but on an unprece­dented scale. This, inevitably, will raise ten­sions between our indi­vid­ual pri­vacy and desire for pro­pri­etary infor­ma­tion, and the need for ver­i­fi­ca­tion to pro­tect us all. We will undoubt­edly encounter strong resis­tance to this loss of pri­vacy and free­dom of action.

Ver­i­fy­ing the relin­quish­ment of cer­tain GNR tech­nolo­gies will have to occur in cyber­space as well as at phys­i­cal facil­i­ties. The crit­i­cal issue will be to make the nec­es­sary trans­parency accept­able in a world of pro­pri­etary infor­ma­tion, pre­sum­ably by pro­vid­ing new forms of pro­tec­tion for intel­lec­tual property.

Verifying compliance will also require that scientists and engineers adopt a strong code of ethical conduct, resembling the Hippocratic oath, and that they have the courage to whistleblow as necessary, even at high personal cost. This would answer the call — 50 years after Hiroshima — by the Nobel laureate Hans Bethe, one of the most senior of the surviving members of the Manhattan Project, that all scientists “cease and desist from work creating, developing, improving, and manufacturing nuclear weapons and other weapons of potential mass destruction.”14 In the 21st century, this requires vigilance and personal responsibility by those who would work on both NBC and GNR technologies to avoid implementing weapons of mass destruction and knowledge-enabled mass destruction.

Thoreau also said that we will be “rich in pro­por­tion to the num­ber of things which we can afford to let alone.” We each seek to be happy, but it would seem worth­while to ques­tion whether we need to take such a high risk of total destruc­tion to gain yet more knowl­edge and yet more things; com­mon sense says that there is a limit to our mate­r­ial needs — and that cer­tain knowl­edge is too dan­ger­ous and is best forgone.

Nei­ther should we pur­sue near immor­tal­ity with­out con­sid­er­ing the costs, with­out con­sid­er­ing the com­men­su­rate increase in the risk of extinc­tion. Immor­tal­ity, while per­haps the orig­i­nal, is cer­tainly not the only pos­si­ble utopian dream.

I recently had the good fortune to meet the distinguished author and scholar Jacques Attali, whose book Lignes d'horizons (Millennium, in the English translation) helped inspire the Java and Jini approach to the coming age of pervasive computing, as previously described in this magazine. In his new book Fraternités, Attali describes how our dreams of utopia have changed over time:

At the dawn of societies, men saw their passage on Earth as nothing more than a labyrinth of pain, at the end of which stood a door leading, via their death, to the company of gods and to Eternity. With the Hebrews and then the Greeks, some men dared free themselves from theological demands and dream of an ideal City where Liberty would flourish. Others, noting the evolution of the market society, understood that the liberty of some would entail the alienation of others, and they sought Equality.

Jacques helped me under­stand how these three dif­fer­ent utopian goals exist in ten­sion in our soci­ety today. He goes on to describe a fourth utopia, Fra­ter­nity, whose foun­da­tion is altru­ism. Fra­ter­nity alone asso­ciates indi­vid­ual hap­pi­ness with the hap­pi­ness of oth­ers, afford­ing the promise of self-​sustainment.

This crys­tal­lized for me my prob­lem with Kurzweil’s dream. A tech­no­log­i­cal approach to Eter­nity — near immor­tal­ity through robot­ics — may not be the most desir­able utopia, and its pur­suit brings clear dan­gers. Maybe we should rethink our utopian choices.

Where can we look for a new eth­i­cal basis to set our course? I have found the ideas in the book Ethics for the New Mil­len­nium, by the Dalai Lama, to be very help­ful. As is per­haps well known but lit­tle heeded, the Dalai Lama argues that the most impor­tant thing is for us to con­duct our lives with love and com­pas­sion for oth­ers, and that our soci­eties need to develop a stronger notion of uni­ver­sal respon­si­bil­ity and of our inter­de­pen­dency; he pro­poses a stan­dard of pos­i­tive eth­i­cal con­duct for indi­vid­u­als and soci­eties that seems con­so­nant with Attali’s Fra­ter­nity utopia.

The Dalai Lama fur­ther argues that we must under­stand what it is that makes peo­ple happy, and acknowl­edge the strong evi­dence that nei­ther mate­r­ial progress nor the pur­suit of the power of knowl­edge is the key — that there are lim­its to what sci­ence and the sci­en­tific pur­suit alone can do.

Our Western notion of happiness seems to come from the Greeks, who defined it as “the exercise of vital powers along lines of excellence in a life affording them scope.”15

Clearly, we need to find meaningful challenges and sufficient scope in our lives if we are to be happy in whatever is to come. But I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now weigh the pursuit of unrestricted and undirected growth through science and technology against the clear accompanying dangers.

It is now more than a year since my first encounter with Ray Kurzweil and John Searle. I see around me cause for hope in the voices for cau­tion and relin­quish­ment and in those peo­ple I have dis­cov­ered who are as con­cerned as I am about our cur­rent predica­ment. I feel, too, a deep­ened sense of per­sonal respon­si­bil­ity — not for the work I have already done, but for the work that I might yet do, at the con­flu­ence of the sciences.

But many other peo­ple who know about the dan­gers still seem strangely silent. When pressed, they trot out the “this is noth­ing new” riposte — as if aware­ness of what could hap­pen is response enough. They tell me, There are uni­ver­si­ties filled with bioethi­cists who study this stuff all day long. They say, All this has been writ­ten about before, and by experts. They com­plain, Your wor­ries and your argu­ments are already old hat.

I don’t know where these peo­ple hide their fear. As an archi­tect of com­plex sys­tems I enter this arena as a gen­er­al­ist. But should this dimin­ish my con­cerns? I am aware of how much has been writ­ten about, talked about, and lec­tured about so author­i­ta­tively. But does this mean it has reached peo­ple? Does this mean we can dis­count the dan­gers before us?

Know­ing is not a ratio­nale for not act­ing. Can we doubt that knowl­edge has become a weapon we wield against ourselves?

The expe­ri­ences of the atomic sci­en­tists clearly show the need to take per­sonal respon­si­bil­ity, the dan­ger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, cre­ate insur­mount­able prob­lems in almost no time flat. We must do more think­ing up front if we are not to be sim­i­larly sur­prised and shocked by the con­se­quences of our inventions.

My con­tin­u­ing pro­fes­sional work is on improv­ing the reli­a­bil­ity of soft­ware. Soft­ware is a tool, and as a tool­builder I must strug­gle with the uses to which the tools I make are put. I have always believed that mak­ing soft­ware more reli­able, given its many uses, will make the world a safer and bet­ter place; if I were to come to believe the oppo­site, then I would be morally oblig­ated to stop this work. I can now imag­ine such a day may come.

This all leaves me not angry but at least a bit melan­cholic. Hence­forth, for me, progress will be some­what bittersweet.

Do you remem­ber the beau­ti­ful penul­ti­mate scene in Man­hat­tan where Woody Allen is lying on his couch and talk­ing into a tape recorder? He is writ­ing a short story about peo­ple who are cre­at­ing unnec­es­sary, neu­rotic prob­lems for them­selves, because it keeps them from deal­ing with more unsolv­able, ter­ri­fy­ing prob­lems about the universe.

He leads him­self to the ques­tion, “Why is life worth liv­ing?” and to con­sider what makes it worth­while for him: Grou­cho Marx, Willie Mays, the sec­ond move­ment of the Jupiter Sym­phony, Louis Armstrong’s record­ing of “Potato Head Blues,” Swedish movies, Flaubert’s Sen­ti­men­tal Edu­ca­tion, Mar­lon Brando, Frank Sina­tra, the apples and pears by Cézanne, the crabs at Sam Wo’s, and, finally, the show­stop­per: his love Tracy’s face.

Each of us has our pre­cious things, and as we care for them we locate the essence of our human­ity. In the end, it is because of our great capac­ity for car­ing that I remain opti­mistic we will con­front the dan­ger­ous issues now before us.

My imme­di­ate hope is to par­tic­i­pate in a much larger dis­cus­sion of the issues raised here, with peo­ple from many dif­fer­ent back­grounds, in set­tings not pre­dis­posed to fear or favor tech­nol­ogy for its own sake.

As a start, I have twice raised many of these issues at events spon­sored by the Aspen Insti­tute and have sep­a­rately pro­posed that the Amer­i­can Acad­emy of Arts and Sci­ences take them up as an exten­sion of its work with the Pug­wash Con­fer­ences. (These have been held since 1957 to dis­cuss arms con­trol, espe­cially of nuclear weapons, and to for­mu­late work­able policies.)

It’s unfor­tu­nate that the Pug­wash meet­ings started only well after the nuclear genie was out of the bot­tle — roughly 15 years too late. We are also get­ting a belated start on seri­ously address­ing the issues around 21st-​century tech­nolo­gies — the pre­ven­tion of knowledge-​enabled mass destruc­tion — and fur­ther delay seems unacceptable.

So I’m still search­ing; there are many more things to learn. Whether we are to suc­ceed or fail, to sur­vive or fall vic­tim to these tech­nolo­gies, is not yet decided. I’m up late again — it’s almost 6 am. I’m try­ing to imag­ine some bet­ter answers, to break the spell and free them from the stone.


1 The pas­sage Kurzweil quotes is from Kaczynski’s Unabomber Man­i­festo, which was pub­lished jointly, under duress, by The New York Times and The Wash­ing­ton Post to attempt to bring his cam­paign of ter­ror to an end. I agree with David Gel­ern­ter, who said about their decision:

It was a tough call for the news­pa­pers. To say yes would be giv­ing in to ter­ror­ism, and for all they knew he was lying any­way. On the other hand, to say yes might stop the killing. There was also a chance that some­one would read the tract and get a hunch about the author; and that is exactly what hap­pened. The suspect’s brother read it, and it rang a bell.

I would have told them not to publish. I'm glad they didn't ask me. I guess.

(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)

2 Garrett, Laurie. The Coming Plague: Newly Emerging Diseases in a World Out of Balance. Penguin, 1994: 47–52, 414, 419, 452.

3 Isaac Asi­mov described what became the most famous view of eth­i­cal rules for robot behav­ior in his book I, Robot in 1950, in his Three Laws of Robot­ics: 1. A robot may not injure a human being, or, through inac­tion, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings, except where such orders would con­flict with the First Law. 3. A robot must pro­tect its own exis­tence, as long as such pro­tec­tion does not con­flict with the First or Sec­ond Law.

4 Michelan­gelo wrote a son­net that begins:

Non ha l'ottimo artista alcun concetto
Ch'un marmo solo in sè non circonscriva
Col suo soverchio; e solo a quello arriva
La man che ubbidisce all'intelletto.

Stone trans­lates this as:

The best of artists hath no thought to show
which the rough stone in its superfluous shell
doth not include; to break the marble spell
is all the hand that serves the brain can do.

Stone describes the process: “He was not work­ing from his draw­ings or clay mod­els; they had all been put away. He was carv­ing from the images in his mind. His eyes and hands knew where every line, curve, mass must emerge, and at what depth in the heart of the stone to cre­ate the low relief.”

(The Agony and the Ecstasy. Dou­ble­day, 1961: 6, 144.)

5 First Foresight Conference on Nanotechnology in October 1989, a talk titled “The Future of Computation.” Published in Crandall, B. C. and James Lewis, editors. Nanotechnology: Research and Perspectives. MIT Press, 1992: 269. See also www.foresight.org/Conferences/MNT01/Nano1.html.

6 In his 1963 novel Cat’s Cra­dle, Kurt Von­negut imag­ined a gray-​goo-​like acci­dent where a form of ice called ice-​nine, which becomes solid at a much higher tem­per­a­ture, freezes the oceans.

7 Kauffman, Stuart. “Self-replication: Even Peptides Do It.” Nature, 382, August 8, 1996: 496. See www.santafe.edu/sfi/People/kauffman/sak-peptides.html.

8 Else, Jon. The Day After Trinity: J. Robert Oppenheimer and the Atomic Bomb (available at www.pyramiddirect.com).

9 This esti­mate is in Leslie’s book The End of the World: The Sci­ence and Ethics of Human Extinc­tion, where he notes that the prob­a­bil­ity of extinc­tion is sub­stan­tially higher if we accept Bran­don Carter’s Dooms­day Argu­ment, which is, briefly, that “we ought to have some reluc­tance to believe that we are very excep­tion­ally early, for instance in the ear­li­est 0.001 per­cent, among all humans who will ever have lived. This would be some rea­son for think­ing that humankind will not sur­vive for many more cen­turies, let alone col­o­nize the galaxy. Carter’s dooms­day argu­ment doesn’t gen­er­ate any risk esti­mates just by itself. It is an argu­ment for revis­ing the esti­mates which we gen­er­ate when we con­sider var­i­ous pos­si­ble dan­gers.” (Rout­ledge, 1996: 1, 3, 145.)

10 Clarke, Arthur C. “Presidents, Experts, and Asteroids.” Science, June 5, 1998. Reprinted as “Science and Society” in Greetings, Carbon-Based Bipeds! Collected Essays, 1934–1998. St. Martin's Press, 1999: 526.

11 And, as David Forrest suggests in his paper “Regulating Nanotechnology Development,” available at www.foresight.org/NanoRev/Forrest1989.html, “If we used strict liability as an alternative to regulation it would be impossible for any developer to internalize the cost of the risk (destruction of the biosphere), so theoretically the activity of developing nanotechnology should never be undertaken.” Forrest's analysis leaves us with only government regulation to protect us — not a comforting thought.

12 Meselson, Matthew. “The Problem of Biological Weapons.” Presentation to the 1,818th Stated Meeting of the American Academy of Arts and Sciences, January 13, 1999. (minerva.amacad.org/archive/bulletin4.htm)

13 Doty, Paul. “The For­got­ten Men­ace: Nuclear Weapons Stock­piles Still Rep­re­sent the Biggest Threat to Civ­i­liza­tion.” Nature, 402, Decem­ber 9, 1999: 583.

14 See also Hans Bethe's 1997 letter to President Clinton, at www.fas.org/bethecr.htm.

15 Hamil­ton, Edith. The Greek Way. W. W. Nor­ton & Co., 1942: 35.


Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.
