mrz
16-04-03, 02:47
Saving the universe by restricting research
Astrophysicist says technology has potential to annihilate
Keay Davidson, Chronicle Science Writer Monday, April 14, 2003
History's worst technological catastrophes could kill millions or billions of people in this century, and to prevent them, society may need to consider restricting specific types of scientific research, a famed astrophysicist proposes in a new book.
The proposal by Sir Martin Rees, Britain's astronomer royal, is an unusually high-placed challenge to the scientific community's traditional belief in the value of research that is "pure," unrestricted and independent of public oversight.
Because of the growing sophistication and proliferation of biotechnology, computer technology and nanotechnology, civilization could be ravaged or destroyed by irrational or evil amateur scientists who operate alone or in small groups akin to the terrorists of Sept. 11, 2001, Rees warns in the book, "Our Final Hour," just published by Basic Books.
As a result, "I think the odds are no better than 50-50 that our present civilization on Earth will survive to the end of the present century," Rees says.
Doomsday books have appeared for centuries. But Rees' book is unique, and not only because of his fame as a Cambridge University professor who is not prone to making scary public statements. Rees is one of the world's leading authorities on black holes and the origins and evolution of the universe.
WHEN GOOD SCIENCE GOES BAD
Another reason is that the book defies the scientific community's long-standing taboo on suggestions that humanity might be better off by not exploring certain avenues of science.
Only rarely have such suggestions been taken seriously in the scientific community, and never permanently. The most famous instance is the so-called Asilomar agreement of the 1970s, in which molecular biologists meeting at the Asilomar Conference Center in Pacific Grove temporarily agreed to limit their research because of concerns about possible accidents that might damage the environment.
Rees considers, for example, speculation that scientists might invent micro-robots that could reproduce out of control and devour Earth's surface, or that physicists might accidentally generate black holes or "rips" in the space-time continuum that could destroy Earth.
"Some experiments could conceivably threaten the entire Earth," he writes. "How close to zero should the claimed risk be before such experiments are sanctioned?"
EXPERIMENTS CAN GO AWRY
As a case study of such "extreme risks," Rees cites a controversial project that began in 2000 at Brookhaven National Laboratory on Long Island. Physicists there have used a particle accelerator to try to create a "quark-gluon plasma," a soup of extremely hot, dense subatomic particles that mimic conditions of the "Big Bang" that spawned our cosmos 13.7 billion years ago.
Critics speculated that this high concentration of energy might have one of three undesirable results:
-- It could form a black hole -- an object with such immense gravitational pull that nothing could escape, not even light -- which would "suck in everything around it."
-- The quark particles might form a very compressed object called a strangelet, "far smaller than a single atom," that could "infect" surrounding matter and "transform the entire planet Earth into an inert hyperdense sphere about 100 meters across."
-- Space itself, an invisible froth of subatomic forces and short-lived particles, might undergo a "phase transition" like water molecules that freeze into ice. Such an event could "rip the fabric of space itself. The boundary of the new-style vacuum would spread like an expanding bubble," devouring Earth and, eventually, the entire universe beyond it.
Could such bizarre tragedies really happen? To reassure the residents of Long Island and critics beyond, Brookhaven physicists presented calculations indicating the answer was no. Indeed, independent evidence indicates that similar concentrations of energy occur naturally in the cosmos, because of the interaction of cosmic-ray particles, without tearing the fabric of space.
Although Rees finds such counterarguments "reassuring" and believes a catastrophe is "very, very improbable," he cautions that "we cannot be 100 percent sure what might actually happen."
Which triggers his core question: Even if the odds against such a cosmic disaster are vanishingly small -- one estimate is 1 in 50 million -- are the potential benefits of the experiment worth risking the worst-case outcome, namely the annihilation of Earth and the entire universe?
Speaking of science in general, he says: "No decision to go ahead with an experiment with a conceivable 'Doomsday downside' should be made unless the general public (or a representative group of them) is satisfied that the risk is below what they collectively regard as an acceptable threshold. It isn't good enough to make a slapdash estimate of even the tiniest risk of destroying the world."
Should financial support "be withdrawn from a line of 'pure' research, even if it is undeniably interesting, if there is reason to expect that the outcome will be misused? I think it should, especially since the present (funding) allocation among different sciences is itself the outcome of a complicated 'tension' between extraneous factors."
FEAR OF ROBOTS TAKING OVER
Rees also entertains the creepier risks of nanotechnology, one goal of which is the construction of super-small robots that replicate like viruses. "Nanobots" might have useful purposes -- for example, patrolling the body for cancer cells. But some, including two Bay Area figures, nanotech guru Eric Drexler and Bill Joy, chief scientist at Sun Microsystems, have speculated that they might race out of control, devouring all matter and reducing Earth's surface to a "gray goo."
Rees waffles on the question of whether such a weird threat is possible. "After 2020," he cautions, "nanobots could be a reality; indeed, so many people may try to make nanoreplicators that the chance of one attempt triggering disaster would become substantial. It is easier to conceive of extra threats than of effective antidotes."
Rees admits there are no easy answers to the futuristic crises he depicts. Restrictions on research could backfire, for example: "The same techniques that could lead to voracious 'nanobots' might also be needed to create the nanotech analogue of vaccines that could immunize against them," he writes.
"I wouldn't characterize myself as being unrelievedly gloomy," Rees said in a recent phone interview. "It's just that the more I have followed science and its potential, the more I have been aware of both the exciting hopes and the unintended downsides."
E-mail Keay Davidson at [email protected]