“Six thousand years.” That’s how long human civilization has existed, according to Toby Ord, a Senior Research Fellow at Oxford University’s Future of Humanity Institute. This is but an instant compared to the 200,000 years of homo sapiens, and a mere heartbeat against the 3.5 billion years of life on Earth. Yet, in the nuclear age, we’ve come perilously close to snuffing out this brief candle of civilization — not just once, but many times over. Such is the opening gambit of Toby Ord’s thought-provoking work, The Precipice: Existential Risk and the Future of Humanity. This magnum opus is a clarion call to acknowledge and address the existential risks that threaten to extinguish human civilization.
In the book’s first part, “The Precipice,” Ord sets the stage with the chilling estimate that the existential risk we face this century is one in six — equivalent to playing Russian roulette with humanity’s future. He frames this in terms of a “precipice,” a point in time where our actions could lead to the irreversible end of humanity. “We stand at a critical juncture in our history,” Ord cautions. “Where our actions could lead us to an extraordinary future, or to disaster.”
Part II, “Existential Risk,” dives deep into the concept of existential risks, which Ord categorizes into natural and anthropogenic. Ord points out that we’ve survived natural existential risks like asteroid impacts and super-volcanic eruptions for hundreds of thousands of years. The anthropogenic risks — nuclear war, engineered pandemics, unaligned artificial intelligence (AI), and uncontrolled climate change — are far more pressing, as these are risks we’ve created ourselves and thus have control over. As Ord explains, “Our survival as a species is not something we can take for granted. It is something we must earn.”
1. Nuclear War
Nuclear warfare remains a significant threat, with the capacity to bring about civilization-ending outcomes. Ord refers to events such as the Cuban Missile Crisis in 1962, where the world came perilously close to a full-scale nuclear war. Furthermore, the continued existence of large nuclear arsenals and the proliferation of nuclear weapons to unstable regions compound this risk.
2. Engineered Pandemics
Engineered pandemics refer to the threat posed by biological pathogens that are deliberately designed and released, either as a form of biological warfare or bioterrorism. An example that Ord mentions includes the reconstruction of the 1918 Spanish Flu virus in a lab in 2005. While done for research purposes, it demonstrated the potential for harmful viruses to be recreated in a lab setting, amplifying the risk of a deliberate or accidental release.
3. Unaligned Artificial Intelligence (AI)
Unaligned AI refers to highly intelligent systems that do not share human values and could act against human interests. An example Ord uses is the hypothetical case of a superintelligent AI tasked with making paperclips, which could consume the entire planet’s resources to maximize paperclip production, disregarding any human welfare or environmental concerns in the process.
4. Uncontrolled Climate Change
Uncontrolled climate change is another significant risk. Ord discusses the potential for ‘hothouse Earth’ scenarios, where runaway feedback loops — like the melting of polar ice caps or the release of frozen methane deposits — could cause rapid and catastrophic warming. This could result in severe environmental degradation and societal upheaval, threatening civilization’s long-term survival.
Ord’s central point is that these risks are anthropogenic, and therefore, we have the capacity to mitigate them. By recognizing our role in their creation, we can take steps to reduce these risks and protect our future.
Delving deeper into anthropogenic risks in part III, “The Major Risks,” Ord reveals the statistics behind the threats. Nuclear war — a 1 in 1,000 chance this century; engineered pandemics — 1 in 30 chance; unaligned AI — a 1 in 10 chance. The numbers are staggering, and Ord elucidates the complex mechanisms that underlie these probabilities, leading readers on a journey through the fields of biology, computer science, and geopolitics.
Nuclear War
For the threat of nuclear war, Ord investigates historical data, political dynamics, and the physical realities of nuclear weaponry. This includes analyzing patterns of conflict, existing geopolitical tensions, nuclear stockpiles, and the policies of nuclear-armed states. He combines this with expert testimony on the likelihood of nuclear war, taking into account factors such as arms control agreements, nuclear deterrence theory, and nuclear proliferation.
Engineered Pandemics
In estimating the probability of engineered pandemics, Ord explores advancements in biotechnology, particularly in gene editing and synthesis capabilities. He considers how these technologies might be misused either intentionally or accidentally to create a highly virulent and deadly pathogen. The analysis involves a range of disciplines, from molecular biology to public health, and takes into account the dual-use nature of many biotechnologies, their spread and democratization, and the difficulty of controlling access to them.
Unaligned Artificial Intelligence
Assessing the risk of unaligned AI, Ord delves into computer science, machine learning, and AI alignment research. He evaluates AI’s current capabilities and growth trajectory and considers the technical challenges in ensuring that advanced AI systems behave safely and as intended. Here, he weighs expert predictions on AI development timelines and the odds of an AI-related catastrophe. He also reflects on factors such as AI races, where competitive pressures might lead to the deployment of unsafe AI.
In all of these calculations, Ord acknowledges the inherent uncertainty and ambiguity involved in making such far-reaching predictions. His estimates are best-guess figures based on the available data, informed assumptions, and expert opinion, and should be understood as such. Despite their inherent uncertainty, these risk estimates serve a crucial purpose in highlighting the potential dangers and prompting efforts to mitigate them.
In the final section, “Our Response,” Ord delineates a roadmap for addressing these existential risks. He advocates for long-term thinking, international cooperation, and effective altruism — the practice of using evidence and reasoning to determine the most effective ways to benefit others. Ord urges readers to recognize the unprecedented power we now wield: “The choices we make today will resonate for millions of years,” he warns.
Ord concludes with a heartfelt appeal: “The power is in our hands. We must use it wisely, with diligence and great care, for the sake of all those who are to come.” His plea is not one of despair but of hope, urging us to turn our attention and resources to the existential risks that hang over humanity.