A global catastrophic risk is a hypothetical future event which could damage human well-being on a global scale,[2] even endangering or destroying modern civilization.[3] An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an existential risk.[4]
Potential global catastrophic risks include anthropogenic risks, caused by humans (technology, governance, climate change), and non-anthropogenic or external risks.[3] Examples of technology risks are hostile artificial intelligence and destructive biotechnology or nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war (including nuclear holocaust), bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructure like the electrical grid, or the failure to manage a natural pandemic. Problems and risks in the domain of earth system governance include global warming, environmental degradation (including extinction of species), famine as a result of non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture.
Examples of non-anthropogenic risks are an asteroid impact event, a supervolcanic eruption, a lethal gamma-ray burst, a geomagnetic storm destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, or the Sun's predicted transformation into a red giant star that engulfs the Earth.
A global catastrophic risk is any risk that is at least global in scope and is not subjectively imperceptible in intensity. Those that will affect all future generations and are "terminal"[clarification needed] in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity entirely or prevents any chance of civilization's recovery.[6]
Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional", scale. Posner considers such events worthy of special attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole.[7] Posner's examples include meteor impacts, runaway global warming, grey goo, bioterrorism, and particle accelerator accidents.
Studying near-human extinction directly is not possible, and modelling existential risks is difficult, due in part to survivorship bias.[8] However, individual civilizations have collapsed many times in human history. While there is no known precedent for a complete collapse into an amnesic pre-agricultural society, civilizations such as the Roman Empire have ended in a loss of centralized governance and a major civilization-wide loss of infrastructure and advanced technology. Societies are often resilient to catastrophe; for example, Medieval Europe survived the Black Death without suffering anything resembling a civilization collapse.[9]
Some risks are due to phenomena that have occurred in Earth's past and left a geological record. Together with contemporary observations, it is possible to make informed estimates of the likelihood such events will occur in the future. For example, an extinction-level comet or asteroid impact event before the year 2100 has been estimated at one-in-a-million.[10][11][further explanation needed] Supervolcanoes are another example. There are several known historical supervolcanoes, including Mt. Toba, which some say almost wiped out humanity at the time of its last eruption. The geologic record suggests this particular supervolcano re-erupts about every 50,000 years.[12][13][further explanation needed]
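As a minimal sketch of how such estimates can be derived, the calculation below converts a geologic recurrence interval into the chance of at least one event within a given horizon, assuming events arrive as a Poisson process; the Poisson assumption and the round numbers are illustrative, not taken from the cited sources.

```python
import math

def probability_within(horizon_years: float, mean_recurrence_years: float) -> float:
    """Chance of at least one event within the horizon, assuming a Poisson
    process with the given mean recurrence interval (an illustrative assumption)."""
    rate = 1.0 / mean_recurrence_years          # expected events per year
    return 1.0 - math.exp(-rate * horizon_years)

# A supervolcano with a ~50,000-year recurrence interval over the next 100 years:
print(probability_within(100, 50_000))          # ~0.002, i.e. roughly 0.2%
```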
Without the benefit of geological records and direct observation, the relative danger posed by other threats is much more difficult to calculate. In addition, it is one thing to estimate the likelihood of an event taking place, but quite another to assess how likely an event is to cause extinction if it does occur, and most difficult of all is assessing the risk posed by synergistic effects of multiple events taking place simultaneously.[citation needed]
The closest the Doomsday Clock has been to midnight was in 2020, when the Clock was set to one minute forty seconds before midnight, due to persistent tensions between the United States and North Korea, as well as rising tensions between the United States and Iran.[14]
Given the limitations of ordinary calculation and modeling, expert elicitation is frequently used instead to obtain probability estimates.[15] In 2008, an informal survey of experts on different global catastrophic risks at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction by the year 2100. The conference report cautions that the results should be taken "with a grain of salt"; the results were not meant to capture all large risks and did not include things like climate change, and the results likely reflect many cognitive biases of the conference participants.[16]
Risk | Estimated probability for human extinction before 2100 |
---|---|
Overall probability | 19% |
Molecular nanotechnology weapons | |
Superintelligent AI | |
The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during a human-extinction event than in a car crash.[18][19]
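A comparison of this kind can be reproduced with a few lines of arithmetic. The sketch below uses an assumed annual extinction probability and an assumed lifetime car-crash risk purely for illustration; the figures are not those used in the Global Challenges Foundation report.

```python
# Illustrative only: the probabilities below are assumptions, not the report's inputs.
annual_extinction_probability = 0.001      # assumed 0.1% chance of extinction per year
lifetime_years = 80                        # assumed lifespan

# Chance that an extinction event falls within one lifetime, assuming independent years.
p_extinction_in_lifetime = 1 - (1 - annual_extinction_probability) ** lifetime_years

p_car_crash_in_lifetime = 0.01             # assumed lifetime risk of dying in a car crash

print(round(p_extinction_in_lifetime, 3))                             # ~0.077
print(round(p_extinction_in_lifetime / p_car_crash_in_lifetime, 1))   # ~7.7x under these assumptions
```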
There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next hundred years, but forecasting for this length of time is difficult. The types of threats posed by nature have been argued to be relatively constant, though this has been disputed,[20] and new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have been an issue only since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties in the social sciences. In general, it is hard to estimate the magnitude of the risk from this or other dangers, especially as both international relations and technology can change rapidly.

Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers; so regardless of their frequency, no civilization observes existential risks in its history.[8] These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology.[5] In addition to known and tangible risks, unforeseeable black swan extinction events may occur, presenting an additional methodological problem.[21]

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for four billion years before the expansion of the Sun makes the Earth uninhabitable.[22][23] Nick Bostrom argues that there is even greater potential in colonizing space.
If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years.[6] Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people who will exist in the future. Exponential discounting might make these future benefits much less significant. However, Jason Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.[10]
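The following worked sketch shows why even a tiny absolute reduction in extinction risk can dominate in expected value, and how exponential discounting shrinks the result; the future population, horizon, risk reduction, and discount rate are assumptions chosen for illustration and are far more modest than the figures Bostrom discusses.

```python
# Illustrative expected-value calculation; all inputs are assumptions.
future_people_per_year = 1e10    # assumed stable future population
horizon_years = 1e6              # assumed survivable horizon
risk_reduction = 1e-6            # assumed absolute reduction in extinction probability

# Undiscounted expected life-years gained from the risk reduction.
undiscounted = risk_reduction * future_people_per_year * horizon_years
print(f"{undiscounted:.2e} expected life-years")        # 1.00e+10

# With exponential discounting at rate r, a constant stream of life-years sums to
# roughly future_people_per_year / r, so the same reduction looks far less valuable.
r = 0.03
discounted = risk_reduction * future_people_per_year / r
print(f"{discounted:.2e} discounted life-years")        # 3.33e+05
```

On this framing, the conclusion hinges almost entirely on whether a positive discount rate is applied to future lives, which is the step Matheny disputes.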
Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[24] Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[25]
Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.[26]
Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000.[27] Similarly, people are often more concerned about threats to individuals than to larger groups.[26]
There are economic reasons that can explain why so little effort is going into existential risk reduction. It is a global good, so even if a large nation decreases it, that nation will enjoy only a small fraction of the benefit of doing so. Furthermore, the vast majority of the benefits may be enjoyed by far future generations, and though these quadrillions of future people would in theory perhaps be willing to pay massive sums for existential risk reduction, no mechanism for such a transaction exists.[5]
Some sources of catastrophic risk are natural, such as meteor impacts or supervolcanoes. Some of these have caused mass extinctions in the past. On the other hand, some risks are man-made, such as global warming,[28] environmental degradation, engineered pandemics and nuclear war.[29]
The Cambridge Project at Cambridge University says the "greatest threats" to the human species are man-made; they are artificial intelligence, global warming, nuclear war, and rogue biotechnology.[30] The Future of Humanity Institute also states that human extinction is more likely to result from anthropogenic causes than natural causes.[5][31]
An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for nine million deaths worldwide in 2015, particularly from air pollution, which was linked to deaths by increasing susceptibility to non-infectious diseases such as heart disease, stroke, and lung cancer.[48] The report warned that the pollution crisis was exceeding "the envelope on the amount of pollution the Earth can carry" and "threatens the continuing survival of human societies".[48]
Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System.[49] Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. For example, scientists worried that the first nuclear test might ignite the atmosphere.[50][51] Others worried that the RHIC[52] or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been challenged,[53][54][55][56] but the general concern remains.
Biotechnology could lead to the creation of a pandemic, chemical warfare could be taken to an extreme, and nanotechnology could lead to grey goo, in which out-of-control self-replicating robots consume all living matter on Earth while building more of themselves—in each case, either deliberately or by accident.[57]
Global warming refers to the warming caused by human technology since the 19th century or earlier. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms. In November 2017, a statement by 15,364 scientists from 184 countries indicated that increasing levels of greenhouse gases from use of fossil fuels, human population growth, deforestation, and overuse of land for agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades.[3]
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself.[58]:303f Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "... all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]."[59]:370
Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of allocating earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is no way—or only little way—of knowing in advance if or when mankind will ultimately face extinction. In effect, any conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point.[60]:253–256 [61]:165 [62]:168–171 [63]:150–153 [64]:106–109 [65]:546–549 [66]:142–145 [67]
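To make the allocation argument concrete, the sketch below assumes a fixed stock size and per-generation consumption (both invented for illustration, not drawn from the cited literature): an even split across more generations drives each share toward zero, while any fixed positive share exhausts the stock after finitely many generations.

```python
# Toy illustration of the intertemporal allocation problem: a finite, non-renewable
# stock shared among an unknown number of generations. All quantities are assumed.
finite_stock = 1_000_000.0   # assumed total extractable resource units

# (a) An even split among N generations shrinks toward zero as N grows.
for n_generations in (10, 1_000, 100_000):
    print(n_generations, finite_stock / n_generations)

# (b) Any fixed positive per-generation allocation exhausts the stock in finite time.
per_generation = 500.0       # assumed constant consumption per generation
print(f"Stock exhausted after {finite_stock / per_generation:.0f} generations")
```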
Many nanoscale technologies are in development or currently in use.[68] The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision.[69] Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions.[68][69] When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software.[68]
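A rough way to see why self-replicating nanofactories would shift the bottleneck from manufacturing capacity to inputs is the doubling sketch below; the replication behavior and feedstock figures are illustrative assumptions, not values from the cited sources.

```python
# Toy model: the nanofactory count doubles each replication cycle until the assumed
# feedstock supply, not manufacturing capacity, becomes the limiting factor.
factories = 1
feedstock_units = 1_000_000        # assumed available input material
units_per_factory_per_cycle = 10   # assumed feedstock consumed per factory per cycle

cycle = 0
while factories * units_per_factory_per_cycle < feedstock_units:
    factories *= 2                 # each factory builds one copy of itself per cycle
    cycle += 1

print(f"After {cycle} cycles, {factories} factories; production is now input-limited.")
```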
Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons.[68] Equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.[68]
Chris Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories.
Several researchers say the bulk of risk from nanotechnology comes from the potential to lead to war, arms races, and destructive global government.[42][68][70] Several reasons have been suggested for why the availability of nanotech weaponry may, with significant likelihood, lead to unstable arms races (compared to, for example, nuclear arms races).
Grey goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation[75] and has been a theme in mainstream media and fiction.[76][77] This scenario involves tiny self-replicating robots that consume the entire biosphere using it as a source of energy and building blocks. Nowadays, however, nanotech experts—including Drexler—discredit the scenario. According to Phoenix, a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".[78]
In 2018, the Club of Rome called for greater climate change action and published its Climate Emergency Plan, which proposes ten action points to limit global average temperature increase to 1.5 degrees Celsius.[155] Further, in 2019, the Club published the more comprehensive Planetary Emergency Plan.[156]
The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy and famously maintains the Doomsday Clock established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology gone haywire at a global scale. It was founded by K. Eric Drexler, who postulated "grey goo".[157][158]
Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.[159]
Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence,[160] with donors including Peter Thiel and Jed McCaleb.[161] The Nuclear Threat Initiative (est. 2001) seeks to reduce global threats from nuclear, biological and chemical weapons, and to contain damage after an event.[162] It maintains a nuclear material security index.[163] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe.[164] Most of the research money funds projects at universities.[165] The Global Catastrophic Risk Institute (est. 2011) is a think tank for catastrophic risk. It is funded by the NGO Social and Environmental Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks.[18][19] The Future of Life Institute (est. 2014) aims to support research and initiatives for safeguarding life considering new technologies and challenges facing humanity.[166] Elon Musk is one of its biggest donors.[167] The Center on Long-Term Risk (est. 2016), formerly known as the Foundational Research Institute, is a British organization focused on reducing risks of astronomical suffering (s-risks) from emerging technologies.[168]
University-based organizations include the Future of Humanity Institute (est. 2005) which researches the questions of humanity's long-term future, particularly existential risk. It was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk (est. 2012) is a Cambridge-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. All are man-made risks, as Huw Price explained to the AFP news agency, "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."[169] Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities.[170][171] It was founded by Paul Ehrlich among others.[172] Stanford University also has the Center for International Security and Cooperation focusing on political cooperation to reduce global catastrophic risk.[173] The Center for Security and Emerging Technology was established in January 2019 at Georgetown's Walsh School of Foreign Service and will focus on policy research of emerging technologies with an initial emphasis on artificial intelligence.[174] It received a grant of US$55 million from Good Ventures as suggested by the Open Philanthropy Project.[174]
Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called the Global Alert and Response (GAR) which monitors and responds to global epidemic crises.[175] GAR helps member states with training and coordination of response to epidemics.[176] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program which aims to prevent and contain naturally generated pandemics at their source.[177] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate which researches issues such as bio-security and counter-terrorism on behalf of the government.[178]