
Global catastrophic risk

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale,[2] even endangering or destroying modern civilization.[3] An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."[4]

"Existential threat" and "Doomsday scenario" redirect here. For other uses, see Doomsday (disambiguation).

Over the last two decades, a number of academic and non-profit organizations have been established to research global catastrophic and existential risks, formulate potential mitigation measures, and advocate for or implement those measures.[5][6][7][8]

Proposed mitigation

Multi-layer defense

Defense in depth is a useful framework for categorizing risk mitigation measures into three layers of defense:[35] prevention (reducing the probability that a catastrophe occurs in the first place), response (preventing a catastrophe from escalating to the global level), and resilience (improving humanity's ability to survive and recover if a global catastrophe does occur).

Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of World War II. It studies risks associated with nuclear war and nuclear energy, and famously maintains the Doomsday Clock, established in 1947. The Foresight Institute (est. 1986) examines the risks and benefits of nanotechnology. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology going haywire on a global scale. It was founded by K. Eric Drexler, who postulated "grey goo".[59][60]


Since 2000, a growing number of scientists, philosophers, and tech billionaires have created organizations devoted to studying global risks both inside and outside of academia.[61]


Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence;[62] its donors include Peter Thiel and Jed McCaleb.[63] The Nuclear Threat Initiative (est. 2001) seeks to reduce global threats from nuclear, biological, and chemical weapons, and to contain damage after an event.[8] It maintains a nuclear material security index.[64] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe,[65] with most of its research money funding projects at universities.[66] The Global Catastrophic Risk Institute (est. 2011) is a US-based non-profit, non-partisan think tank founded by Seth Baum and Tony Barrett. GCRI does research and policy work across various risks, including artificial intelligence, nuclear war, climate change, and asteroid impacts.[67] The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks.[68][69] The Future of Life Institute (est. 2014) works to reduce extreme, large-scale risks from transformative technologies and to steer the development and use of these technologies to benefit all life, through grantmaking, policy advocacy in the United States, European Union, and United Nations, and educational outreach.[7] Elon Musk, Vitalik Buterin, and Jaan Tallinn are among its biggest donors.[70] The Center on Long-Term Risk (est. 2016), formerly known as the Foundational Research Institute, is a British organization focused on reducing risks of astronomical suffering (s-risks) from emerging technologies.[71]


University-based organizations include the Future of Humanity Institute (est. 2005), which researches questions about humanity's long-term future, particularly existential risk.[5] It was founded by Nick Bostrom and is based at Oxford University.[5] The Centre for the Study of Existential Risk (est. 2012) is a Cambridge University-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming, and warfare.[6] All are man-made risks, as Huw Price explained to the AFP news agency: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology." He added that when this happens, "we're no longer the smartest things around" and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."[72] Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities.[73][74] It was founded by Paul Ehrlich, among others.[75] Stanford University also has the Center for International Security and Cooperation, which focuses on political cooperation to reduce global catastrophic risk.[76] The Center for Security and Emerging Technology was established in January 2019 at Georgetown's Walsh School of Foreign Service and focuses on policy research into emerging technologies, with an initial emphasis on artificial intelligence.[77] It received a grant of US$55 million from Good Ventures, as suggested by Open Philanthropy.[77]


Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called Global Alert and Response (GAR), which monitors and responds to global epidemic crises.[78] GAR helps member states with training and coordination of responses to epidemics.[79] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program, which aims to prevent and contain naturally generated pandemics at their source.[80] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate, which researches issues such as bio-security and counter-terrorism on behalf of the government.[81]

Further reading

Avin, Shahar; Wintle, Bonnie C.; Weitzdörfer, Julius; Ó hÉigeartaigh, Seán S.; Sutherland, William J.; Rees, Martin J. (2018). "Classifying global catastrophic risks". Futures. 102: 20–26. doi:10.1016/j.futures.2018.02.001.

Jensen, Derrick (2006). Endgame. ISBN 1-58322-730-X.

Meadows, Donella (1972). The Limits to Growth. ISBN 0-87663-165-0.

Wilson, Edward O. (2003). The Future of Life. ISBN 0-679-76811-4.

Holt, Jim (February 25, 2021). "The Power of Catastrophic Thinking". The New York Review of Books. Vol. LXVIII, no. 3. pp. 26–29. p. 28: "Whether you are searching for a cure for cancer, or pursuing a scholarly or artistic career, or engaged in establishing more just institutions, a threat to the future of humanity is also a threat to the significance of what you do."

Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 6, "Sustainability or Collapse". New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pages. ISBN 0865717044.

Garreau, Joel (2005). Radical Evolution. ISBN 978-0385509657.

Leslie, John A. (1996). The End of the World. ISBN 0-415-14043-9.

Tainter, Joseph (1990). The Collapse of Complex Societies. Cambridge University Press, Cambridge, UK. ISBN 9780521386739.

Bonnet, Roger-Maurice, and Woltjer, Lodewijk (2008). Surviving 1,000 Centuries: Can We Do It? Springer-Praxis Books.

Walsh, Bryan (2019). End Times: A Brief Guide to the End of the World. Hachette Books. ISBN 978-0275948023.

External links

"Are we on the road to civilisation collapse?". BBC. February 19, 2019.

MacAskill, William (August 5, 2022). "The Case for Longtermism". The New York Times.

"What a way to go" from The Guardian. Ten scientists name the biggest dangers to Earth and assess the chances they will happen. April 14, 2005.

"Humanity under threat from perfect storm of crises – study". The Guardian. February 6, 2020.

Annual Reports on Global Risk by the Global Challenges Foundation

Center on Long-Term Risk

Global Catastrophic Risk Policy

Stephen Petranek: 10 ways the world could end, a TED talk