Future of Humanity Institute
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School.[1] Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.[2]
Not to be confused with the Future of Life Institute.
Formation: 2005
Dissolved: 16 April 2024
Purpose: Research big-picture questions about humanity and its prospects
The institute shared an office and worked closely with the Centre for Effective Altruism; its stated objective was to focus research where it could make the greatest positive difference for humanity in the long term.[3][4] It engaged in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations. The centre's largest research funders included Amlin, Elon Musk, the European Research Council, the Future of Life Institute, and the Leverhulme Trust.[5]
The Institute was closed down on 16 April 2024, having "faced increasing administrative headwinds within the Faculty of Philosophy".[6][7]
History
Nick Bostrom established the institute in November 2005 as part of the Oxford Martin School, then called the James Martin 21st Century School.[1] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference and published 22 academic journal articles and 34 chapters in academic volumes. FHI researchers were mentioned over 5,000 times in the media[8] and gave policy advice at the World Economic Forum, to the private and non-profit sectors (such as the MacArthur Foundation and the World Health Organization), and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States.
Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009.[9] In its later years, FHI focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.[10][11]
In 2018, Open Philanthropy recommended a grant of up to approximately £13.4 million to FHI over three years, with a large portion conditional on successful hiring.[12]
Human enhancement and rationality
Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies was its work on the promise and risks of human enhancement. The modifications in question could be biological, digital, or sociological, and an emphasis was placed on the most radical hypothesized changes rather than on the likeliest short-term innovations. FHI's bioethics research focused on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[21]
FHI also studied methods for assessing and enhancing human intelligence and rationality as a way of shaping the speed and direction of technological and social progress. Its work on human irrationality, as exemplified in cognitive heuristics and biases, included a collaboration with Amlin to study the systemic risk arising from biases in modeling.[22][23]