
Artificial general intelligence

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks.[1] This is in contrast to narrow AI, which is designed for specific tasks.[2] AGI is considered one of various definitions of strong AI.

Not to be confused with Generative artificial intelligence.

Creating AGI is a primary goal of AI research and of companies such as OpenAI[3] and Meta.[4] A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.[5]


The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved.[6] There is debate on the exact definition of AGI, and regarding whether modern large language models (LLMs) such as GPT-4 are early, incomplete forms of AGI.[7] AGI is a common topic in science fiction and futures studies.


Contention exists over the potential for AGI to pose a threat to humanity;[8] for example, OpenAI claims to treat it as an existential risk, while others find the development of AGI to be too remote to present a risk.[9][6][10]

Terminology

AGI is also known as strong AI,[11][12] full AI,[13] human-level AI[6] or general intelligent action.[14] However, some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness.[a] In contrast, weak AI (or narrow AI) is able to solve one specific problem, but lacks general cognitive abilities.[15][12] Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.[a]


Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans,[16] while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.[17]


A framework for classifying AGI by level of capability was proposed in 2023 by Google DeepMind researchers. They define five ascending levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI is defined analogously but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI.[18]
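To make the thresholds concrete, the sketch below maps the share of skilled adults a system outperforms on non-physical tasks to one of the five proposed levels. It is a hypothetical illustration, not part of the DeepMind proposal's text: only the 50% ("competent") and 100% ("superhuman") cut-offs appear above, and the intermediate cut-offs used for "expert" and "virtuoso" are assumptions made for the example.

```python
# Minimal sketch of the five-level classification described above.
# Only the 50% and 100% thresholds come from the article; the 90% and 99%
# cut-offs are illustrative assumptions, not sourced values.

AGI_LEVELS = [
    ("superhuman", 100.0),  # outperforms 100% of skilled adults
    ("virtuoso", 99.0),     # assumed cut-off for illustration
    ("expert", 90.0),       # assumed cut-off for illustration
    ("competent", 50.0),    # outperforms at least 50% of skilled adults
    ("emerging", 0.0),      # below the "competent" threshold
]

def classify_agi(percent_outperformed: float) -> str:
    """Return the first level whose threshold the system meets or exceeds."""
    for level, threshold in AGI_LEVELS:
        if percent_outperformed >= threshold:
            return level
    return "emerging"

if __name__ == "__main__":
    print(classify_agi(55.0))   # -> "competent"
    print(classify_agi(100.0))  # -> "superhuman"
```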

Researchers generally hold that, to qualify as an AGI, a system should be able to reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including common sense knowledge; plan; learn; communicate in natural language; and, if necessary, integrate these skills in completion of any given goal.

Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".

Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

Benefits

AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.[119]


AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer.[120] It could take care of the elderly,[121] and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education.[121] For virtually any job that benefits society when done well, it would probably, sooner or later, be preferable to delegate it to an AGI. The need to work to subsist could become obsolete if the wealth produced is properly redistributed.[121][122] This also raises the question of the place of humans in a radically automated society.


AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks.[123] If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true),[124] it could take measures to drastically reduce the risks[123] while minimizing the impact of these measures on our quality of life.

"Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)

Cukier, Kenneth

"The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)

Gleick, James

"A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)

Hughes-Castleberry, Kenna

"Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)

Immerwahr, Daniel

"Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymmied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.

Marcus, Gary

"In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.

Press, Eyal

"AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."

Roivainen, Eka

Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", , vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)

Foreign Affairs

External links

The AGI portal maintained by Pei Wang