Artificial intelligence
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.
"AI" redirects here. For other uses, see AI (disambiguation), Artificial intelligence (disambiguation), and Intelligent agent.
Some high-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT, Apple Intelligence, and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go).[2] However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[3][4]
Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence.[5] Artificial intelligence was founded as an academic discipline in 1956,[6] by those now considered the founding fathers of AI: John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.[7][8] The field went through multiple cycles of optimism,[9][10] followed by periods of disappointment and loss of funding, known as AI winters.[11][12] Funding and interest vastly increased after 2012, when deep learning surpassed all previous AI techniques,[13] and after 2017 with the transformer architecture.[14] This led to the AI boom of the early 2020s, with companies, universities, and laboratories overwhelmingly based in the United States pioneering significant advances in artificial intelligence.[15]
The growing use of artificial intelligence in the 21st century is influencing a societal and economic shift towards increased automation, data-driven decision-making, and the integration of AI systems into various economic sectors and areas of life, impacting job markets, healthcare, government, industry, education, propaganda, and disinformation. This raises questions about the long-term effects, ethical implications, and risks of AI, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
The various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence, the ability to complete any task a human can perform at least as well, is among the field's long-term goals.[16]
To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[17]
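As a concrete illustration of the "search and mathematical optimization" techniques named above, the following minimal Python sketch (a hypothetical toy example, not drawn from the cited sources) uses hill-climbing search to find an input that maximizes a defined goal function, echoing the lead's definition of AI as taking actions that maximize the chances of achieving defined goals:

```python
import random

def hill_climb(score, start, step=0.1, iterations=1000):
    """Greedy local search: repeatedly try a nearby point and keep it if it scores higher."""
    best = start
    best_score = score(best)
    for _ in range(iterations):
        candidate = best + random.uniform(-step, step)
        candidate_score = score(candidate)
        if candidate_score > best_score:  # accept only improving moves
            best, best_score = candidate, candidate_score
    return best, best_score

# Toy goal with a single peak at x = 3 (purely illustrative).
goal = lambda x: -(x - 3) ** 2
x, value = hill_climb(goal, start=0.0)
print(f"best x = {x:.3f}, score = {value:.3f}")
```

Real systems replace this one-dimensional toy goal with high-dimensional objectives and far more sophisticated search or optimization procedures, but the basic loop, propose a change and keep it only if it improves the objective, is the same.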
Future
Superintelligence and the singularity
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[325]
If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[341]
However, technologies cannot improve exponentially indefinitely; they typically follow an S-shaped curve, slowing as they approach the physical limits of what the technology can do.[342]
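The contrast between an unchecked intelligence explosion and S-shaped growth can be made concrete with a toy model (an illustrative sketch under assumed parameters, not a prediction from the cited sources). Exponential growth compounds without bound, while logistic growth flattens as capability approaches a hard limit:

```python
import math

def exponential(c0, r, t):
    """Unbounded self-improvement: each gain compounds indefinitely."""
    return c0 * math.exp(r * t)

def logistic(c0, r, t, limit):
    """S-shaped curve: growth slows as capability nears a physical limit."""
    return limit / (1 + ((limit - c0) / c0) * math.exp(-r * t))

# Assumed parameters: initial capability 1.0, growth rate 0.2, limit 100.0.
for t in range(0, 51, 10):
    print(f"t={t:2d}  exponential={exponential(1.0, 0.2, t):12.1f}  "
          f"logistic={logistic(1.0, 0.2, t, limit=100.0):6.1f}")
```

Under these assumed numbers the two curves are nearly indistinguishable at first, but by t = 50 the exponential trajectory has grown past 22,000 while the logistic one has leveled off just below its limit of 100.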
Transhumanism
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.[343]
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.[344]