
AI winter

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence").[2] Roger Schank and Marvin Minsky—two leading AI researchers who experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] Three years later, the billion-dollar AI industry began to collapse.


There were two major winters, approximately 1974–1980 and 1987–2000,[3] as well as several smaller episodes.


Enthusiasm and optimism about AI have generally increased since the field's low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment, leading to the current (as of 2024) AI boom.

The setbacks of the late 1980s and early 1990s

The collapse of the LISP machine market

In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and LISP Machines Inc., which built specialized computers, called LISP machines, that were optimized to process the programming language LISP, the preferred language for AI research in the USA.[35][36]
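XCON and the other expert systems of this era were rule-based ("production") systems: domain knowledge was written down as condition-action rules that a forward-chaining engine applied repeatedly to a working memory of facts. As a rough, hypothetical illustration only (XCON itself was written in OPS5 and held thousands of rules; the rule and facts below are invented), the core loop can be sketched in a few lines of Python:

```python
# Illustrative sketch of forward-chaining production rules, the style of
# reasoning used by 1980s expert systems such as XCON. The rule and facts
# here are hypothetical; real systems held thousands of rules in OPS5.

# Working memory: facts about a (made-up) order being configured.
facts = {"cpu": "VAX-11/780", "memory_boards": 2, "has_backplane": False}

# Each rule pairs a condition over working memory with an action that updates it.
rules = [
    # "If the order has memory boards but no backplane, add a backplane."
    (lambda f: f["memory_boards"] > 0 and not f["has_backplane"],
     lambda f: f.update(has_backplane=True)),
]

def forward_chain(facts, rules):
    """Fire any rule whose condition holds, repeating until nothing changes."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                before = dict(facts)
                action(facts)
                if facts != before:
                    changed = True
    return facts

print(forward_chain(facts, rules))
# {'cpu': 'VAX-11/780', 'memory_boards': 2, 'has_backplane': True}
```

The value of such a system lay almost entirely in its hand-written rule base, which is also what made it expensive to keep current.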


In 1987, three years after Minsky and Schank's prediction, the market for specialized LISP-based AI hardware collapsed. Workstations from companies like Sun Microsystems offered a powerful alternative to LISP machines, and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general-purpose workstations posed an increasingly difficult challenge for LISP machines. Companies like Lucid and Franz Inc. offered increasingly powerful versions of LISP that were portable to all UNIX systems; published benchmarks, for example, showed workstations maintaining a performance advantage over LISP machines.[37] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture on which to run LISP applications; by 1987, some of them had become as powerful as the more expensive LISP machines. These desktop computers also had rule-based engines such as CLIPS available.[38] These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year.[39]


By the early 1990s, most commercial LISP companies had failed, including Symbolics, LISP Machines Inc. and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. A small number of customer companies (that is, companies using systems written in LISP and developed on LISP machine platforms) continued to maintain their systems, which in some cases meant taking on the resulting support work themselves.[40]

Slowdown in deployment of expert systems

By the early 1990s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research on nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[41][42] Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.
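As a purely illustrative sketch (not drawn from XCON or any real rule base), the following Python fragment shows what "brittleness" and the qualification problem look like in practice: the rule encodes the default "turning the key starts the car" together with the single exception its author anticipated, so any unanticipated circumstance yields a confidently wrong answer.

```python
# Purely illustrative: a hand-written rule in the spirit of 1980s expert
# systems, showing "brittleness" and the qualification problem. The rule
# lists only the exception its author thought of, so unusual inputs that
# fall outside that list produce a confidently wrong answer.

def car_starts(facts: dict) -> bool:
    # Default: turning the key starts the car, unless the battery is dead.
    # The open-ended list of other qualifications (empty tank, missing
    # spark plugs, potato in the tailpipe, ...) is silently ignored.
    return bool(facts.get("key_turned")) and not facts.get("battery_dead", False)

print(car_starts({"key_turned": True}))                           # True  (sensible)
print(car_starts({"key_turned": True, "battery_dead": True}))     # False (anticipated exception)
print(car_starts({"key_turned": True, "fuel_tank_empty": True}))  # True  (grotesquely wrong)
```

Repairing such failures meant hand-writing ever more exception rules, which is part of why these systems were expensive to update and could not learn.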


The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, such as case-based reasoning or universal database access. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from LISP to a C++ variant on the PC and helped establish object-oriented technology (including providing major support for the development of UML; see UML Partners).

A survey of reports from the early 2000s suggests that AI's reputation was still poor:

Alex Castro, quoted in The Economist, 7 June 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."[47]

Patty Tascarella in the Pittsburgh Business Times, 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[48]

John Markoff in the New York Times, 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[49]


Many researchers in AI in the mid-2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem. Although this may be partly because they consider their field to be fundamentally different from AI, it is also true that the new names help to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence".[49][50]


In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[51][52] but the field was rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[53] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[54]

History of artificial neural networks

History of artificial intelligence

AI effect

Software crisis

Christian, Brian (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company. ISBN 978-0-393-86833-3. OCLC 1233266753.

UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.

DiFeliciantonio, Chase (3 April 2023). "AI has already changed the world. This report shows how". San Francisco Chronicle. Archived from the original on 19 June 2023. Retrieved 19 June 2023.

Goswami, Rohan (5 April 2023). "Here's where the A.I. jobs are". CNBC. Archived from the original on 19 June 2023. Retrieved 19 June 2023.

Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.

Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University: a Perspective". Archived from the original on 17 August 2007. Retrieved 30 August 2007.

Kurzweil, Ray (2005). The Singularity is Near. Viking Press. ISBN 978-0-670-03384-3.

Lighthill, Professor Sir James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council.

Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)

Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.

Muehlhauser, Luke (September 2016). "What should we learn from past AI forecasts?". Open Philanthropy Project.

Gursoy, F.; Kakadiaris, I. A. (2023). "Artificial intelligence research strategy of the United States: critical assessment and policy recommendations". Front. Big Data 6:1206139. doi: 10.3389/fdata.2023.1206139. Global trends in AI research and development are being largely influenced by the US. Such trends are very important for the field's future, especially in terms of allocating funds to avoid a second AI Winter, advance the betterment of society, and guarantee society's safe transition to the new sociotechnical paradigm. This paper examines, through a critical lens, the official AI R&D strategies of the US government in light of this urgent issue. It makes six suggestions to enhance AI research strategies in the US as well as globally.

Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."

ComputerWorld article (February 2005)

AI Expert Newsletter (January 2005)

"If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"

Patterns of Software - a collection of essays by Richard P. Gabriel, including several autobiographical essays

Review of "Artificial Intelligence: A General Survey" by John McCarthy

Other Freddy II Robot Resources - includes a link to the 90 minute 1973 "Controversy" debate from the Royal Academy of Lighthill vs. Michie, McCarthy and Gregory in response to Lighthill's report to the British government.