AI winter
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence").[2] Roger Schank and Marvin Minsky—two leading AI researchers who had experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] Three years later, the billion-dollar AI industry began to collapse.
There were two major winters, during approximately 1974–1980 and 1987–2000,[3] as well as several smaller episodes.
Enthusiasm and optimism about AI have generally increased since their low point in the early 1990s. Beginning around 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment, culminating in the current (as of 2024) AI boom.
The setbacks of the late 1980s and early 1990s
The collapse of the LISP machine market
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and LISP Machines Inc., which built specialized computers, called LISP machines, optimized to process the programming language LISP, the preferred language for AI research in the USA.[35][36]
In 1987, three years after Minsky and Schank's prediction, the market for specialized LISP-based AI hardware collapsed. Workstations from companies like Sun Microsystems offered a powerful alternative to LISP machines, and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general-purpose workstations posed an increasingly difficult challenge for LISP machines. Companies like Lucid and Franz offered increasingly powerful versions of LISP that were portable to all UNIX systems; benchmarks were published showing workstations maintaining a performance advantage over LISP machines.[37] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture on which to run LISP applications. By 1987, some of them had become as powerful as the more expensive LISP machines, and rule-based engines such as CLIPS were available for them.[38] These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year.[39]
By the early 1990s, most commercial LISP companies had failed, including Symbolics, LISP Machines Inc., and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. A small number of customer companies (that is, companies using systems written in LISP and developed on LISP machine platforms) continued to maintain their systems, in some cases taking on the resulting support work themselves.[40]
Slowdown in deployment of expert systems
By the early 1990s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research on nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[41][42] Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from LISP to a C++ variant on the PC and helped establish object-oriented technology (including providing major support for the development of UML; see UML Partners).
A survey of reports from the early 2000s suggests that AI's reputation was still poor.
Many researchers in AI in the mid-2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasized particular tools or was directed at a particular sub-problem. Although this may be partly because they considered their field to be fundamentally different from AI, it is also true that the new names helped to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence".[49][50]
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[51][52] but the field is rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[53] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[54]