
Philosophy of artificial intelligence

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science[1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will.[2][3] Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.[4] These factors contributed to the emergence of the philosophy of artificial intelligence.

See also: Ethics of artificial intelligence

The philosophy of artificial intelligence attempts to answer questions such as the following:[5]

Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
Can a machine have a mind, mental states, and consciousness in the same sense that a human being can?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.


Important propositions in the philosophy of AI include the following:

"Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." (the 1956 Dartmouth proposal)[7]

A physical symbol system can have a mind and mental states.[9]

Reasoning is nothing but reckoning (Hobbes).[10]

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program (software) and a computer (hardware). The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules).[71] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.[72]


This question bears on our earlier questions: if the human brain is a kind of computer, then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

Reasoning is nothing but reckoning.

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

Mental states are just implementations of (the right) computer programs.

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).[73]
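
A minimal sketch of what a physical symbol system amounts to in practice may help here: reasoning reduced to the mechanical "reckoning" of symbols, with no understanding anywhere in the loop. The facts, rules, and symbol names below are invented purely for illustration.

```python
# A toy "physical symbol system": reasoning carried out as nothing more
# than mechanical manipulation of symbols (forward chaining over rules).
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule whenever all of its premise symbols are already present.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# The system "reckons" its way to new symbols it never understood.
print(facts)
```

Whether such rule-following could ever amount to a mind is precisely what the Chinese room argument disputes.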

Other related questions

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people".[74] Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[74] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[75]

Can a machine be self-aware?

"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger.[76]

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.[77] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.[78] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.[79]


In 2009, scientists at Aberystwyth University in Wales and the UK's University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings.[80] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit the data it is given, such as finding the laws of motion from a pendulum's swings.
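
To give a rough sense of the idea (a toy sketch only, not Eureqa's actual algorithm; the candidate formulas and measurements below are invented for illustration), a program can score a handful of candidate formulas against data and keep the best fit:

```python
# Toy "formula discovery": score a few candidate laws for the period T of a
# pendulum of length L against measurements, and keep the best-fitting one.
import math

g = 9.81  # m/s^2

# Hypothetical measurements: (length in m, period in s)
data = [(0.25, 1.00), (0.50, 1.42), (1.00, 2.01), (2.00, 2.84)]

candidates = {
    "T = L": lambda L: L,
    "T = sqrt(L)": lambda L: math.sqrt(L),
    "T = 2*pi*L": lambda L: 2 * math.pi * L,
    "T = 2*pi*sqrt(L/g)": lambda L: 2 * math.pi * math.sqrt(L / g),
}


def squared_error(formula):
    # How badly a candidate formula misses the measured periods.
    return sum((formula(L) - T) ** 2 for L, T in data)


best = min(candidates, key=lambda name: squared_error(candidates[name]))
print("best-fitting formula:", best)  # -> T = 2*pi*sqrt(L/g)
```

Eureqa searches a vastly larger space of symbolic expressions, but the principle of testing candidate formulas against data is the same.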

Views on the role of philosophy

Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated.[4] Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress.[93]

Conferences and literature

The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller.


The main bibliography on the subject, with several sub-sections, is on PhilPapers.


A recent survey of the philosophy of AI is Müller (2023).[3]

References

Adam, Alison (1989). Artificial Knowing: Gender and the Thinking Machine. Routledge & CRC Press. ISBN 978-0-415-12963-3.
Benjamin, Ruha (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Wiley. ISBN 978-1-509-52643-7.
Blackmore, Susan (2005). Consciousness: A Very Short Introduction. Oxford University Press.
Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-19-967811-2.
Brooks, Rodney (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous Systems. 6 (1–2): 3–15. CiteSeerX 10.1.1.588.7539. doi:10.1016/S0921-8890(05)80025-9. Retrieved 2007-08-30.
Bryson, Joanna (2019). The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation, 34.
Chalmers, David J. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press. ISBN 978-0-19-511789-9.
Cole, David (Fall 2004). "The Chinese Room Argument". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy.
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Dennett, Daniel (1991). Consciousness Explained. The Penguin Press. ISBN 978-0-7139-9037-9.
Dreyfus, Hubert (1972). What Computers Can't Do. New York: MIT Press. ISBN 978-0-06-011082-6.
Dreyfus, Hubert (1979). What Computers Still Can't Do. New York: MIT Press.
Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell.
Fearn, Nicholas (2007). The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers. New York: Grove Press.
Gladwell, Malcolm (2005). Blink: The Power of Thinking Without Thinking. Boston: Little, Brown. ISBN 978-0-316-17232-5.
Haraway, Donna (1985). A Cyborg Manifesto.
Harnad, Stevan (2001). "What's Wrong and Right About Searle's Chinese Room Argument?". In Bishop, M.; Preston, J. (eds.). Essays on Searle's Chinese Room Argument. Oxford University Press.
Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press.
Hofstadter, Douglas (1979). Gödel, Escher, Bach: an Eternal Golden Braid.
Horst, Steven (2009). "The Computational Theory of Mind". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
Kaplan, Andreas; Haenlein, Michael (2018). "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.
Kurzweil, Ray (2005). The Singularity is Near. New York: Viking Press. ISBN 978-0-670-03384-3.
Lucas, John (1961). "Minds, Machines and Gödel". In Anderson, A.R. (ed.). Minds and Machines.
Malabou, Catherine (2019). Morphing Intelligence: From IQ Measurement to Artificial Brains (C. Shread, Trans.). Columbia University Press.
McCarthy, John (1999). What is AI?. Archived from the original on 4 December 2022. Retrieved 4 December 2022.
McDermott, Drew (May 14, 1997). "How Intelligent is Deep Blue". New York Times. Archived from the original on October 4, 2007. Retrieved October 10, 2007.
Moravec, Hans (1988). Mind Children. Harvard University Press.
Rescorla, Michael. "The Computational Theory of Mind". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. S2CID 55303721. Archived from the original (PDF) on 2015-09-23.
Searle, John (1992). The Rediscovery of the Mind. Cambridge, Massachusetts: MIT Press.
Searle, John (1999). Mind, Language and Society. New York, NY: Basic Books. ISBN 978-0-465-04521-1. OCLC 231867665.
Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433. ISSN 0026-4423.