Machine ethics
Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents.[1] Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.[2]
James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. Drawing on his extensive research in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A machine can be more than one type of agent.[3]
(See artificial systems and moral responsibility.)
Ethical frameworks and practices
Practices
In March 2018, in an effort to address rising concerns over machine learning's impact on human rights, the World Economic Forum and Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning.[36] Based on the UN Guiding Principles of Human Rights, the World Economic Forum developed four recommendations to help address and prevent such discriminatory outcomes.[36]
In fiction
In science fiction, movies and novels have long played with the idea of sentient robots and machines.
Neill Blomkamp's Chappie (2015) imagines a scenario in which a person's consciousness can be transferred into a computer.[49] Alex Garland's 2014 film Ex Machina follows an android with artificial intelligence undergoing a variation of the Turing Test, a test administered to a machine to see whether its behavior can be distinguished from that of a human. Films such as The Terminator (1984) and The Matrix (1999) incorporate the concept of machines turning on their human masters.
Asimov considered the issue in the 1950s in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing his three laws' boundaries to see where they break down or create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[50] Philip K. Dick's 1968 novel Do Androids Dream of Electric Sheep? explores what it means to be human. In his post-apocalyptic scenario, he questions whether empathy is an entirely human characteristic. The book is the basis for the 1982 science-fiction film Blade Runner.