Machine ethics

Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents.[1] Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.[2]

James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. As an extensive researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor defines machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A machine can be more than one type of agent.[3]

Ethical impact agents: machine systems that carry an ethical impact, whether intended or not, and that have the potential to act unethically. Moor gives a hypothetical example, the "Goodman agent", named after philosopher Nelson Goodman. The Goodman agent compares dates but has the millennium bug: its programmers represented dates with only the last two digits of the year, so any date after 2000 is misleadingly treated as earlier than dates in the late 20th century. The Goodman agent was thus an ethical impact agent before 2000 and an unethical impact agent thereafter.

Implicit ethical agents: agents programmed with a fail-safe, or built-in virtue, in the interest of human safety. They are not ethical in their own right, but are programmed to avoid unethical outcomes.

Explicit ethical agents: machines capable of processing scenarios and acting on ethical decisions; machines that have algorithms to act ethically.

Full ethical agents: machines similar to explicit ethical agents in being able to make ethical decisions, but that also have human metaphysical features (i.e., free will, consciousness, and intentionality).
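Moor's Goodman agent can be sketched in a few lines of Python. This is a hypothetical illustration, not code from Moor's paper: a comparator that stores only two-digit years orders pre-2000 dates correctly but misorders dates across the millennium boundary.

```python
# Hypothetical sketch of the "Goodman agent": a date comparator whose
# programmers stored only the last two digits of the year (the millennium bug).
def earlier(date_a, date_b):
    """Return True if date_a precedes date_b.

    Dates are (two_digit_year, month, day) tuples, compared lexicographically.
    """
    return date_a < date_b

# Before 2000 the two-digit scheme happens to order dates correctly:
print(earlier((98, 1, 1), (99, 6, 15)))   # 1998-01-01 before 1999-06-15: True, correct

# After 2000 it misorders: 2001 (stored as 01) is treated as earlier than
# 1999 (stored as 99), so the agent's ethical impact turns unethical.
print(earlier((1, 3, 10), (99, 6, 15)))   # also True, but 2001-03-10 is NOT before 1999
```

Widening the representation to four-digit years restores correct ordering on both sides of the boundary, which is why the same agent counts as an ethical impact agent before 2000 and an unethical one after.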


(See artificial systems and moral responsibility.)

Ethical frameworks and practices

Practices

In March 2018, in an effort to address rising concerns over machine learning's impact on human rights, the World Economic Forum and its Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning.[36] The World Economic Forum developed four recommendations based on the UN Guiding Principles of Human Rights to help address and prevent discriminatory outcomes in machine learning: active inclusion, fairness, the right to understanding, and access to redress.[36]

In fiction

Science fiction movies and novels have long played with the idea of sentient robots and machines.


Neill Blomkamp's Chappie (2015) depicts a scenario in which one's consciousness can be transferred into a computer.[49] Alex Garland's 2014 film Ex Machina follows an android with artificial intelligence undergoing a variation of the Turing Test, a test administered to a machine to see whether its behavior can be distinguished from that of a human. Films such as The Terminator (1984) and The Matrix (1999) incorporate the concept of machines turning on their human masters.


Isaac Asimov considered the issue in the 1950s in I, Robot. At the insistence of his editor, John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing his three laws' boundaries to see where they break down or create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[50] Philip K. Dick's 1968 novel Do Androids Dream of Electric Sheep? explores what it means to be human. In his post-apocalyptic scenario, he questions whether empathy is an entirely human characteristic. The book is the basis for the 1982 science-fiction film Blade Runner.

See also

Affective computing

Formal ethics

Bioethics

Computational theory of mind

Computer ethics

Ethics of artificial intelligence

Moral psychology

Philosophy of artificial intelligence

Philosophy of mind

Artificial intelligence

AI safety

Automating medical decision-support

Google car

Military robot

Machine Intelligence Research Institute

Robot ethics

Space law

Self-replicating spacecraft

Watson project for automating medical decision-support

Further reading

Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. US: Oxford University Press.

Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press.

Storrs Hall, J. (May 30, 2007). Beyond AI: Creating the Conscience of the Machine. Prometheus Books.

Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), pp. 18–21.

Anderson, M. and Anderson, S. (2007). Creating an Ethical Intelligent Agent. AI Magazine, 28(4).

Hagendorff, Thilo (2021). Linking Human and Machine Behavior: A New Approach to Evaluate Training Data Quality for Beneficial Machine Learning. Minds and Machines, doi:10.1007/s11023-021-09573-8.

Anderson, Michael; Anderson, Susan Leigh, eds. (July/August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems 21 (4): 10–63.

Bendel, Oliver (December 11, 2013). Considerations about the Relationship between Animal and Machine Ethics. AI & SOCIETY, doi:10.1007/s00146-013-0526-3.

Dabringer, Gerhard, ed. (2010). "Ethical and Legal Aspects of Unmanned Systems. Interviews". Austrian Ministry of Defence and Sports, Vienna 2010, ISBN 978-3-902761-04-0.

Gardner, A. (1987). An Artificial Approach to Legal Reasoning. Cambridge, MA: MIT Press.

Georges, T. M. (2003). Digital Soul: Intelligent Machines and Human Values. Cambridge, MA: Westview Press.

Singer, P.W. (December 29, 2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century. Penguin.

Winfield, A., Michael, K., Pitt, J. and Evers, V. (March 2019). Special Issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems. Proceedings of the IEEE, 107 (3): 501–615, doi:10.1109/JPROC.2019.2900622.

External links

Machine Ethics, an interdisciplinary project on machine ethics.

The Machine Ethics Podcast, a podcast discussing machine ethics, AI, and tech ethics.