© 2019, Springer Nature Switzerland AG. Machine learning and other types of AI algorithms are now commonly used to make decisions in important personal situations. Institutions use such algorithms to help decide whether a person should get a job, receive a loan, or even be granted parole, sometimes leaving the decision entirely to an automatic process. Unfortunately, these algorithms can easily become biased and make unjust decisions. To avoid such problems, researchers are working to include an ethical framework in automatic decision systems. A well-known example is MIT’s Moral Machine, which is used to extract, from extensive interviews with humans, the basic ethical intuitions that can then be applied to the design of ethical autonomous vehicles. In this chapter, we show the limitations of current statistical methods based on preferences and defend the use of abductive reasoning as a systematic tool for assigning values to possibilities and generating sets of ethical regulations for autonomous systems.
|Title of host publication||Studies in Applied Philosophy, Epistemology and Rational Ethics|
|Number of pages||15|
|Publication status||Published - 1 Jan 2019|
- Ethics of AI
- Machine learning