Abstract
When androids take care of delusional patients, ethical-epistemic concerns crop up about an agency’s good intent and why we would follow its advice. Robots are not human but may deliver correct medical information, whereas Alzheimer’s patients are human but may be mistaken. If humanness is not the question, do we then base our trust on truth? Truth is what can be logically verified given certain principles, principles one must already adhere to in the first place. In other words, truth comes full circle. Does truth come from empirical validation, then? That is a hard one too, because we access the world through biased sense perceptions and flawed measurement tools. We see what we think we see. Probably, the attribution of ethical qualities comes from pragmatics: if an agency affords delivering the goods, it is a “good” agency. If it does so regularly and predictably, the agency becomes trustworthy. Computers can be made more predictable than Alzheimer’s patients and, in that sense, may be considered morally “better” than delusional humans. That is, if we ignore the existence of graded liabilities. That is why I developed a responsibility self-test that can be used to navigate the moral minefield of ethical positions that evolves from differently weighing or prioritizing the principles of autonomy, non-maleficence, beneficence, and justice.
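The idea that different ethical positions evolve from differently weighing the same four principles can be illustrated with a minimal, purely hypothetical sketch. This is not the chapter’s responsibility self-test; all option names, scores, and weights below are invented for demonstration only.

```python
# Hypothetical illustration: the same two care options rank differently
# depending on how the four bioethical principles are weighted.
# All numbers are invented for demonstration.

PRINCIPLES = ["autonomy", "non-maleficence", "beneficence", "justice"]

# Invented per-option scores on each principle (0 = poor, 1 = ideal).
options = {
    "android caregiver": {"autonomy": 0.4, "non-maleficence": 0.9,
                          "beneficence": 0.8, "justice": 0.6},
    "human caregiver":   {"autonomy": 0.8, "non-maleficence": 0.6,
                          "beneficence": 0.7, "justice": 0.7},
}

def weighted_position(weights):
    """Rank the options by a weighted sum over the four principles."""
    scores = {name: sum(weights[p] * vals[p] for p in PRINCIPLES)
              for name, vals in options.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Two "ethical positions": one prioritizing non-maleficence, one autonomy.
safety_first   = {"autonomy": 0.1, "non-maleficence": 0.5, "beneficence": 0.3, "justice": 0.1}
autonomy_first = {"autonomy": 0.5, "non-maleficence": 0.2, "beneficence": 0.2, "justice": 0.1}

print(weighted_position(safety_first))    # the android option ranks first
print(weighted_position(autonomy_first))  # the human option ranks first
```

Under the first weighting the more predictable android ranks higher, under the second the human caregiver does, which is the kind of divergence between positions the self-test is meant to help navigate.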
Original language | English |
---|---|
Pages (from-to) | 233-254 |
Number of pages | 22 |
Journal | Intelligent Systems, Control and Automation: Science and Engineering |
Volume | 74 |
Publication status | Published - 1 Jan 2015 |
Externally published | Yes |
ASJC Scopus subject areas
- Control and Systems Engineering
- Mechanical Engineering
- Computer Science Applications
- Control and Optimization