Machine medical ethics: When a human is delusive but the machine has its wits about him

Research output: Journal article › Academic research › Peer-reviewed

Abstract

When androids take care of delusive patients, ethico-epistemic concerns crop up about an agency's good intent and about why we would follow its advice. Robots are not human but may deliver correct medical information, whereas Alzheimer's patients are human but may be mistaken. If humanness is not the question, then do we base our trust on truth? True is what can be logically verified given certain principles, which one has to adhere to in the first place; in other words, truth comes full circle. Does trust come from empirical validation, then? That is a hard one too, because we access the world through our biased sense perceptions and flawed measurement tools: we see what we think we see. Probably, the attribution of ethical qualities comes from pragmatics: if an agency affords delivering the goods, it is a "good" agency. If that happens regularly and in a predictable manner, the agency becomes trustworthy. Computers can be made more predictable than Alzheimer's patients and, in that sense, may be considered morally "better" than delusive humans. That is, if we ignore the existence of graded liabilities. That is why I developed a responsibility self-test that can be used to navigate the moral minefield of ethical positions that evolves from differently weighing or prioritizing the principles of autonomy, non-maleficence, beneficence, and justice.
Original language: English
Pages (from-to): 233-254
Number of pages: 22
Journal: Intelligent Systems, Control and Automation: Science and Engineering
Volume: 74
DOIs
Publication status: Published - 1 Jan 2015
Externally published: Yes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Mechanical Engineering
  • Computer Science Applications
  • Control and Optimization