Publications
"What We Owe to Decision Subjects: Beyond Transparency and Explanation in Automated Decision-Making" (with Jeff Behrends and John Basl), Philosophical Studies, 2023 [abstract] [published]
- The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data using mathematical methods that depart from traditional statistical approaches, resulting in impressive advances in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts who design and deploy them. Is it morally problematic to make use of opaque automated methods when making high-stakes decisions, like whether to issue a loan to an applicant or whether to approve a parole request? Many scholars answer in the affirmative. However, there is no widely accepted explanation for why transparent systems are morally preferable to opaque ones. We argue that the use of automated decision-making systems sometimes violates duties of consideration that decision-makers owe to decision-subjects, duties that are both epistemic and practical in character. Violations of that kind generate a weighty consideration against the use of opaque decision systems. In the course of defending our approach, we show that it can address three major challenges sometimes leveled against attempts to defend the moral import of transparency in automated decision-making.
"Equalized Odds is a Requirement of Algorithmic Fairness," Synthese, 2023 [abstract] [penultimate]
- Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
"Embedded EthiCS: Integrating Ethics Broadly Across Computer Science Education" (with Barbara Grosz et al.), Communications of the Association for Computing Machinery, 2019 [final]
"Ethics and Artificial Intelligence in Public Health Social Work," in Artificial Intelligence and Social Work, ed. Milind Tambe and Eric Rice (Cambridge University Press), 2018 [penultimate]
Papers in Progress
"Ethics for Artificial Agents" [abstract] [draft]
- Machine ethics is a relatively new subfield of computer ethics that focuses on the ethical issues involved in the design of autonomous software agents ("artificial agents"). According to what I call the "agential theory" of machine ethics, it is morally permissible to design an artificial agent to perform a given action in a given situation if and only if it would be morally permissible for a human agent to perform the same action in the same situation. The agential theory has been highly influential in recent work on machine ethics. This paper argues that the agential theory is false by developing a series of counterexamples, and uses those counterexamples to illustrate more general lessons about how artificial agents ought to be designed to act.