VALUES IN COUNTERFACTUAL EXPLANATIONS
Emily’s research lies at the intersection of philosophy, data science, and computer science. She investigates how AI technology mediates (scientific) understanding and shapes norms of information sharing, scientific modeling, and explanation, as well as the ethical consequences that follow. Emily is currently the PI on an NWO Veni project (2021-2024) on the explainability of machine learning systems. She is Co-director of the Eindhoven Center for the Philosophy of AI, a fellow in the ESDiT research consortium, and an Associate Editor for the European Journal for the Philosophy of Science.
What are the hidden values in counterfactual explanations of AI decisions? In this talk, I discuss how the increasingly popular counterfactual explanation methods for AI decisions are value-laden. I examine the ethical and epistemic pitfalls that can come with counterfactual explanations, and describe what we are doing in the VACE project to tackle the problem from the perspective of philosophy-informed computer science.