Critical Perspectives on A.I. Series
Co-sponsored by the Don Shula Chair in Philosophy and the Raymond and Eleanor Smiley Chair in Business Ethics
Please join us for a talk by Dr. Cameron Buckner (Associate Professor of Philosophy, University of Houston)
“Towards Impartial Machines—Empiricist Moral Psychology for Machine Learning Research”
Friday, February 23rd, 3:30–5:00 p.m.
O'Connell Reading Room, Dolan Science Center.
Free and open to the public.
Abstract: While machine learning has recently achieved successes in areas as diverse as object recognition, strategy gameplay, text generation, and the prediction of protein folds, it has made comparatively little progress on social and moral decision-making. Even state-of-the-art systems struggle to overcome demographic biases derived from their training regimes, and as a result they often flounder on applied moral and social decision-making. In this talk, I argue that prior AI research has been too influenced by Machiavellian, rationalist, egoistic frames that emphasize the most abstract forms of moral and social decision-making, based on cold calculation using fully decontextualized universal principles. In the alternative, more empiricist-leaning, sentimentalist tradition—which I will characterize using the work of David Hume, Adam Smith, and Sophie de Grouchy—mature moral decision-making is instead acquired by adapting one's own emotional and affective learning capacities to coordinate with other agents to achieve impartiality. Such adaptation—often involving a faculty of empathy/sympathy—enables agents to cultivate virtues, solicit help, sanction social defectors, share cultural knowledge through social learning and shared decision-making, and, eventually, simulate the perspective of an "impartial observer" as an aid to normative decision-making.
Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. His research primarily concerns philosophical issues that arise in the study of non-human minds, especially animal cognition and artificial intelligence. He began his academic career in logic-based artificial intelligence. This research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. Recent representative publications include "Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks" (2018, Synthese) and "Rational Inference: The Lowest Bounds" (2017, Philosophy and Phenomenological Research)—the latter of which won the American Philosophical Association's Article Prize for the period of 2016–2018. He recently published a book with Oxford University Press that uses empiricist philosophy of mind (from figures such as Aristotle, Ibn Sina, John Locke, David Hume, William James, and Sophie de Grouchy) to understand recent advances in deep-neural-network-based artificial intelligence.