It’s not the algorithm, it’s the ethics
Abstract
Machine learning and artificial intelligence (ML/AI) technologies have transformed nearly every industry, helping to realise unprecedented efficiency and effectiveness in a variety of tasks once thought the exclusive domain of humans. The financial compliance industry, however, lags its peers in adopting ML/AI tools, despite these tools being readily available and promising to reduce costs for financial institutions. This paper argues that the reason for the delay in adoption is not ignorance of the technology but the lack of a moral consensus around its use in financial compliance. It explores the ethics and morality behind the adoption of ML/AI tools and why compliance professionals are discouraged from deploying them in their compliance programmes. The paper introduces the trolley car problem and shows how it explains the lack of a moral consensus around the use of ML/AI in compliance. It then explores why machines, even though they can now pass the Turing test, remain incapable of making moral judgments, meaning that humans remain responsible for the actions taken by ML/AI. This places an unprecedented burden of moral decision-making, without any real benefit, on compliance officials who want to do good. The paper argues that if regulators shifted the incentive structure away from conformity and towards saving lives, making this the moral regime guiding the use of ML/AI, technology adoption would increase and allow the compliance industry to change the world for the better.
The full article is available to subscribers to the journal.
Authors’ Biographies
Gary M. Shiffman is an economist working to counter coercion and organised violence and to support others who do the same. After earning an undergraduate degree in Psychology, he began his career in the US Navy, with two tours in the Gulf War, followed by several positions in the national security community in Washington, DC. Dr Shiffman earned his PhD in Economics and joined the faculty of Georgetown University's School of Foreign Service in 2002. He published ‘Economic Instruments of Security Policy’ in 2006. He began incorporating machine learning into his research while serving as a principal investigator on Department of Defense-funded R&D projects related to insurgency, terrorism, and human trafficking. He has created two technology companies founded upon behavioural science-based machine learning software. In 2020, he published ‘The Economics of Violence: How Behavioral Science Can Transform Our View of Crime, Insurgency, and Terrorism’ (Cambridge University Press). His essays have appeared in The Hill, the Wall Street Journal, USA Today and other outlets.
Christopher Wall is pursuing his PhD at King's College London, researching and writing about political violence and the use of ML/AI for national security. Previously, he was involved in DARPA research on ML/AI, and he frequently lectures at several military commands throughout the Department of Defense, including SOCOM's Strategic Leadership International School. He also holds an appointment as an Adjunct Professor at Georgetown University, where he teaches a course titled ‘The Science of National Security’, designed to help future policymakers become more intelligent consumers of data. In 2018, he co-authored ‘The Future of Terrorism’ with the late Georgetown historian Walter Laqueur, and his writing has appeared in outlets such as The Hill.