Govern once/comply many: Leveraging cyber security framework experience to support AI governance
Abstract
Data protection is notoriously complex, and artificial intelligence (AI) has only added to that complexity. In addition, many organisations are floundering as they seek to adopt AI in an ethical and trustworthy manner. This paper addresses skill sets and frameworks familiar to IT and cyber security professionals that can be leveraged to help build a robust approach to AI governance. Adopting the maxim of ‘govern once/comply many’, the paper compares and contrasts existing cyber security frameworks and approaches that address the governance concerns that arise with AI. It also uses the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework as a lens through which to assess the utility of cyber security frameworks to inform AI governance efforts. Generally, the map, measure, manage and govern functions of the NIST Artificial Intelligence Risk Management Framework align well with the confidentiality, integrity and availability foci of established cyber security frameworks, forming the beginnings of a common language on issues of data protection and AI governance. This article is also included in The Business & Management Collection, which can be accessed at https://hstalks.com/business/.
Author's Biography
F. Paul Greene PhD, AIGP, CIPP/US, CIPP/E, CIPM, FIP is a partner at Harter Secrest & Emery, a full-service business law firm with offices throughout New York. He chairs the firm’s privacy and data security and artificial intelligence (AI) and new technologies practice groups. Recognised as a leading authority on data protection and AI, Paul helps clients maximise the benefits of technology while managing risk. A Fellow of Information Privacy and a Distinguished Fellow of the Ponemon Institute, he frequently speaks and publishes internationally. With a background in academia and commercial litigation, Paul brings deep insight into AI, forensic investigations, e-discovery and technology-related disputes. His clients include major retailers, financial institutions, healthcare organisations, higher education institutions, AdTech, AI developers and manufacturers. He has led high-profile incident response efforts involving millions of individuals and collaborates frequently with law enforcement and regulators. On the proactive side, Paul helps clients develop privacy management and AI governance programmes. His team is authorised as a Breach Coach® by NetDiligence. Fluent in German, Paul applies a global perspective to data protection and AI matters.