Security audits on artificial intelligence systems
Abstract
Auditing is important for ensuring the security and compliance of artificial intelligence (AI) systems. Unlike traditional software security audits, which primarily address well-documented vulnerabilities, AI systems introduce distinctive challenges due to their reliance on complex machine learning (ML) models and expansive data pipelines. This paper presents key considerations for a security audit specifically tailored to AI systems, emphasising core components such as model robustness, adversarial defences, penetration testing, data privacy compliance and continuous monitoring. It systematically identifies crucial areas of focus, including data sources, ML models and application interfaces, while also detailing specialised security tools such as the IBM Adversarial Robustness Toolbox and Microsoft Counterfit. Furthermore, the paper integrates established security standards and methodologies, including the MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) and the NIST AI Risk Management Framework, to address the unique threats posed by AI technologies. By adopting this holistic auditing approach, organisations can enhance the resilience of their AI systems against evolving cyber threats, thereby ensuring their operational reliability and compliance with regulatory standards.
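To illustrate the kind of model-robustness check such tools support during an audit, the sketch below uses the IBM Adversarial Robustness Toolbox (ART) to generate adversarial examples against a toy classifier and compare clean versus adversarial accuracy. The dataset, model and attack parameters are illustrative assumptions for this sketch and are not taken from the article.

import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data standing in for the audited model's inputs (illustrative only)
rng = np.random.default_rng(0)
X = rng.random((200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Train a simple classifier and wrap it for ART
model = LogisticRegression().fit(X, y)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method, then compare
# accuracy on clean versus perturbed inputs to gauge robustness
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_adv = attack.generate(x=X)

print("Clean accuracy:      ", model.score(X, y))
print("Adversarial accuracy:", model.score(X_adv, y))

A marked drop in accuracy on the perturbed inputs would flag the model for further adversarial-defence work within the audit.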
Author's Biography
Robert Kemp received his PhD in cyber security from De Montfort University in 2023, with research focused on cyber security standards and the critical infrastructure industry. He currently works as a Senior Security Manager for a global organisation. His research focuses on artificial intelligence and security standards.