Harmonising risk assessments for high-risk AI systems under the GDPR and the AI Act
Abstract
Artificial intelligence (AI) is transforming industries, unlocking unprecedented opportunities while posing significant challenges in areas such as data privacy, fairness and accountability, especially in high-risk applications. To address these concerns, the European Union (EU) has established a dual regulatory framework comprising the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). These frameworks employ risk-based mechanisms, including data protection impact assessments (DPIAs), fundamental rights impact assessments (FRIAs) and conformity assessments (CAs). This paper explores the interplay between these regulatory frameworks, focusing on their distinct scopes and the potential for harmonising risk management strategies. By analysing the practical benefits and challenges of integrating these assessments, the study identifies pathways to streamline compliance. The proposed strategy emphasises the importance of organisational context mapping, cross-functional collaboration, unified templates and continuous risk oversight. The findings demonstrate that harmonising these frameworks not only ensures legal compliance but also enhances operational efficiency, fosters stakeholder trust and supports responsible AI and data governance. This research provides actionable insights for organisations navigating overlapping regulatory requirements, enabling them to balance compliance with the advancement of AI technologies. This paper is also included in The Business & Management Collection, which can be accessed at https://hstalks.com/business/.
Author's Biography
Przemysław (Shemy) Gruchała is a European Union (EU)-qualified attorney-at-law with certifications including Certified Information Privacy Professional Europe (CIPP/E), Certified Information Privacy Manager (CIPM), Artificial Intelligence Governance Professional (AIGP) and Fellow of Information Privacy (FIP) from the International Association of Privacy Professionals (IAPP). With over eight years’ experience in data protection and emerging technologies, Shemy is recognised as a leading adviser in privacy and AI governance. In 2023, he was acknowledged by Legal500 for his work in the TMT sector. His solution-oriented approach bridges technical and legal insights, advancing ethical artificial intelligence (AI) governance and data protection across industries.
Lucrezia Nicosia is a data protection consultant with an LLB in European law from Maastricht University and an LLM in public international law, specialising in peace and security, from the University of Oslo. With nearly two years’ experience advising clients on data protection compliance across multiple jurisdictions, she is currently pursuing another LLM in information and communication technology law at the University of Oslo to further enhance her expertise in digital regulations.
Monika Zięciak is a data protection associate with an LLM from the University of Wrocław and over four years’ experience as a legal specialist. She advises businesses on privacy and business law, delivering practical and business-focused solutions. Monika is passionate about addressing privacy challenges in emerging technologies, particularly large language models and distributed ledger technology.