What 2025 taught us about AI agent security: A practitioner’s guide to the incidents shaping enterprise adoption
Abstract
This paper examines security incidents affecting artificial intelligence (AI) coding assistants and enterprise AI agents during 2025, providing security teams with practical guidance for risk assessment and mitigation. The paper synthesises notable vulnerabilities and public advisories affecting major platforms including GitHub Copilot, Cursor, Amazon Q Developer, Microsoft 365 Copilot and Claude Code, with a focus on enterprise-relevant impact rather than exhaustive coverage. Three vulnerability patterns recur across platforms: prompt injection enabling privilege escalation, inadequate authentication at trust boundaries and insufficient isolation between AI operations and sensitive resources. These patterns enabled remote code execution, credential theft and data exfiltration. Organisations deploying AI agents require updated security controls, including agent inventory management, configuration hardening and vendor security assessment. The recurrence of these patterns across diverse tools indicates the need for new forms of trust infrastructure and governance beyond product-by-product patching. This article is also included in The Business & Management Collection, which can be accessed at https://hstalks.com/business/.
Author's Biography
Tim Williams is Chief Executive Officer and Co-Founder of AstraSync AI, a Melbourne-based company building identity, trust and verification infrastructure for autonomous AI agents. He is also Founder and CEO of hyperIQ Consulting, which advises enterprises on AI strategy and implementation. Tim brings 20 years' experience commercialising artificial intelligence (AI) in customer-facing operations. As Chief Customer Experience Officer at Pepperstone, he led the deployment of generative AI (GenAI) capabilities across sales, support and marketing functions, achieving sustained improvements in customer satisfaction and operational productivity while managing teams across three countries. Tim's career spans financial services and healthcare, where he has built and led go-to-market organisations of up to 500 people. He is the author of 'Preparing for the Agentic Hyperevolution: How I Learned to Stop Worrying and Love the Robots' (2025), which examines how organisations can adapt to autonomous AI systems, and co-author of 'The Infrastructure Gap: Why Platform Security Cannot Protect Against Agentic Attacks', a technical analysis of identity and trust requirements for the agentic economy published on SSRN. Tim's particular interest lies in the intersection of regulatory compliance, enterprise security and the governance challenges posed by autonomous AI systems operating across organisational boundaries.