Cognitive Integration Process for Harmonising Emerging Risks: A cognitive mental model for navigating emerging technology security in rapid workplace transformation
Abstract
The swift integration of emerging technologies such as artificial intelligence (AI), quantum computing and blockchain into workplace environments presents unprecedented security challenges that transcend traditional cybersecurity paradigms. This paper introduces the Cognitive Integration Process for Harmonising Emerging Risks (CIPHER), a novel cognitive mental model designed to assist security professionals in navigating the complex and dynamic security landscape of technology-driven workplace change. CIPHER differentiates itself from current frameworks by providing a flexible, cognitive approach to security strategy that can adapt to the unpredictable dynamics of integrating diverse technologies into organisations. The mental model consists of six stages: contextualise, identify, prioritise, harmonise, evaluate and refine. CIPHER integrates principles from cognitive science, game theory and dynamical systems theory to offer a memorable and flexible conceptual framework for assessing and addressing security threats in high-uncertainty, low-information contexts within interconnected technology ecosystems. This research illustrates CIPHER's ability to connect theoretical security principles with practical execution through its cognitive foundations, its integration with organisational procedures and its applicability across diverse emerging technology sectors. The paper examines essential elements of emerging technology security, encompassing the ethical ramifications of AI algorithms, privacy and legal issues, and the wider social effects of employment automation. Through theoretical foundations, practical applications and hypothetical case studies, this research demonstrates how the CIPHER mental model can assist organisations in formulating comprehensive, adaptive and ethically sound security strategies for the rapidly changing environment of workplace technology.
This article is also included in The Business & Management Collection, which can be accessed at https://hstalks.com/business/.
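To make the six stages concrete, the sketch below renders the CIPHER cycle as a simple iterative loop in Python. It is an illustration only: the CipherCycle class, the Risk structure and the likelihood-times-impact scoring are assumptions introduced here for clarity, not implementation details taken from the paper.

# Hypothetical sketch of the six-stage CIPHER loop:
# contextualise, identify, prioritise, harmonise, evaluate, refine.
# All names and the scoring heuristic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: float  # subjective estimate, 0..1
    impact: float      # relative severity, 0..1

    @property
    def score(self) -> float:
        # Simple likelihood x impact heuristic for ranking.
        return self.likelihood * self.impact

@dataclass
class CipherCycle:
    context: dict = field(default_factory=dict)
    risks: list = field(default_factory=list)

    def contextualise(self, environment: dict) -> None:
        # Capture the organisational and technological context.
        self.context.update(environment)

    def identify(self, candidates: list) -> None:
        # Record emerging risks relevant to the current context.
        self.risks.extend(candidates)

    def prioritise(self) -> list:
        # Rank recorded risks by their heuristic score.
        return sorted(self.risks, key=lambda r: r.score, reverse=True)

    def harmonise(self, ranked: list) -> dict:
        # Map the top risks to controls shared across technologies.
        return {r.name: f"control-for-{r.name}" for r in ranked[:3]}

    def evaluate(self, controls: dict) -> float:
        # Placeholder effectiveness measure for the chosen controls.
        return len(controls) / max(len(self.risks), 1)

    def refine(self, effectiveness: float) -> None:
        # Feed results back into the next iteration of the loop.
        self.context["last_effectiveness"] = effectiveness

cycle = CipherCycle()
cycle.contextualise({"sector": "enterprise AI rollout"})
cycle.identify([Risk("model-poisoning", 0.4, 0.9),
                Risk("data-leakage", 0.6, 0.7)])
controls = cycle.harmonise(cycle.prioritise())
cycle.refine(cycle.evaluate(controls))

In practice, each pass through refine would update the context, and the loop would repeat as the organisation's technology landscape shifts; the point of the sketch is the cyclical, feedback-driven structure rather than any particular scoring scheme.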
Author's Biography
Ben Kereopa-Yorke is an AI security researcher and theorist whose work explores the intersection of cognitive security governance, dynamical systems theory and AI risk quantification. As an Associate Editor of IEEE Transactions on Technology and Society and co-lead of the OWASP Machine Learning Security Top 10, his work has influenced frameworks including the NSA/ACSC/GCHQ Secure AI guidelines. His recent publications examine AI-augmented information operations and LLM applications in cybersecurity, introducing novel frameworks such as ClausewitzGPT for quantifying AI risk. Ben has written for the Brookings Institution, Cambridge and Oxford. While maintaining a senior security specialist role at Telstra, Ben's primary focus spans computational linguistics, sociotechnical systems impacts and the philosophical implications of human–computer interaction. He holds a Master's degree in cybersecurity and is pursuing additional postgraduate studies in cybersecurity operations at UNSW Canberra and in neuroscience at the University of New England (UNE), Australia. Ben advocates for interdisciplinary approaches to AI safety that enhance rather than diminish human agency. He sits on the Professional Standards Board of the Australian Computer Society.