AI–human interaction: Soft law considerations and application
This paper defines the utilisation of ‘soft law’ concepts and structures generally, considers the application of soft law to the perceived gap between artificial intelligence (AI) approaches and normal human behaviours, and subsequently explores the challenges presented by this application of soft law. The authors submit that AI is only becoming more prevalent, and that increased use of this technology logically creates greater opportunities for ‘friction’ when human norms and AI processes intersect, especially those processes that seek to replace human actions, albeit inconsistently and imperfectly. This paper regards that friction as inevitable, but instead of offering wholesale objections to, or imposing legal requirements on, AI’s imperfect intrusions into humans’ daily lives, the authors consider ways in which soft law can smooth the path to where we are collectively headed. As human–computer interaction increases, the true role of AI and its day-to-day back-and-forth with humans is itself rapidly developing into a singular field of study. And while AI has undoubtedly had positive effects on society that lead to efficient outcomes, its development has also presented challenges and risks to that which we consider ‘human’: risks that call for appropriate protections. To address those concepts, this paper establishes definitions to clarify the discussion and its focus on discrete entities; examines the history of human interaction with AI; evaluates the (in)famous Turing Test; and considers why a gap or ‘uncanny valley’ between normal human behaviour and current AI approaches is unsettling and potentially problematic. It also considers why certain types of disclosure regarding AI matter, and how they can assist in addressing the problems that may arise when AI attempts to function as a replacement for ‘human’ activities. Finally, it examines how soft law factors into the equation, filling a need and potentially becoming a necessity.
It considers the use-case of how one US legislative body initiated such a process by addressing problems associated with AI, and submits that there is a need for additional soft law efforts, a need that will persist as AI becomes increasingly important to daily life. In sum, the paper considers whether the uncanny valley is not a challenge so much as a barrier that protects us, and whether soft law might help create or maintain that protection.
The full article is available to subscribers to the journal.
Catherine Casey serves as Chief Growth Officer of Reveal Brainspace.
Ariana Dindiyal serves as an associate at BakerHostetler.
James A. Sherer serves as a partner at BakerHostetler.