Much has been made of the potential emergence of so-called ‘responsibility gaps’, whether due to technology generally or to AI specifically. A multitude of responses to the responsibility gap challenge have emerged. One understudied response, however, is vicarious responsibility as a potential bridge for responsibility gaps. In cases of vicarious responsibility, one agent stands as a substitute for another. The idea behind vicarious responsibility is therefore a simple one: it can helpfully account for cases where an agent is responsible for the uncontrollable and unpredictable actions of some other entity. Human agents, in certain circumstances, can stand in as responsible for the harms that follow from autonomous systems.
Surprisingly, however, very little attention has been given to the prospect of vicarious responsibility for autonomous AI systems. With the exception of work by Trystan Goetze (2022), who argues in favour of vicarious responsibility for computing professionals in certain cases of AI-enabled harm, there is very little literature on the intersection between digital ethics/AI ethics and theories of vicarious responsibility.
Moreover, we find a similar lack of philosophical literature on vicarious responsibility more generally. While legal scholars debate its merit (usually in discussions of vicarious liability), only recently has there been a spate of attention devoted to the philosophical underpinnings of vicarious responsibility (most of it in a special issue of the Monist) (Collins and De Haan, 2021; Goetze, 2021; Kuan, 2021; Mellor, 2021; Radoilska, 2021; Glavaničová and Pascucci, 2024). In this paper I aim to plug this gap and assess the limits and potential of vicarious responsibility as a solution to the responsibility gap challenge. To do so I proceed as follows.
First, I outline in general terms why the idea of vicarious responsibility makes sense, despite suggestions that it is a contradiction in terms.
Second, I outline two different ‘faces’ of responsibility – accountability and answerability – and show how responsibility as answerability can plausibly be borne vicariously. This is motivated by the idea that while it seems unjustifiable to blame an agent for the deeds of another, in certain cases it makes sense to expect one agent to answer for the actions of another.
Third, I describe why vicarious responsibility might be a useful analytical lens for understanding autonomous systems. While Mellor (2021), for example, argues that vicarious responsibility can only exist between individuals (which I take to mean human persons), I think this is overly restrictive. We can and should adopt a more holistic perspective on vicarious responsibility whereby such responsibility is fittingly attributable even in cases where only one of the entities involved is a human agent. An easy example of such a case is the responsibility I might have for my dog if he were to bite someone. In such a scenario, I am responsible for an unexpected and uncontrollable action that I did not initiate. I apply this discussion to autonomous systems and suggest that, under certain conditions, we are vicariously responsible, and thus answerable, for the actions of these entities.