Between Human and Algorithmic Decisions: Analyzing the Ambiguities in the AI Act Definition of AI
David Doat
Catholic University of Lille, Belgium
In contemporary academic discourse on artificial intelligence, scarcely any publication neglects to underscore the extensive deployment of artificial intelligence systems (AIS), akin to a pervasive social phenomenon, permeating all facets of human endeavor: healthcare, education, justice, security, as well as economic and financial sectors, private life, and beyond. These systems serve, among other functions, to optimize information processing procedures, generate recommendations, and automate complex processes. However, a significant number of these systems also render "decisions" that influence individuals' life courses, thus affecting their trajectories and opportunities, their quality of life, and their overall well-being. The European Regulation on Artificial Intelligence (AI Act) adopts a definition that encompasses decision-making as one of the operations executed by AI. The objective of my presentation is to show that the incorporation of the concept of "decision" in the legal definition of AI articulated in the current text of the AI Act is the source of a profound ambiguity, one incapable of withstanding philosophical scrutiny and therefore in need of rectification. Even though the concept of decision is utilized in numerous contexts familiar to computer scientists and within the reference frameworks of the discipline, such as the IEEE Computer Science Curricula, its invocation within a legal definition of AI bears the risk of significant categorical confusion among European citizens from diverse cultural, educational, professional, and disciplinary backgrounds. Given the inherent distinctions between human decision-making acts and automated "decision-making" acts, legislative restraint from employing the notion of "decision" in the established definition of AI would have been preferable.
This is corroborated in the judicial domain, wherein it is emphasized that "the use of AI tools can support the decision-making power of judges or judicial independence, but should not replace them, as the final decision must remain a human activity". Consequently, my presentation will advocate this thesis by advancing four arguments. The initial argument will involve an original philological and philosophical analysis of the concept of decision, highlighting its formal structure and anthropological specificity. The second argument will draw from the philosophy of language and the theory of the performativity of speech acts (Austin, Searle, Chomsky). The third argument will derive from theories of embodied cognition and Dewey's conception of the relationship between the decision-maker and the associated environment. The final argument will present an analysis and critique of the interests and limitations inherent in attempts to formalize decisions within decision theory. In conclusion, I will emphasize the legal necessity of distinguishing between metaphorical uses of the concept of decision-making, wherein delegation to machines incurs no epistemic or ethical consequences, and the authentic dynamics of human decision-making, which cannot be supplanted by algorithmic processes. In light of the proposed analysis, I will revisit the inherent ambiguity in the use of the concept of decision within the definition of AI in the European AI Act, advocating for modifications to the definition's text based on the analyses presented. In this context, I will provide several proposals for discussion.
Test and regulation: how testing to regulation leads to failure
Matthew James Phillip Wragg
University of Edinburgh, United Kingdom
As a rudimentary form of technology, construction products and systems affect our day-to-day lives in ways that we hope not to perceive or engage with. The majority of us will enter and exit a building without considering that someone, somewhere, should know that the materials and systems used to make the structure are the right products and systems for that structure, and that they are fit for their intended use (Construction Product Regulations, 2011).
Yet when buildings do fail, the impact is felt acutely. From relatively minor failures that can result in long-term health issues (Murphy, 2006; Awaab Ishaak – Prevention of future deaths report, 2022), to cumulative failures that lead to catastrophe (Grenfell Tower Inquiry, Phase 2, 2024), we seek to understand why these failures have happened, even though the regulations in place exist to provide assurance of the repeatable behaviour of a product that is fit for its intended use (Chhobra, 2020), i.e. uses that should not lead to often preventable failure.
How do we make claims of fitness for use, and what is the relationship between fitness for use in the test environment and in situ? In this paper I shall argue that by removing the unnecessary variables that inform how and what we test for when confirming the future performance of a product under test (Downer, 2007), we limit our epistemic claims of product performance to data gathered about how a product performs in comparison to a product type, rather than to how this product performs under certain conditions, leaving the door open to both physical and epistemic failure.
Using current regulatory schemes and experience drawn from working with a UKAS accredited Product Certification Body and Technical Assessor (those accredited to validate claims of performance made by manufacturers), I shall discuss how we use standardisation, test, and assessment to restrict the regulated uses (functions) of a product, maintaining epistemic accuracy by reducing the scope of the epistemic claims but not by reducing the potential functions of a product. By relying on what a manufacturer deems relevant to declare on the basis of a product's foreseeable use (CE Marking of Construction Products – Step by Step, 2014), we have a limited understanding of expected performance, one that can account for only very few of both the regulated and the potential uses of a product.
Although my work concentrates on how we create and continually verify and validate epistemic claims within the context of the construction industry and civil engineering, it is situated within a broader context relevant to the philosophy of technology: what is the purpose of the test environment, and how do we use it to regulate artefacts?