Categories, institutions, instruments: technology as a category?
Johannes F.M. Schick
University of Siegen, Germany
Ludovic Coupaye proposes to understand ‘technology’ in a twofold sense: as a discipline that studies techniques (i.e. as a human science) and as a category of contemporary Western societies. Conceiving of ‘technology’ as a category has the benefit of revealing its operative role in constructing an objective, observable and describable reality. To understand ‘technology’ as a category therefore implies that the socio-technical practices of using modern technical objects co-constitute this category: ‘technology’ thus becomes a specific mode of perceiving and constructing the world. My talk focuses on the genetic process by which ‘technology’ can become a category, against the backdrop of the “Category Project” of the Durkheim School. The underlying heuristic of this project is that human intelligence and its categories originate in social practices. To conceive of ‘technology’ as a category therefore requires an attempt to understand how techniques contribute to the genesis of a category and in what sense this category is the expression and crystallization of social things.
Durkheim added the rubric “Technology” to the Année Sociologique 4 (1901), defining “technology” as a branch of sociology and a science yet to be developed; nevertheless, “technologie” was not used as a category, and Durkheim himself preferred to focus on religious phenomena rather than on techniques. He assigned the task of studying technical phenomena to his nephew Marcel Mauss and to his ‘work twin’ Henri Hubert (a division of labour that resulted, for instance, in Mauss’s seminal “Techniques of the Body”). “Technology” was thus never spelled out as a category generated from social practices, unlike time, space or causality, which were each studied in their own right by members of the Durkheim School. But what would follow if ‘technology’ were conceived of as a category of the human mind instead of merely a subsection of the Année Sociologique in which Durkheim took little interest? What are the epistemological and philosophical ramifications? How can ‘technology’ become a category, and how can answering these questions help us to understand the human condition in the 21st century?
In my talk, I will develop my argument in four steps. Firstly, I will introduce the reciprocity of mind and bodily practices as central to the formation of categories in Durkheim and Mauss. In a second step, I will focus on the genesis of collective representations and categories. Thirdly, I will show how ‘technology’ can be conceived of as a category in the framework of the “Category Project” and relate this category to the goal of the Durkheimians to understand multiple modes of being human. In the concluding part, I will relate the category of ‘technology’ to the possibility of developing technology as a human science.
Instrumental rationality, value trade-offs, and medical AI
Zachary Daus
Monash University, Australia
Artificial intelligence (AI) is increasingly being used in various public sectors to achieve policy goals with greater efficiency. One such sector is health care. Medical AI is now being developed to make health care processes cheaper and faster, to free up scarce time for overworked clinicians, to reduce the need for expensive human labour, to predict when treatments may be successful and for which patients, and to better allocate scarce health care resources. While potentially beneficial given the scarcity of health care resources, this more efficient achievement of health care outcomes can nonetheless come at a cost to values external to health itself, such as privacy, equality, autonomy, and dignity. I argue that such value trade-offs can be better identified, and potentially resolved, through the application of Max Weber’s understanding of rationality and his conception of value conflicts. According to Weber, a number of societal developments in modernity, such as the rise of bureaucratic governance and industrial capitalism, have resulted in the dominance of instrumental rationality over value rationality, that is, the preference for action with reliably predictable consequences over action that is intrinsically valuable. This modern transformation in rationality has had ambiguous results, encouraging humans to give up inefficient, superstitious courses of action while trapping them in an ‘iron cage’ (stahlhartes Gehäuse) of unfreedom that is impervious to their higher-order values. The same logic is evident in the implementation of AI in health care: the gains in efficiency promised by AI may lead many to overlook its accompanying value trade-offs. For example, an AI diagnostic system for detecting skin cancer may expand health care access more efficiently while exhibiting bias against individuals with darker skin tones, undermining the intrinsic value of justice. Alternatively, an AI prediction system for determining treatment success may allocate scarce resources more efficiently while limiting health care access for those deemed unlikely to benefit from the treatment, undermining the value of social solidarity. After describing a number of the value trade-offs posed by the implementation of AI in health care, I argue that many of these trade-offs require democratic deliberation to be adequately assessed and fully resolved, and consider what such deliberation may entail.
Beyond Instrumentalism: reframing human-centered AI through Simondon's philosophy of technical objects
Luuk Stellinga, Paulan Korenhof, Vincent Blok
Wageningen University & Research, The Netherlands
‘Human-centered artificial intelligence’ (HCAI) has emerged as a prominent phrase in the societal debate on the implications of AI, framing the development, deployment, and governance of AI technologies (e.g., HLEG AI, 2019). However, the current discourse on HCAI lacks critical reflection on the nature of human-technology relations, resulting in an implicit instrumentalist perspective that treats AI technologies as means to human ends. This view does not adequately map onto the reality of AI technologies, which are not tools but system technologies that progressively change in nature through time and impact human existence more profoundly than merely by serving human ends. This leads to the critical question of how to ground HCAI in a richer understanding of technical objects, one that reflects the complexities of human-AI relations.
To address this question, we first provide a critical analysis of instrumentalism as a dominant yet reductive perspective in contemporary thinking about AI, and offer several arguments that reveal its shortcomings and demonstrate the need to move beyond it. A critical response can be found in Gilbert Simondon’s philosophy of technical objects, which argues that the instrumentalist perspective stems from a false dualism between technics and culture (Simondon, 1958). Following a reconstruction of this argument, we draw on Simondon’s understanding of technical objects as ontogenetic and relational entities to reconsider human-AI relationships, and argue that it reveals the current HCAI discourse as too focused on a desire to control AI systems and to shape their functionality towards increasing human capacities, while neglecting the material and social conditions within which such systems operate.
Simondon’s perspective is valuable in allowing us to move beyond instrumentalism, but also has two significant limitations. First, Simondon’s analysis of human-technology relations considers human beings only in direct relation to a technical object, for example as craftsman or engineer. This reveals a limited philosophical anthropology that does not sufficiently acknowledge the political dimension of human existence, i.e., the human as zoon politikon (cf. Arendt, 1958). Second, while Simondon’s analysis does acknowledge the transformative effects that technical objects have on their natural milieu, it does not consider the finality of this milieu and consequently fails to contend with the environmental costs of technical progress. Both limitations point towards important aspects of human-technology relations, and overcoming them is crucial in dealing with the current challenges of AI.
The paper concludes by proposing a progressive concept of HCAI that goes beyond the instrumentalist perspective on AI technologies as means towards human ends, instead promoting the careful integration of AI in society. By incorporating Simondon’s insights while addressing the limitations of his perspective, this work contributes to a philosophical grounding for HCAI, offering a critical and progressive vision for rethinking our relationship with AI technologies.
Arendt, H. (1958). The Human Condition. University of Chicago Press.
HLEG AI. (2019). Ethics Guidelines for Trustworthy AI. European Commission.
Simondon, G. (1958). On the Mode of Existence of Technical Objects.