Intimate technologies, brain chips and cyborgs: Revisiting the bright-line argument
Chair(s): John Sullins (Sonoma State University, United States of America)
How viable is the concept of authenticity in the face of intimate technologies? In this symposium, we will discuss whether "authenticity" remains a fecund concept for elucidating human agency and the human condition amid the use of intimate technologies and bodily or cognitive enhancements. Our authors will reflect on the importance of autonomy as a necessary condition for moral agency (Wandmacher 2016) and its relevance to the problem of authenticity in artificial companions (Johnson 2011; Turkle 2011), drawing on the Bright-Line Argument (Moor 2006; Wandmacher 2016) and alternative frameworks such as sociomorphing (Seibt et al. 2020). We will also revisit Donna Haraway's "A Cyborg Manifesto," which redefines "cyborg" as a metaphor for the breakdown of traditional boundaries (human/animal and animal/machine) rather than a literal technological hybrid (Clynes and Kline 1960). It critiques human exceptionalism and exposes the instability of these binaries. Haraway asserts that humans are inherently entangled with tools and non-human others, challenging notions of autonomy and purity. The cyborg thus represents our existential condition, revealing humanity's intrinsic interconnectedness and shared kinship with other forms of being. This session also examines the liberatory potential of intimate technologies, questioning whether they truly empower users or reinforce control by technological elites. Inspired by Marx and expanded by thinkers like Dewey, Heidegger, and Arendt, the philosophy of technology has long explored the sociopolitical impacts of technology. Today, intimate technologies, shaped by transhumanism and longtermism, often serve corporate interests. Through subtle nudges, they risk merging user desires with corporate goals, blurring the line between liberation and manipulation.
We will conclude by examining the widespread adoption of Generative AI (GenAI) tools, from dating apps (Lin 2024) to hospital transcription (Burke and Schellmann 2024), and their polarizing impact on academia. Using Participatory Sense-Making (PSM), the final paper critiques claims that GenAI systems are "collaborators," arguing that they lack genuine agency. Anthropomorphism and cognitive metaphysics are explored to reveal recurring errors in evaluating GenAI, highlighting the challenges of understanding these tools' ontological status and their role in human interaction.
Presentations of the Symposium
The Crux of the Bright-line Argument as an Explanatory Lens for Understanding Why the Problem of Authenticity Concerning Artificial Companions Persists
Aaron Butler University of Lucerne, Faculty of Theology, Lucerne Graduate School in Ethics LGSE, Institute of Social Ethics ISE
In this paper, I bring a discussion of the importance of autonomy in characterizing moral agency, that is, of being autonomous as a necessary condition for moral agency (Wandmacher 2016), into dialogue with a discussion of why the problem of authenticity is so important in debates about digital companions (mutatis mutandis, autonomous intelligences as artificial companions) and the behavioral outcomes of this class of entities in social settings with humans (Johnson 2011; Turkle 2011). My motivation is the suspicion that the reasons why autonomy is so important in elucidating the ontological requirements of moral agency shed light on why some are so fixated on authenticity in the relevant sense. Furthermore, I suspect that the explanatory lens at the crux of the Bright-Line Argument (Moor 2006; Wandmacher 2016), namely autonomy, does not lose its explanatory adequacy regarding the aforementioned problem of authenticity even if we switch to alternative models of explanation, such as sociomorphing, to account for the intimate techno-social phenomena articulated by the problem of authenticity concerning artificial companions (Seibt et al. 2020). That is to say, I suspect that the fixation of some on discussions of authenticity for the relevant class of artificial companions is rooted in our realization of the importance of being autonomous in characterizing moral agency, and that this will not go away in the face of alternatives to the concept of authenticity.
Partial Bibliography:
Johnson, Deborah G. 2011. "Computer Systems: Moral Entities but not Moral Agents." In Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, 168–183. Cambridge: Cambridge University Press.
Moor, J.H. 2006. "The Nature, Importance, and Difficulty of Machine Ethics." IEEE Intelligent Systems 21 (4): 18–21.
Seibt, Johanna, Christina Vestergaard, and Malene Damholdt. 2020. "Sociomorphing, Not Anthropomorphizing: Towards a Typology of Experienced Sociality." doi:10.3233/FAIA200900.
Turkle, Sherry. 2011. "Authenticity in the Age of Digital Companions." In Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, 62–76. Cambridge: Cambridge University Press.
Wandmacher, Stevens F. 2016. "The Bright Line of Ethical Agency." Techné: Research in Philosophy and Technology 20 (3): 240–257.
Thinking Otherwise
David Gunkel Northern Illinois University
In her essay "A Cyborg Manifesto," Donna Haraway famously does not define "cyborg" as a human being augmented with or denatured by various technological prostheses. That formulation of the term is far too literal, harkening back to the work of Manfred Clynes and Nathan Kline, who introduced the neologism in their 1960 paper on human space flight. As Haraway characterizes it, "cyborg" names a crucial boundary breakdown between the ontological distinctions that have separated the human from the animal and the animal from the machine. Although she does not say it in this exact way, what is called "cyborg" deconstructs the classic set of binary oppositions by which we, human beings, have defined ourselves and secured our sense of exceptionalism in opposition to those who have thereby become the excluded other.
The concept of the “cyborg,” it is important to point out, does not incite or institute these boundary breakdowns. It simply describes the contours and consequences of border skirmishes or untenable discontinuities that have been underway within and constitutive of the Western philosophical tradition from its very beginnings. The cyborg, therefore, does not cause or produce these ontological erosions that appear to threaten the authenticity of the human subject; it merely provides these dissolutions with a name. For this reason, the term “cyborg” identifies not just an enhanced human being, as is commonly formulated in the transhumanist movement. It also (and more importantly) describes the rather unstable ontological position in which the human subject already finds itself. We have, therefore, always and already been cyborgs, insofar as the difference separating the human and the animal and the animal and the machine have been and continue to be undecidable, contentious, and provisional.
Following the innovations of Haraway and others who have followed her lead, this paper argues that the cyborg is already the existential condition in which we find ourselves. We are always and already tangled up in our tools and instruments and these entanglements already shape our understanding and definition of ourselves as “human.” Thus, it is with the posthuman subject that is called “cyborg” that we can begin to acknowledge how the very idea of being human is originally tangled up in and inextricably involved with a myriad of others with whom/which we always and already share a common bond of kinship. This intimacy with non-human others and other forms of otherness is not some threat to the pristine integrity of the human organism but constitutes the original ontological and axiological conditions of that which we seek to protect and insulate from what only subsequently appears as other.
Intimate technologies and liberation
John Sullins Sonoma State University
As we become more intimately bonded to our technologies the philosophical problem of self-identity and self-liberty is raised. We seek to immerse our personalities into social media platforms to liberate ourselves from various weaknesses we feel in our more mundane social relationships. Likewise, we seek to create radically intimate technologies such as brain implants to overcome perceived mental weakness or inabilities. In short, we want intimate technologies to give us new forms of technologically mediated liberation. What we may fail to realize is that all technologies have makers and those makers set the terms for these liberations, economically, philosophically, and politically. We need to discuss what kind of liberation are we achieving and if it is any form of liberation at all. During this session, I want to re-open a foundational discussion in the philosophy of technology on the liberatory potential of intimate technologies.
The discussion about the role of technology in liberation was initially inspired by Marx, who noted the liberating or subjecting potential of various technologies found in social systems. Flawed as this theory was, it inspired, either directly or through opposition, more and deeper thought in the works of Dewey, Mumford, Ortega y Gasset, Heidegger, and Arendt, who all wrestled with the concept of the liberating potential of technology and technosocial systems. By mid-century, social theorists such as Jacques Ellul, Herbert Marcuse, and Max Weber had presented different theories arguing that technological rationality had become deeply embedded in the political realities of the era and served as a powerful force in determining political outcomes. More recently, it has become far less popular to discuss these grand political narratives in the philosophy of technology. Even though philosophers may be hesitant to enter that debate, others are very willing to do so. Technology CEOs in particular tend to grab headlines through grand pronouncements about the advertised liberating power of their technologies, be it liberation from work, from distance, or from nature.
In this discussion, we will look at how intimate technologies are not motivated by nuanced and responsible philosophies. They are not designed to liberate their users and give them personal and political freedoms. Instead, they are extensions of the more popular philosophies of transhumanism and longtermism, whose ultimate goal is to empower and enrich a small set of technological elites. In particular, this part of the discussion will focus on the ways that intimate AI technologies can be used to nudge or influence users in subtle ways. These nudges can so deeply influence our thoughts and desires that our own desires and the desires corporations want us to have become difficult to distinguish. The bright line between consumer and consumed will grow ever harder to discern.
Hell is Other Robots: Participatory Sense-Making and GenAI
Robin Zebrowski Beloit College
Although Generative AI (GenAI) existed before OpenAI's release of ChatGPT in November 2022, that event catalyzed a wide embrace of such tools across almost all aspects of daily life. GenAI tools have turned up in such unlikely places as dating apps, where they can act as a 'wingman' (Lin, 2024), and even in hospital rooms, where they act as transcription tools (although not without making up whole sentences, apparently) (Burke and Schellmann 2024). But for academics, such tools have been extremely polarizing: openly embraced by some as collaborators in the knowledge process, while cursed and scorned by many others who accuse them of trying to automate the juice of academic work itself: deep thought. As a result, there is a good deal of academic (and public/popular) discourse about whether these tools can substitute for humans in all sorts of different relationships, both intimate and professional. However, in these debates about the proper relationships between humans and GenAI, we generally fail to properly account for the ontological status of such tools. As a result, we fail to recognize those relationships that can genuinely only arise between agents in certain sorts of interactions.
In this paper, I begin with a broad sweep across many of the interesting uses of GenAI, often in the form of Large Language Models (LLMs) or similar predictive models like the art bots, and introduce the enactive theory of social cognition known as Participatory Sense-Making (PSM) to try to make sense of humans in interaction with these specific kinds of technologies. I argue, from this grounding, that those GenAI systems touted as "collaborators" in the knowledge process, for example, are being radically mischaracterized under this ontology, in part because they are not genuine agents in the enactive sense. I also look to the literature on anthropomorphism to help understand how we keep making the same kinds of errors in our evaluations of these systems. Ultimately, I conclude that our attempts to capture and quantify these core bits of our humanity remain problematic in the face of our failures to fully understand our own cognitive metaphysics.
Burke, G. and Schellmann, H. (October 26, 2024). Researchers Say an AI-Powered Transcription Tool Used in Hospitals Invents Things No One Ever Said. In AP News: https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
Lin, B. (October 5, 2024). Grindr Aims to Build the World’s First AI ‘Wingman’. In The Wall Street Journal: https://www.wsj.com/articles/grindr-aims-to-build-the-dating-worlds-first-ai-wingman-8039e091