Conference Agenda

Session
Beyond the Specter of “AI”: Algorithmic Bias, Systems of Power, and the Impact of Machine Learning on Contemporary Soundscapes
Time:
Sunday, 09/Nov/2025:
9:00am - 10:30am

Location: Lakeshore B

Session Topics:
Philosophy / Critical Theory, Science / Medicine / Technology, Race / Ethnicity / Social Justice, AMS

Presentations

Beyond the Specter of “AI”: Algorithmic Bias, Systems of Power, and the Impact of Machine Learning on Contemporary Soundscapes

Chair(s): Eric Drott (University of Texas)

Since the infamous debut of ChatGPT in November 2022, AI technology has become the main focus of the tech industry and its visions of the future. This prodigious impact has extended to all areas of society in which tech is entangled, including law, government, the arts, and academia. In music, where algorithmic recommendations have ruled streaming services like Spotify for years (Drott 2023), generative AI and voice clones present urgent challenges in law, labor, data ownership, and ethics. Along with Drott (2023) and Keith, Collins, Renzo, and Mesker (2023), this session contends that the “AI turn” extends further back than this recent history; by attending to earlier instances of algorithmic bias and overreach, musical copying, and technological threats to sonic labor and livelihoods, we can better understand our current moment amidst the fervor and furious pace of machine learning development. Moreover, we see “AI” itself as a shifting buzzword, a concept like whiteness whose plasticity has enabled those wielding it to surveil, control, and exercise power in changing ways throughout its history (Katz 2020). These papers deal with three of the nebulous forms AI has taken in recent years, and the ways those forms interface with systems of power.

“I’ll Be Bach: On Compositional Identity, Machine Learning Algorithms, and the Claims of Computational Composition” explores how Bach-imitating compositional algorithms dating back to the 1980s reify narratives of great composers and the white racial frame of music theory. “What Madonna and Kraftwerk Can Teach Us about Music Copyright after the ‘AI Turn’” examines court cases on sampling and pastiche to analyze how legal precedent may shape the landscape of AI music, ownership, and creativity. Finally, “What Is a Voice Worth? Voice Actors and AI Voice Labor under Techno-Capitalism” follows the voice actors behind AI voices from Siri to ChatGPT’s Sky to interrogate exploitative labor and data practices in the tech industry, calling for transparency and fair payment for voice actors whose livelihoods are under threat of replacement by machine learning models.

 

Presentations of the Symposium

 

I’ll Be Bach: On Compositional Identity, Machine Learning Algorithms, and the Claims of Computational Composition

Eric Whitmer
University of Michigan

On March 21, 2019, in celebration of Johann Sebastian Bach’s 334th birthday, the featured graphic on Google’s search engine became an interactive animation that prompted users to input a melody to be harmonized in “[Bach’s] signature style.” Touted as the first “AI-powered” Google Doodle, the animation absorbed 350 years of collective user time in its first three days. While this may be the best-known computer program that imitates Bach, algorithms dedicated to mimicking his compositional output have appeared since 1988, emerging from some of the first experiments in automated musical composition.

Following Bettina Varwig’s call in Rethinking Bach for novel approaches to the composer, this paper traces the lineage of Bach-imitating algorithms and demonstrates how computer scientists routinely treat music-theoretical principles as fixed, definite concepts. Drawing upon previous musicological and music-theoretical scholarship on J.S. Bach, I show how computer engineers regularly ignore and simplify musical concepts in the process of reducing music from aesthetic practice to computational output. Unwittingly, then, these computer engineers re-inscribe what Philip Ewell identifies as the white racial frame of music theory and perpetuate epistemological violence. Consequently, these technologies follow in the footsteps of other machine learning algorithms in amplifying societal bias while hiding behind a veneer of objectivity. I conclude by theorizing about the musical (il)logic that resides in the incompressible space between 0 and 1, and the relevance to music scholarship of what the DISCO Network (2025) terms “technoskepticism.”

 

What Madonna and Kraftwerk Can Teach Us about Music Copyright after the “AI Turn”

Matthew Blackmar
Indiana University

Court decisions regarding Madonna and Kraftwerk might yet shape the AI music-copyright debate. Of late, copyright’s legal balance between corporation and individual has shifted to occlude the author, even as fair-use provisions have been extended, in US and EU jurisdictions, to protect Silicon Valley corporations’ practices of harvesting, caching, indexing, and thumbnailing public data online (Gray 2020). This paper thus asks how we might better understand “data mining” as a fair-use exception for AI developers in light of a landmark decision by the US Ninth Circuit, on minimal sampling, and a pending decision by the Court of Justice of the European Union (CJEU), on pastiche. The former concerns Madonna’s use of a 0.23-second horn sample in her 1990 hit “Vogue”; the latter concerns Pelham GmbH’s use of a two-second loop from Kraftwerk’s 1977 “Metall auf Metall.” Together, the two cases summon urgent questions regarding legal constructions of authorship, artistic agency, and fair use.

To reconcile discourses and jurisdictions, I read case law to argue that a pernicious dichotomy has emerged, in US and EU jurisprudence, between musical fair use and “data mining,” with implications for recording and publishing in the wake of the “AI turn.” While the decision in VMG Salsoul v. Ciccone, 824 F.3d 871, promises a renewal of the de minimis doctrine pertaining to sampling of minimal length or extent, the pending CJEU decision, on appeal, in Pelham GmbH et al. v. Ralf Hütter and Florian Schneider-Esleben (2023) promises to rehabilitate the fair-use exception for pastiche. These cases might thereby clarify two issues for understanding the copyright ramifications of AI audio generators: to what extent do AI training models “borrow” fragments of recorded music under copyright? And to what extent do they reassemble these “borrowings” in the music they generate? While Sebastian Stober’s work (2024) suggests that notions of musical “borrowing” and “pastiche” are incommensurate with the technical workings of AI audio generators, I argue that, whether or not jurists understand how to “disentangle” training-data sets, they might yet deliver a legally binding understanding that will shape the direction of copyright, creativity, and industry to come.

 

What Is a Voice Worth? Voice Actors and AI Voice Labor under Techno-Capitalism

Kelly Hoppenjans
University of Michigan

When OpenAI released GPT-4o in May 2024, users became suspicious about similarities between its voice “Sky” and the voice of actor Scarlett Johansson. Johansson famously played an AI voice in the 2013 film “Her,” and the actor alleged that OpenAI used her voice to train “Sky” without her knowledge or consent. Johansson’s high-profile case is representative of a growing existential threat to the voice actor community: exploitation of voice actors’ labor and voice ownership to create AI voice clones that can continue to “work” for free. In this techno-capitalist model, voice recordings are treated as data for training AI models from which tech companies can profit, much like the music and user data that drive Spotify’s algorithms (Drott 2023). Throughout their history, AI and automation have been predicated on alienating knowledge from workers, as Marx observed in the Grundrisse. AI voices are but the latest in this history of exploitative capitalist practices.

This paper examines the history and current state of AI voice assistants, grappling with the alienation caused by increasingly automated solutions to voice work. Voice actors are often gig workers without A-list fame and power, and they are increasingly asked to provide voices for the very AI clones that will replace them. Further, once actors record their voices for a project, the company that owns the recordings can sell them on, often without the actors’ consent or compensation; this is how voice actor Susan Bennett unexpectedly heard herself as the voice of Apple’s Siri. This technology’s entanglement with capitalism raises pressing questions about the nature of labor: can AI clones be said to perform voice “work” separate from the actors who lend them their voices? What protections are necessary to ensure that actors are paid fairly for the continued work their clones do, even when those voices are “owned” by tech companies? As AI voices become more entrenched in society and continue to boost tech industry profits, recognizing and protecting the value of voice work is essential for fair labor conditions and compensation in our exploitative, techno-capitalist society.