DM as DJ?: Understanding the Dungeon Master as Player through Ludomusicality
Haley Heinricks
Harvard University
In the popular tabletop role-playing game (TTRPG) Dungeons & Dragons (D&D), a Dungeon Master (DM) leads a group of Players through a partially improvised narrative adventure. While both the DM and Players are “playing D&D,” the DM is differentiated and assumes a role akin to a “game engine”: they design and describe the game’s world, referee game rules, manage enemy forces during combat, and control elements of game atmosphere, including music and sound. As the Players enter a tavern, the DM might cue up crowd murmurs and jaunty diegetic lute music; when a brawl begins, the DM might shift to non-diegetic music with percussion and a fuller string section. Curating an appropriate selection of music can be time-consuming. Fortunately, the recent “D&D renaissance” (Whitten 2020; Stitch 2021) has brought about dedicated TTRPG music applications such as Pocket Bard that streamline the processes of compilation and real-time implementation.
In this paper, through an analysis of the Pocket Bard application, I propose that a DM’s use of music and sound during a session makes apparent their parallel status as “player.” I draw upon Mark Butler’s (2014) exploration of live improvisation in DJ and laptop sets, Roger Moseley’s (2016) ludomusical digital analogies, and Jonathan De Souza’s (2017) organological examination of instrumental interfaces in order to understand Pocket Bard as an instrument and reframe the relationship between a DM and play.
Pocket Bard features music specifically composed for use during TTRPGs. Music in the app is organized by location, like “town” or “forest,” and each location features two primary tracks—“exploration” and “combat”—and an “intensity” slider that manipulates orchestration. A simple, customizable interface allows the DM to easily move between tracks as in-game circumstances require (like a DJ might), layer diegetic sound (such as weather) over the active music track, and trigger single sounds (like an explosion). By understanding Pocket Bard as an instrument and the DM its player, I suggest a path forward for music-theoretical considerations of agency in environmental sounds, building upon Moseley’s “ludomusicality” (2016).
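Pocket Bard’s implementation is not public, but the “intensity” slider described above suggests the vertical-layering design familiar from adaptive game audio. The following Python sketch is a hypothetical illustration of that idea (the function, stem ordering, and crossfade scheme are assumptions, not the app’s actual code): it blends pre-rendered stems, ordered from sparse to full orchestration, according to a single intensity value.

    import numpy as np

    def mix_stems(stems, intensity):
        """Blend equal-length audio stems (ordered sparse -> full
        orchestration) by an intensity value between 0.0 and 1.0.
        Hypothetical sketch, not Pocket Bard's actual algorithm."""
        pos = intensity * (len(stems) - 1)   # position within the stem list
        lo = int(pos)
        hi = min(lo + 1, len(stems) - 1)
        frac = pos - lo
        # Equal-power crossfade between the two neighboring stems.
        return (np.cos(frac * np.pi / 2) * stems[lo]
                + np.sin(frac * np.pi / 2) * stems[hi])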
Musical Function and Meaning in the Japanese Role-Playing Game
Alan Elkins
Cleveland Institute of Music
Role-playing games, or RPGs, have received significant scholarly attention due to their strong emphasis on the relationship between music, story, and gameplay. William Gibbons and Steven Reale (2019) note that RPGs tend to have more complex storylines and soundtracks than most other video game genres, using music to provide supplemental information to the player about characterization and narrative. Music also aids in world-building by communicating information about the current environment—for instance, whether the player has just taken refuge in a quiet village or is instead deep in an underground labyrinth, with monsters lurking around every corner.
In this paper, I will discuss the musical conventions of RPGs, with a focus on titles developed for the Japanese market (“JRPGs”) in the late 1980s and early 1990s. I will begin by outlining four common cue types—towns, fields, dungeons, and battles—and their most frequently used subtypes, discussing the melodic, harmonic, and timbral conventions that were established early in the genre’s history and ways in which those conventions evolved over time. Then, I will examine several cues from the mid-1990s that deliberately subverted previously established conventions to convey narrative information to the player—for instance, through the combination of features normally found in two separate cue types, or the use of a single cue for two different gameplay functions at key moments in a game’s plot.
While much previous scholarship on RPGs has focused on individual titles, a deeper examination of genre-wide conventions can shed further light on the ways that composers adhere to—or subvert—player expectations in the service of narrative. This may, in turn, inform future research and hermeneutic analysis of individual soundtracks.
Tracing Percussion Orchestration in Early NES Soundtracks (1983-1987): Understanding Coded Parameters Through FFT Analysis
Joseph T Chang
McGill University; Center for Interdisciplinary Research in Music Media and Technology
“Creativity within constraint” has become a defining phrase in discussions of 8-bit video game music, from foundational publications by Collins (2008a, 2008b) to recent scholarship by Grasso (2020), Anatone (2022), and Elkins (2023). Music written for the Nintendo Entertainment System, the most widely distributed console of the 1980s and 1990s (Sheff 1994), exemplifies this notion. Composers had only five voices with which to orchestrate their compositions: two pulse waves, one triangle wave, one noise channel, and a Delta Modulation Channel (Schartmann 2015, 2018). Among these, the noise channel offers a key perspective on compositional strategies, particularly through its role in percussion and sound effects.
Early NES composers faced unique challenges in orchestrating percussion, compounded by the difficulty of programming and the task of emulating the diverse timbres of percussion instruments within a single noise channel. McAlpine (2019) reveals that composers could manipulate three timbral parameters: frequency, duration, and dynamics, each with sixteen possible modes. Altering just one of these parameters can create subtle nuances, such as the accented hi-hats in Yukio Kaneoka’s Mario Bros. (1983). As the noise channel became more standardized for percussion, its timbres evolved between 1983 and 1985 into a drum kit consisting of a snare drum, kick drum, and hi-hat. This progression is evident in the works of Hirokazu Tanaka, from Duck Hunt (1984) to Donkey Kong 3 (1984) and Wrecking Crew (1985).
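To make the frequency parameter concrete, the short Python sketch below derives the sixteen available noise rates. It assumes the NTSC timer-period table documented on the NESdev wiki, an assumption of this illustration rather than a detail drawn from McAlpine (2019).

    CPU_HZ = 1_789_773  # NTSC NES CPU clock, roughly 1.79 MHz

    # 4-bit period index -> timer period in CPU cycles (NESdev wiki, NTSC).
    NOISE_PERIODS = [4, 8, 16, 32, 64, 96, 128, 160,
                     202, 254, 380, 508, 762, 1016, 2034, 4068]

    for index, period in enumerate(NOISE_PERIODS):
        # Rate at which the noise shift register is clocked for this setting.
        print(f"index {index:2d}: {CPU_HZ / period:9.1f} Hz")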
Despite its complexity, the noise channel remains underexplored in ludomusicology, partly due to the limitations of Western notation in capturing these parameters. While learning assembly language provides one route to analysis, Fast Fourier Transform (FFT) analysis offers a more accessible alternative that requires no programming knowledge. Waveforms, spectrograms, and frequency spectra are analyzed in combination to pinpoint the exact values of the three noise-channel parameters. Furthermore, forming larger datasets from each composer’s parameter choices helps identify recurring trends, uncovering how composers gradually developed their compositional techniques under technological constraints. When complemented with published interviews (Tanaka 2014), these idioms reveal how composers approached each title—whether they refined their techniques over multiple games or mastered these parameters from the outset.
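As a rough illustration of that workflow (the filename, excerpt boundaries, and analysis settings below are hypothetical, and the paper’s own procedure may differ), a spectrogram can localize a percussion hit in time, and an FFT of the isolated hit can then be summarized and compared across hits:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, audio = wavfile.read("nes_percussion_excerpt.wav")  # hypothetical file
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold stereo captures to mono

    # Spectrogram: a time-localized view for finding hit onsets and durations.
    freqs, times, power = spectrogram(audio, fs=rate, nperseg=1024)

    # FFT of one isolated hit (boundaries read off the spectrogram by eye).
    hit = audio[int(0.50 * rate):int(0.56 * rate)]
    spectrum = np.abs(np.fft.rfft(hit * np.hanning(len(hit))))
    bin_freqs = np.fft.rfftfreq(len(hit), d=1 / rate)

    # A summary statistic such as the spectral centroid separates, e.g.,
    # hi-hat-like settings (high centroid) from kick-like ones (low centroid).
    centroid = (bin_freqs * spectrum).sum() / spectrum.sum()
    print(f"spectral centroid: {centroid:.1f} Hz")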