Conference Agenda

The Online Program of events for the 2023 AMS & SMT Joint Annual Meeting appears below. This program is subject to change. The final program will be published in early November.

Use the "Filter by Track or Type of Session" or "Filter by Session Topic" dropdown to limit results by type.

Use the search bar to search by name or title of paper/session. Note that this search bar does not search by keyword.

Click on the session name for a detailed view (with participant names and abstracts).

Session Overview

Session: SMT Poster Session
Time: Friday, 10 Nov 2023, 8:00am–9:30am
Location: Columbine
Session Topics: SMT

Presentations

Investigating Relationships among Mindset, Rapport, and Belonging in Undergraduate Music Theory Learners

Benjamin Dobbs¹, Shana Southard-Dobbs²

¹Furman University; ²Lander University

Recent work in the scholarship of teaching and learning indicates that students’ mindset (beliefs about their ability to learn; Yeager and Dweck 2020), student-instructor rapport (Frisby and Martin 2010; Webb and Barrett 2014), and sense of belonging (Wilson et al. 2015) are important factors for achieving learning outcomes. These factors and their interrelationships, however, have not been explored quantitatively in music-theory learners. Using data collected between fall 2022 and fall 2023 from students at universities across the United States, our research addresses this gap, investigating levels of mindset, rapport, and belonging for undergraduate students at all levels of the music theory curriculum, and examining correlations among these factors.

Overall, participants reported a low level of entity mindset (i.e., fixed mindset, in which the ability to learn new things is not believed to be malleable) and a high level of incremental mindset (i.e., growth mindset, in which the ability to learn new things is believed to be malleable) for both written theory and aural skills, though the contrast in mindset was stronger for written theory. Participants also reported high levels of rapport and belonging. For both written theory and aural skills, incremental mindset correlated positively with rapport and belonging, while entity mindset correlated negatively with rapport and belonging.
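
A minimal sketch of the kind of correlation computation described above, using hypothetical scale scores rather than the authors' survey data (the variable names and values are illustrative assumptions):

```python
import numpy as np

# Hypothetical per-student scale scores (averages of Likert-type items); not the authors' data.
incremental_mindset = np.array([4.2, 3.8, 4.6, 3.1, 4.9, 4.0])
rapport             = np.array([4.0, 3.5, 4.8, 2.9, 4.7, 3.9])
belonging           = np.array([4.4, 3.6, 4.5, 3.0, 4.8, 4.1])

labels = ["incremental mindset", "rapport", "belonging"]
r = np.corrcoef([incremental_mindset, rapport, belonging])  # Pearson correlation matrix

for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]} vs. {labels[j]}: r = {r[i, j]:.2f}")
```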

In our poster, we discuss additional findings for participants at different levels of study in the undergraduate music theory curriculum and examine how these measures correlate with demographic markers and learning-environment factors (e.g., institution type, institution size, and class size). We also provide an interdisciplinary model for conducting scholarship of teaching and learning in a music-theory learning context.



Metric Irregularity as Characterization in Death Note (2006)

Thomas Charles Collison

Indiana University

The soundtrack from the 2006 anime Death Note interacts with a listener's perception of rhythm and meter in a number of sophisticated ways. Metric and rhythmic irregularities pervade the entirety of the soundtrack, and their prominence, combined with the specific musical and thematic settings in which they arise, strongly indicates that they are part of a broader system of compositional design that seeks to instill particular effects in listeners. In this presentation, I provide analyses of several themes from the soundtrack, discussing instances of metric irregularity at both micro- and macro-levels. Observing continuities in its application across multiple themes in the soundtrack, I argue that metric irregularity is used in specific ways as a means of characterization for the primary protagonist (Light Yagami, a.k.a. Kira) and antagonist (Detective L), as well as the ethereal spirits of death known as shinigami. Accordingly, I categorize instances of irregularity as belonging to an "L Group," "Kira Group," or "Shinigami Group" of characterization. L Group irregularities often occur in the musical foreground and disrupt listeners' immediate sense of metric stability, mirroring L's preoccupation with trickery and deception throughout the show; my analysis of these irregularities borrows extensively from models of listeners' processive projections of meter as described by Gretchen Horlacher (2001) and Christopher Hasty (1997). By contrast, Kira Group irregularities occur either at background levels or in ways that are less obtrusive to local metric stability, enticing listeners to adopt new metrical frameworks much as Kira encourages Death Note's world to adopt his sinister Machiavellian worldview; these are understood within the taxonomy of metrical dissonances outlined by Harald Krebs (1999). Finally, Shinigami Group irregularities arise mainly via timbral effects and references to musical styles like plainchant or aleatoricism, generating rhythmic phenomena that fall outside the bounds of the metrical consonance/dissonance spectrum. Just as the shinigami in the story belong to the realm of the dead but interact with the world of the living, Shinigami Group irregularities interact with (but do not adhere to) the expectations posed by "earthly" metrical structures.



Computational Analysis of Melodic Contour Based on CSIM and Clustering Techniques: A Model Tested by J. S. Bach's Preludes in Cello Suites Nos. 2 and 3

Lizhou Wang

Indiana University

Traditional thematic analysis may encounter difficulty with a piece that lacks motivic crispness and formal articulation. For example, the preludes of J. S. Bach's Cello Suites Nos. 2 and 3 feature textural homogeneity and thematic fuzziness; their continuous musical flow makes the organization of their material hard to grasp. In this project, taking the two Bach preludes as examples, I developed a comprehensive computational model that analyzes such a melody effectively without the human analyst supplying any motivic information in advance. The model is based on Marvin and Laprade's COM-matrix/CSIM algorithm, which quantifies contour information and calculates contour similarity; it also relies on unsupervised clustering techniques to categorize the contours. Before the main analytical process, the model slices a melody into regular segments at two levels (for example, beat and bar levels) and unifies the cardinality at each level. The system then generates the contour matrix of each segment. There are three main analytical modules. Module One uses CSIM values both to recognize the most representative segment and to evaluate the level of monothematicism; a low monothematic level triggers the system to look for a second representative segment. Module Two focuses on macro-level contours. It uses clustering techniques to separate contours into an appropriate number of groups, then evaluates the cohesiveness of each group using clustering parameters and the CSIM algorithm; "sparse" data points are labeled and flagged for extra attention from the analyst. Module Three is technically similar to the previous module, except that it addresses the organization of contours at the micro-level. This module is also used to further explore what the analyst obtains in the previous modules, such as detecting subtle patterns within a Module Two group or the similarities and differences among prominent Module One contours. Overall, through the cooperation between the COM-matrix/CSIM algorithm and clustering techniques, the model shows in detail the distribution of, and interactions among, melodic contours at different levels, producing both factual data and interpretive potential for the analyst.
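
For readers unfamiliar with the CSIM measure, the following is a minimal sketch of the Marvin and Laprade comparison applied to two hypothetical contour segments (the segments are illustrative and are not drawn from the Bach preludes):

```python
from itertools import combinations

def com_matrix(contour):
    """Upper-triangle entries of the COM-matrix (Marvin and Laprade 1987):
    +1 if the later element is higher, -1 if lower, 0 if equal."""
    def sign(x):
        return (x > 0) - (x < 0)
    return [sign(contour[j] - contour[i]) for i, j in combinations(range(len(contour)), 2)]

def csim(a, b):
    """Contour similarity: proportion of matching upper-triangle entries.
    The two contours must have the same cardinality."""
    if len(a) != len(b):
        raise ValueError("CSIM compares contours of equal cardinality")
    ma, mb = com_matrix(a), com_matrix(b)
    return sum(x == y for x, y in zip(ma, mb)) / len(ma)

# Hypothetical beat-level segments reduced to contour integers (not from the Bach preludes)
segment_1 = [0, 3, 2, 1]   # contour <0 3 2 1>
segment_2 = [0, 2, 3, 1]   # contour <0 2 3 1>
print(round(csim(segment_1, segment_2), 3))  # 0.833: five of six pairwise relations agree
```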



The “Colors” of Parsimony in Cohn’s Reinterpreted Tonnetz

M. A. Coury-Hall

New York City, NY

Groupoids can effectively tile sections of the neo-Riemannian Tonnetz based on the trivial and non-trivial “alleys” most recently discussed in the literature under the topic of homotopy by Tymoczko (2020). These neo-Riemannian groupoids extend the conceptual framework initiated by Brower (2008) by constructing geometric shapes of music-theoretic significance within the Tonnetz. The coloring technique assigns a primary color to Tonnetz tiles that preserve certain voice-leading properties such as parsimony: hence, a subtractive visual color is assigned to a music-theoretic “color.” The RYB color wheel determines how combined invariants are represented in the tiles. The innovation in this paper recognizes that Brower’s three overlapping music-theoretic regions identified as hexatonic, octatonic, and key spaces, once extricated from their group-theoretic origins, can be more completely characterized through the algebraic properties of groupoids, resulting in a unified formalism of voice-leading invariants. This is a practical application of Tymoczko's recent rejection of transformational closure as an organizational principle in music theory because the musical significance of any two basic transformations cannot guarantee the musical significance of any combination of these transformations.

Schubert’s seventh song Auf dem Flusse from Winterreise (D. 911, No. 7) uses all three alleys to structure the piece as seen in the paradigmatic analysis of Cohn (2012). The groupoids constructed from these alleys can resolve conflicting interpretations of the piece’s harmony given by Lewin (1982), Newcomb (1986), and Damschroder (2010), once the appropriate groupoids are assigned to Schubert’s text-setting, now formalized as the “colors” of his musical praxis.
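
As background to the parsimony the tiles encode, the sketch below shows the standard neo-Riemannian operations P, L, and R acting on pitch-class triads; this is a generic illustration of the single-voice moves that connect Tonnetz neighbors, not the author's groupoid or coloring formalism:

```python
# Standard neo-Riemannian P, L, and R on pitch-class triads (0 = C). This shows the
# single-voice moves underlying Tonnetz adjacency; it is not the author's groupoid
# or coloring formalism.
NOTE = "C C# D Eb E F F# G Ab A Bb B".split()

def triad(root, quality):
    third = 4 if quality == "major" else 3
    return {root % 12, (root + third) % 12, (root + 7) % 12}

def P(root, quality):  # Parallel: C major <-> C minor
    return root, ("minor" if quality == "major" else "major")

def L(root, quality):  # Leittonwechsel: C major <-> E minor
    return ((root + 4) % 12, "minor") if quality == "major" else ((root + 8) % 12, "major")

def R(root, quality):  # Relative: C major <-> A minor
    return ((root + 9) % 12, "minor") if quality == "major" else ((root + 3) % 12, "major")

start = (0, "major")  # C major
for name, op in [("P", P), ("L", L), ("R", R)]:
    new = op(*start)
    changed = triad(*start) ^ triad(*new)  # old and new pitch classes of the moving voice
    print(f"{name}: C major -> {NOTE[new[0]]} {new[1]}; moving voice pcs {sorted(changed)}")
```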



The Antiphonal Stream in Popular Music

David Forrest

Texas Tech University

Allan Moore describes four textural layers in popular song: melody, harmony, bass, and beat (Moore 2012). In many songs, the melodic material involves two or more parts in dialogue with each other. Such material occurs in a wide range of styles and includes call-and-response activity that scholars connect to the influence of African-diasporic music (Keyes 2002, Stover 2009). To highlight this important textural element, this paper divides Moore's melodic layer into two streams: lead and antiphony. The antiphonal stream includes any material that directly responds to the lead melody. In the interest of developing a broadly applicable tool, this paper consolidates rap's flow layer into the lead stream and Lavengood's 1980s novelty layer into the antiphonal stream (Lavengood 2020, Duinker 2021).

This paper highlights the presence of antiphonal material across a wide range of styles and demonstrates the analytical advantage of identifying its unique role, adding depth to the four-layer texture. Antiphonal material can be pitched, such as the trumpet response in Barry Manilow's "Sweet Caroline," or unpitched, like Ringo Starr's drum fills that respond to John Lennon's melody in the second verse of "A Day in the Life." The antiphonal stream incorporates elements that are often ignored by textural analyses, such as studio effects, hip-hop record scratches, and nonverbal, ad-lib vocables. Instruments sometimes change textural roles within a song. These changes often help define form, as in the common blues schema in which the guitar plays the lead melody during the intro and instrumental sections and comments antiphonally in the verses. The same gesture can fill multiple roles simultaneously. Investigating the textural function of backup singers frequently reveals dynamic relationships between lead and antiphonal streams and between melodic and harmonic layers, and highlights the roles of meter and verbal syntax in defining these relationships.

This paper explores the vital role of the antiphonal layer in songs by Aretha Franklin, The Ronettes, Billy Joel, LL Cool J, George Strait, Mariah Carey, the Backstreet Boys, Destiny’s Child, and BTS. The antiphonal stream is not present in all songs, but its ubiquity merits analytical categorization.



Animated Harmonic Analysis Using DFT Phase Spaces and Coefficient Products

Jason Yust¹, Giovanni Affatato³, Fabian C. Moss²

¹Boston University; ²Politecnico di Milano; ³Julius-Maximilians-Universität Würzburg

midiVerto is an interactive music analysis tool that applies the discrete Fourier transform to pitch-class vectors. It performs a windowed analysis of a MIDI file and displays the results of the DFT in wavescapes and coefficient spaces, showing how DFT coefficients change over the course of a passage and with changes in the size of the window.

A playback feature creates analytical animations by timing output to audio. In this digital poster we illustrate two new modules added to midiVerto: a coefficient product space and a phase space. Products of the 2nd, 3rd, and 7th (f2f3f7) and of the 3rd, 4th, and 5th (f3f4f5) coefficients illustrate typical features of functional harmony, especially a tendency toward coherence (phase values staying close to zero) in the f2f3f7 space but not in the f3f4f5 space, and an association of the imaginary dimension with mode. Modulations are visible in the phase space on the 3rd and 5th coefficients. The ability to produce animated analyses timed to audio output provides a new medium for analytical interaction with music.
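
The underlying computation can be sketched as follows; this is an illustration of the DFT on a pitch-class distribution and of a coefficient product's phase, not the midiVerto implementation itself (the example window is a hypothetical C-major triad):

```python
import numpy as np

# Sketch only (not the midiVerto code): DFT of a 12-bin pitch-class distribution and
# the phase of the coefficient product f2*f3*f7.
def pc_dft(pc_weights):
    """pc_weights: length-12 array of pitch-class weights (e.g., durations in a window)."""
    return np.fft.fft(np.asarray(pc_weights, dtype=float))  # coefficients f[0]..f[11]

window = np.zeros(12)        # hypothetical analysis window: a C-major triad (C, E, G)
window[[0, 4, 7]] = 1.0
f = pc_dft(window)

f2f3f7 = f[2] * f[3] * f[7]
print("magnitudes |f2|, |f3|, |f7|:", [round(abs(f[k]), 3) for k in (2, 3, 7)])
print("phase of f2*f3*f7 (radians):", round(float(np.angle(f2f3f7)), 3))
```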

Handout: Animated Harmonic Analysis Using DFT Phase Spaces and Coefficient Products-Yust-615_Handout.pdf


Contextualizing Hildegard of Bingen’s Compositional Style through Computational Analysis

Jennifer Bain¹, Kate Helsen², Mark Daley², Jake Schindler²

¹Dalhousie University; ²Western University

Our study contextualizes Hildegard of Bingen's compositional style by probing her repertory in relation to other medieval plainchant. Using computational n-gram analysis, our methodology contrasts with earlier manual methods, including studies by Pothier (1898, 1899, 1908, and 1909), Bronarski (1922), Pfau (1990), Fassler (1998), Stühlmeyer (2003), Pfau and Morent (2005), and Bain (2008 and 2009). Testing earlier theories about Hildegard's style and identifying previously unquantified features, our paper applies n-gram analysis to two datasets of computer-readable melodies: (1) the LMLO, the 6000+ melodies from Andrew Hughes' Late Medieval Liturgical Offices (1994 and 1996); and (2) our "HVat," comprising Hildegard's complete musical output, including the 83 melodies from the Ordo virtutum and the 104 melodies from the Symphoniae (77 chants, plus responsory and antiphon verses counted as separate melodies), as transcribed into the Cantus Database (cantus.uwaterloo.ca).

We compiled the top twenty 4-, 5-, 6-, 7-, and 8-grams in the HVat, ranking the results by number of appearances. We then compare these top-20 n-gram results with their number of appearances in the LMLO, representing the differences as a ratio of x:1, HVat to LMLO. Using these results, we focus on groups of n-grams related by ("diatonic") transposition and on n-grams that appear more frequently in HVat melodies than in the LMLO. These groupings reveal more significant differences in the rate of appearance of particular gestures in Hildegard's repertory relative to the "general" late medieval chant repertory captured in the LMLO.

Finally, we present a case study inspired by one 4-gram that appears 155 times in the HVat, twice as frequently as in the LMLO, featuring an ascending minor-third leap followed by stepwise descent. We investigate in the HVat (1) whether descending immediately after an upward leap of a minor third is typical, (2) whether both varieties of minor third (ST-T and T-ST) operate in the same way, and (3) whether ascending major thirds function in the same way.
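
The counting-and-comparison step can be sketched as follows, with toy melodies standing in for the HVat and LMLO datasets (the melodies, the chosen 4-gram, and the resulting ratio are illustrative only):

```python
from collections import Counter

def ngram_counts(melodies, n):
    """Count pitch n-grams across a list of melodies (each a list of pitch names)."""
    counts = Counter()
    for melody in melodies:
        for i in range(len(melody) - n + 1):
            counts[tuple(melody[i:i + n])] += 1
    return counts

def relative_freq(counts, gram):
    total = sum(counts.values())
    return counts[gram] / total if total else 0.0

# Toy stand-ins, not the actual HVat or LMLO melodies
hvat_toy = [["d", "f", "e", "d", "c"], ["a", "c", "b", "a", "g"]]
lmlo_toy = [["d", "f", "e", "d", "c", "b", "a", "g"], ["g", "a", "b", "a", "g", "f"]]

gram = ("d", "f", "e", "d")  # ascending minor third followed by stepwise descent
hvat, lmlo = ngram_counts(hvat_toy, 4), ngram_counts(lmlo_toy, 4)
ratio = relative_freq(hvat, gram) / relative_freq(lmlo, gram)
print(f"HVat:LMLO relative-frequency ratio for {gram}: {ratio:.1f}:1")
```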



Emo Guitar Tunings: The Impact of Guitar Tunings on Fretboard Distances

Matt Chiu¹, Tyler M. Howie²

¹Baldwin Wallace University; ²University of Texas at Austin

Recent research on pop and rock music has focused on fretboard spaces, highlighting the gestural aspects of performance, and disrupting the traditional assumption that the interval from one pitch to another is always the same (Capuzzo, 2004; Gardner and Shea, 2022; Koozin, 2011; Rockwell, 2009; Shea, 2020). While much of this research works with “standard” guitar tuning (SGT), some scholarship examines “alternate” guitar tunings (AGTs), discussing their negative effects (Rover 2006) and/or the practical affordances they provide (Kaminsky and Lyons, 2020). Alternate tunings change not only the pitches of the strings, but also the intervals between them. In some styles of American emo music, AGTs have become the “unmarked” standard to the “marked” alternative of SGT. Emo’s AGTs are, moreover, often “open,” meaning the strings are tuned to a chord, creating consonance among the open strings.

This poster examines AGT fretboard spaces in the context of a stylistic riff type found in some emo music, nicknamed the "twinkle" schema (Howie and Chiu, 2022). First, it studies the historical role of AGTs in emo and how they relate to the genre's stylistic, "twinkling" riffs. Then, it examines songs in different tunings, measuring the Euclidean distances between pitches in terms of (1) staff notation, (2) standard guitar tuning, and (3) the AGT in which each riff is performed.

Finally, it uses statistical data to show how AGTs encourage accessibility in guitar performance, embodying in part the DIY (do-it-yourself) roots of American emo.
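
A rough sketch of the distance measurement under different tunings; the tunings, pitches, and fret range here are illustrative assumptions rather than the authors' corpus or method:

```python
import math

# Illustrative sketch: locate a MIDI pitch on the fretboard under a given tuning and
# measure Euclidean distance between (string, fret) positions. Tunings and pitches are
# assumptions for demonstration, not the authors' data.
STANDARD = [40, 45, 50, 55, 59, 64]   # E2 A2 D3 G3 B3 E4 (MIDI numbers)
OPEN_ALT = [41, 45, 48, 55, 60, 64]   # F2 A2 C3 G3 C4 E4, an "open" alternate tuning

def positions(midi_pitch, tuning, max_fret=15):
    """All (string, fret) positions where midi_pitch is playable in this tuning."""
    return [(s, midi_pitch - open_string)
            for s, open_string in enumerate(tuning)
            if 0 <= midi_pitch - open_string <= max_fret]

def closest_move(pitch_a, pitch_b, tuning):
    """Pair of positions with the smallest Euclidean distance in (string, fret) space."""
    pairs = [(a, b) for a in positions(pitch_a, tuning) for b in positions(pitch_b, tuning)]
    return min(pairs, key=lambda p: math.dist(*p))

d4, fs4 = 62, 66  # the same written interval (D4 -> F#4) measured in both tunings
for name, tuning in [("standard (EADGBE)", STANDARD), ("open alternate (FACGCE)", OPEN_ALT)]:
    a, b = closest_move(d4, fs4, tuning)
    print(f"{name}: {a} -> {b}, distance {math.dist(a, b):.2f}")
```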



Measuring the Uncanny: Chromatic Mediant Motion in Elliott Smith's XO

Devin Ariel Guerrero, Brad Cawyer

Texas Tech University

Building on research that associates chromatic-mediant motion with Freud’s Uncanny, this paper utilizes converging methods to examine text painting in Elliott Smith’s XO. Like previous studies, one method provides a close comparison between Smith’s lyrics and moments with chromatic-mediant harmonic motion. The other method—following Sears and Forrest (2021)—compares root motion in Smith’s XO to conventional and characteristic ordered triadic chord pairings (bigrams) in the combined McGill Billboard and Rolling Stone-200 corpora. This paper demonstrates quantitatively that Smith’s lyrics and harmonies integrate a greater degree of uncanniness than the standard pop repertoire, highlighting the need for post-millennial popular music corpus studies.

Harmonic progression data for the combined RS-200 and Billboard corpus, which contains 921 unique songs drawn from five decades of popular music and catalogues over 100,000 chords, were filtered for triadic bigrams using the Triadic Harmony Analysis Tool. Compared with the relative frequencies in the combined corpus, chromatic-mediant bigrams occur 3.33 times more frequently in XO than in songs from the extant corpora. The six different types of chromatic-mediant bigrams used in XO occur between 3 and 49 times more frequently than in the combined corpus.
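
The frequency comparison can be sketched as follows, with made-up Roman-numeral progressions standing in for the corpus and the album (the resulting numbers are illustrative, not the study's figures):

```python
def bigrams(chords):
    """Ordered chord pairs in a single progression."""
    return list(zip(chords, chords[1:]))

def rel_freq(songs, wanted):
    """Share of all bigrams across the songs that belong to the 'wanted' set."""
    grams = [g for song in songs for g in bigrams(song)]
    return sum(g in wanted for g in grams) / len(grams)

# Made-up progressions, not the McGill Billboard / RS-200 data or the songs on XO
corpus_songs = [["I", "IV", "V", "I"], ["I", "bVI", "IV", "V", "I"]]
xo_songs     = [["I", "bVI", "I", "vi", "bVI", "I", "IV"]]
chromatic_mediant_bigrams = {("I", "bVI"), ("bVI", "I")}  # the bigram types counted here

ratio = (rel_freq(xo_songs, chromatic_mediant_bigrams)
         / rel_freq(corpus_songs, chromatic_mediant_bigrams))
print(f"XO vs. corpus chromatic-mediant bigram ratio: {ratio:.2f}:1")
```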

The track “Tomorrow, Tomorrow” exemplifies the connection between chromatic mediants and lyrics in XO. The conflicted inner dialogue of Smith’s lyrics express uncertainty, self-doubt, and misunderstanding: “They took your life apart and called your failures art. They were wrong, though. They won’t know ‘til tomorrow.” An explicitly uncanny moment of misunderstanding occurs when the narrator denies that his struggles led to his creative output, believing “they” (his audience) got it wrong and implying that “they” do not yet know his worst failures: “They won’t know ‘til tomorrow.” These lines of troubled introspection that suggest an impending tragedy are saturated with a remarkably higher frequency of chromatic-mediant harmonization than is normal in popular music.

The language structure of Smith's lyrics brings the audience into the narrator's experience. Alongside them, chromatic-mediant motion contributes to the listener's own sympathetic experience of doubt. Our analyses provide a unique resource that augments the progressions conventional to the popular corpus: chromatic-mediant-rich subsets.



Them bars really ain't hittin' like a play fight: Analysing Weak Alternative Lineations and Ambiguous Lineation in Relation to Metrical Structure in Rap Flows

Kjell Andreas Oddekalv

RITMO, University of Oslo

As the common vernacular of hip-hop changed and "bars" became the newest synonym for "well-constructed lyrical lines," "nicely structured rap verse," or even "dope flow," it could well be argued that the theoretical and terminological confusion was complete. Bars (as synonymous with "measures") are not the same as lines, are they? The linguistic drift is understandable, however, as the relationship between the two (in a rap context at least) is intricate and interconnected enough that it makes some sense to consider them as one compound concept. In this paper, they will be pried apart again: their relationship explored, and the rap techniques found in their intersections explained.

The structuring of rap's lines has been a focus of rap analysis since the field's inception. Whether the exact term "lines" or sibling terms like "phrase" have been used, music scholars have identified the interactions among line, phrase, measure, and metre as central to the rhythmic techniques of rap flows. Scholars in linguistics and literature have made similar analytical forays, and it seems that the intersection of these disciplines is where future analysis of the intersection of lines and measures will occur.

This paper follows Fabb's (2002) argument that lineation, the act of dividing text into lines, is an implied form. What the listener experiences as the boundary of a line is determined by triangulating various kinds of evidence for lineation. The most prominent are linguistic syntax and primary rhyme position, but breathing pauses, various rhythmic parallelisms, melodic contour, performative delivery, and more can also function as evidence for lineation. When different evidence points to different lineations, there will be cases of weak alternative lineations or even fully ambiguous lineation, which this paper argues are central to the aesthetic expression of rap.

The main analytical material will be excerpts from verses by OutKast emcee André 3000, showcasing how the temporal nature of (recorded) music invites successive interpretations and reinterpretations of lineation structure by listeners.



Planting Another Tree: Relational Salience as a Hierarchical Form-Building Mechanism

Morgan Patrick

Northwestern University

A Generative Theory of Tonal Music (Lerdahl and Jackendoff 1983) has spurred interest in extending prolongational structure beyond tonal stability relationships, even reaching poetry and narrative (Lerdahl 2007, 2022; Antović 2022; Margulis et al. 2022). An underexplored link between form and tension across music and narrative is the ebb and flow of similarity and change, which Leonard Meyer and Eugene Narmour recognized to influence musical affect alongside tonality (Meyer 1956; Narmour 1980).

Moving beyond tonal tension, this paper posits a narratively oriented model of formal patterning based on relational salience, in which structural parallelism accentuates variation and change relative to preceding musical material during real-time listening. Schoenberg (1994), Meyer (1956), Keller (1970), and Ruwet (1990) implied (but never formalized) this concept in hierarchical terms, though it is well established in psychological studies of similarity (Markman & Gentner 1997; Gentner 2010). Here, timespans are assigned syntactic functions based upon how they inflect contrast, and the resulting hierarchic constituents trace canonical arcs of tension and relaxation.

I show how resemblances between this model and models of discourse structure and Visual Narrative Grammar (“VNG,” Cohn 2013) provide a different kind of tree structure than typical approaches to prolongation or musical salience (Lerdahl 1989). These connections to narrative allow us to conceptualize musical timespans as phases of a narrative arc, which can balance between the generality of BME recursion and the specificity of classical formal and prolongational syntaxes (Caplin 1998; Lerdahl 2001). In turn, this reveals a more abstract set of relational schemata unfolding across varying temporal window sizes and musical styles, such as the repetition-break plot structure (Loewenstein & Heath 2009) and the same-except cognitive relation (Culicover & Jackendoff 2009).

In sum, this model recasts traditional conceptions of hierarchy by privileging change rather than stability as the governing element of tonal-temporal patterns. It lays the foundation for a narrative model of real-time syntactic processing of musical similarity without reference to, or mediation from, extra-musical narrativity (cf. Margulis et al. 2022). Finally, by formalizing musical parallelism within the psychology of similarity, this approach provides a domain-general interface between intra-musical affect and other form-bearing parameters.



Choral Repertoire: Promising New Directions for Music Theory Teaching

Meghan Hatfield

Utah State University

Choirs are an integral part of music departments and schools, particularly at institutions with large choral education programs. In its standards for music education, the 2022 NASM handbook states that “Teachers should be prepared to relate their understanding of music… both in general and as related to their area(s) of specialization.” Yet despite the large number of students participating and/or specializing in choir, choral music is relatively rare in music theory textbooks. Perhaps as a result, research has shown that high school choir directors struggle with harmonic score study (Rowher et al. 2014) and, anecdotally, choir students and teachers are sometimes stereotyped as weak in music theory.

This poster will demonstrate how contemporary choral music can be applied to all levels of music theory instruction, using a comprehensive list of examples by a diverse range of composers. The examples demonstrate standard music theory topics such as chromatic chords, modulation, and form; understanding these concepts within the context of choral music can give them greater meaning and accessibility because of how often they relate to the expression of a text.

Incorporating choral music has the added benefit of bringing a diverse set of peoples and musics into the repertoire studied. With examples diversified in both composer and repertoire, students will be better equipped to apply music theory beyond the college classroom. Studying choral music will help choral education majors apply music theory and analysis to their own practices and repertoire, helping them understand the music of their specialization on a different level and allowing them to teach their students the same skills through the music they sing.



 