Rhythmic Qualities of Jazz Improvisation Predict Performer Identity and Style in Source-Separated Audio Recordings

Accepted version
Peer-reviewed


Abstract

Great musicians have a unique style and, with training, humans can learn to distinguish between these styles. What differences between performers enable us to make such judgments? We investigate this question by building a supervised-learning model that predicts performer identity from data extracted automatically from an audio recording. Such a model could be trained on all kinds of musical features, but here we focus specifically on rhythm, which (unlike harmony, melody, and timbre) is relevant for any musical instrument. We demonstrate that a supervised-learning model trained solely on rhythmic features extracted from 300 recordings of ten jazz pianists correctly identified the performer in 59% of cases, six times better than chance. The most important features related to a performer’s “feel” (ensemble synchronization) and “complexity” (information density). Further analysis revealed two clusters of performers, with those in the same cluster sharing similar rhythmic traits, and showed that the rhythmic style of each musician changed relatively little over the course of their career. Our findings highlight the possibility that artificial intelligence can carry out performer-identification tasks normally reserved for experts. Links to each recording and the corresponding predictions are available on an interactive map to support future work in stylometry.

Description

Journal Title

Royal Society Open Science

Conference Name

Journal ISSN

2054-5703

Volume Title

Publisher

The Royal Society

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International
Sponsorship
Cambridge Trust Vice-Chancellor's Award