
Computational musicology

Computational musicology is an interdisciplinary research area between musicology and computer science.[1] Computational musicology includes any discipline that uses computation to study music. It includes sub-disciplines such as mathematical music theory, computer music, systematic musicology, music information retrieval, digital musicology, sound and music computing, and music informatics.[2] As this area of research is defined by the tools it uses and its subject matter, research in computational musicology intersects with both the humanities and the sciences. The use of computers to study and analyze music generally began in the 1960s,[3] although musicians had already been using computers to assist in the composition of music since the 1950s. Today, computational musicology encompasses a wide range of research topics dealing with the multiple ways music can be represented.[4]

Applications

Music databases

One of the earliest applications of computational musicology was the creation and use of musical databases. Entering, managing, and analyzing large amounts of musical data is laborious by hand, whereas computers make such tasks considerably easier.
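As an illustration, the sketch below shows a toy in-memory catalogue with a small query function; the Work schema, the example entries, and the search helper are hypothetical, written only to suggest how such lookups might be automated in Python:

# A minimal sketch of a symbolic music "database": a list of metadata
# records with a simple query function. The schema and entries here are
# hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Work:
    composer: str
    title: str
    year: int
    key: str

WORKS = [
    Work("J. S. Bach", "Chorale BWV 269", 1725, "G major"),
    Work("F. Chopin", "Mazurka Op. 17 No. 4", 1833, "A minor"),
    Work("C. Schumann", "Piano Trio Op. 17", 1846, "G minor"),
]

def search(records, **criteria):
    """Return records whose fields contain every given criterion (case-insensitive)."""
    def matches(rec):
        return all(str(value).lower() in str(getattr(rec, field)).lower()
                   for field, value in criteria.items())
    return [rec for rec in records if matches(rec)]

if __name__ == "__main__":
    for work in search(WORKS, composer="bach"):
        print(work.title, work.year)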

Analysis of music

Various computer programs have been developed to analyze musical data, in formats ranging from standard notation to raw audio. Analysis of formats that store the properties of each note, such as MIDI, was used first and remains among the most common approaches, while significant advances in the analysis of raw audio data have been made only more recently.
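For note-based formats, a common first analysis step is tallying which pitches occur. The sketch below builds a pitch-class histogram from a MIDI file's note-on events; it assumes the third-party mido package, and the file name example.mid is a hypothetical placeholder:

# Minimal sketch: build a pitch-class histogram from a MIDI file's note-on
# events, a common first step in note-level analysis. Assumes the
# third-party `mido` package; the file path is a hypothetical placeholder.
from collections import Counter

import mido

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(path):
    counts = Counter()
    for msg in mido.MidiFile(path):
        # A note-on with zero velocity is conventionally treated as a note-off.
        if msg.type == "note_on" and msg.velocity > 0:
            counts[msg.note % 12] += 1
    return {PITCH_CLASSES[pc]: n for pc, n in sorted(counts.items())}

if __name__ == "__main__":
    print(pitch_class_histogram("example.mid"))  # hypothetical file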

Artificial production of music

Different algorithms can be used both to create complete compositions and to improvise music. One way a program can learn to improvise is by analyzing the choices a human player makes while improvising. Artificial neural networks are used extensively in such applications.
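As a deliberately simple stand-in for the neural-network models mentioned above, the sketch below learns a first-order Markov chain of pitch-to-pitch transitions from example melodies and samples from it to "improvise" a new line; the training melodies and parameters are made up for illustration:

# Sketch of learning to improvise from examples: a first-order Markov chain
# over MIDI pitches, a deliberately simple stand-in for the neural-network
# models mentioned in the text. The training melodies are placeholders.
import random
from collections import defaultdict

def train(melodies):
    """Count pitch-to-pitch transitions observed in the training melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def improvise(transitions, start, length=16, seed=None):
    """Generate a new line by sampling the learned transitions."""
    rng = random.Random(seed)
    note, line = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:      # dead end: restart from the opening note
            note = start
        else:
            note = rng.choice(choices)
        line.append(note)
    return line

if __name__ == "__main__":
    examples = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
                [60, 64, 67, 72, 67, 64, 60]]
    print(improvise(train(examples), start=60, seed=1))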

Historical change and music

One developing sociomusicological theory in computational musicology is the "Discursive Hypothesis" proposed by Kristoffer Jensen and David G. Hebert, which suggests that "because both music and language are cultural discourses (which may reflect social reality in similarly limited ways), a relationship may be identifiable between the trajectories of significant features of musical sound and linguistic discourse regarding social data."[17] According to this perspective, analyses of "big data" may improve our understandings of how particular features of music and society are interrelated and change similarly across time, as significant correlations are increasingly identified within the musico-linguistic spectrum of human auditory communication.[18]
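In practice, such trend comparisons often reduce to correlating yearly trajectories. The toy example below computes a Pearson correlation between a musical feature series and a text-derived series; both series are fabricated placeholders for illustration, not measurements from the cited studies:

# Toy illustration of the kind of trend comparison described above:
# correlating the yearly trajectory of a musical feature with a trajectory
# derived from text. Both series are fabricated placeholders.
from statistics import correlation  # requires Python 3.10+

years = list(range(2000, 2010))
mean_tempo_bpm = [118, 119, 121, 120, 123, 125, 126, 128, 127, 129]          # hypothetical
lyric_arousal = [0.41, 0.40, 0.44, 0.43, 0.47, 0.49, 0.48, 0.52, 0.51, 0.53]  # hypothetical

r = correlation(mean_tempo_bpm, lyric_arousal)
print(f"Pearson r between the two yearly trajectories: {r:.2f}")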

Non-western music

Strategies from computational musicology have recently been applied to the analysis of music in various parts of the world. For example, professors affiliated with the Birla Institute of Technology in India have produced studies of harmonic and melodic tendencies (in the raga structure) of Hindustani classical music.[19]
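One simple melodic-tendency measure used in such studies counts transitions between scale degrees relative to the tonic (Sa). The sketch below is a toy version of that idea; the svara shorthand, the example phrase, and the tonic are illustrative assumptions, not data from the cited work:

# Toy sketch of a melodic-tendency measure for raga-based music: how often
# each scale degree (measured from the tonic, Sa) follows another.
# The phrase and tonic are made-up placeholders.
from collections import Counter

# One common shorthand for the 12 chromatic degrees above Sa
# (komal svaras in lowercase, tivra Ma as "M").
SVARA = ["S", "r", "R", "g", "G", "m", "M", "P", "d", "D", "n", "N"]

def bigram_profile(pitches, tonic):
    """Count transitions between scale degrees relative to the tonic."""
    degrees = [(p - tonic) % 12 for p in pitches]
    return Counter(zip(degrees, degrees[1:]))

if __name__ == "__main__":
    phrase = [62, 64, 66, 69, 71, 69, 66, 64, 62]  # hypothetical phrase, tonic D
    for (a, b), n in bigram_profile(phrase, tonic=62).most_common(5):
        print(f"{SVARA[a]} -> {SVARA[b]}: {n}")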

Research

The RISM (Répertoire International des Sources Musicales) database is one of the world's largest music databases, containing over 700,000 references to musical manuscripts. Anyone can use its search engine to find compositions.[20]


The Centre for History and Analysis of Recorded Music (CHARM) has developed the Mazurka Project,[21] which offers "downloadable recordings . . . analytical software and training materials, and a variety of resources relating to the history of recording."

Computational musicology in popular culture

Research from computational musicology is occasionally the focus of popular culture and major news outlets. For example, The New Yorker reported on how musicologists Nicholas Cook and Craig Sapp, while working at the Centre for the History and Analysis of Recorded Music (CHARM) at the University of London, discovered fraudulent recordings attributed to pianist Joyce Hatto.[22] On the 334th birthday of Johann Sebastian Bach, Google celebrated the occasion with a Google Doodle that allowed individuals to enter their own melody into the interface and have a machine learning model called Coconet[23] harmonize it.[24]

See also

Algorithmic composition
Computer models of musical creativity
Music cognition
Cognitive musicology
Musicology
Artificial neural network
MIDI
JFugue


External links

Computational Musicology: A Survey on Methodologies and Applications
Towards the compleat musicologist?
Transforming Musicology: An AHRC Digital Transformations project