Digital Musicology

"Applied computational and informatics methods for enhancing musicology"

Workshop Organiser: Kevin Page, Oxford e-Research Centre, University of Oxford

Abstract

A wealth of music and music-related information is now available digitally, offering tantalizing possibilities for digital musicologies. These resources include large collections of audio and scores, bibliographic and biographic data, and performance ephemera -- not to mention the 'hidden' existence of these in other digital content. With such large and wide-ranging opportunities come new challenges in methods, principally in adapting technological solutions to assist musicologists in identifying, studying, and disseminating scholarly insights from amongst this 'data deluge'.

This workshop provides an introduction to computational and informatics methods that can be, and have been, successfully applied to musicology. Many of these techniques have their foundations in computer science, library and information science, mathematics and most recently Music Information Retrieval (MIR); sessions are delivered by expert practitioners from these fields and presented in the context of their collaborations with musicologists, and by musicologists relating their experiences of these multidisciplinary investigations.

The workshop comprises a series of lectures and hands-on sessions, supplemented with reports from musicology research exemplars. Theoretical lectures are paired with practical sessions in which attendees are guided through their own exploration of the topics and tools covered. Laptops will be loaned to attendees with the appropriate specialised software installed and preconfigured.

Timetable

Times
Monday 20 July 2015
Tuesday 21 July 2015
Wednesday 22 July 2015
Thursday 23 July 2015
Friday 24 July 2015
Morning:
11:00 - 12:30
Welcome and Housekeeping Kevin Page

Introduction to Digital Musicology Tim Crawford and J. Stephen Downie
Computational tools, such as those of music information retrieval (MIR), are being enhanced and adapted to the needs of musicologists. These offer new, rapid and effective ways of investigating large collections of music in audio or score form. Recent advances in Web technology also allow researchers to record and share their results and working methods in a sustainable way, so their methods can be easily altered in the light of new knowledge or re-used on new data. This session provides an overview of how these technologies fit together in a musicological context, setting the scene for the week ahead.

Roundtable introduction from attendees Chair: Tim Crawford
Big Data and Other Digital Strategies for Historical Musicologists Stephen Rose
This session introduces a range of digital strategies for researching music history. It focuses on the analysis, manipulation and visualisation of high-volume open data, particularly music-bibliographical datasets such as RISM (Répertoire international des sources musicales, https://opac.rism.info/index.php?id=8&L=0) and the British Library's metadata for printed music (http://www.bl.uk/bibliographic/datafree.html). It also suggests ways to create a symbiosis between recent digital techniques and the older methods of historical musicology.
Representing musicological knowledge on the Web using Linked Data Kevin Page
The Semantic Web can be thought of as an extension of the WWW in which sufficient meaning is captured and encoded such that computers can automatically match, retrieve, and link resources across the internet that are related to each other. In a scholarly context this offers significant opportunities for publishing, referencing, and re-using digital research output. In this session we introduce the principles and technologies behind this 'Linked Data', illustrated through examples from musicological study.
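The triple model underlying Linked Data can be sketched in a few lines of plain Python (no RDF library assumed; all URIs below are hypothetical examples). Every statement is a (subject, predicate, object) triple, and because subjects and predicates are URIs, statements made independently about the same resource can be merged and followed as links:

```python
# Minimal sketch of the Linked Data triple model in plain Python.
# Each statement is a (subject, predicate, object) triple; the URIs
# here are illustrative, not real vocabularies.
triples = [
    ("http://example.org/work/ring", "http://example.org/vocab/composer",
     "http://example.org/person/wagner"),
    ("http://example.org/person/wagner", "http://example.org/vocab/name",
     "Richard Wagner"),
    ("http://example.org/work/ring", "http://example.org/vocab/title",
     "Der Ring des Nibelungen"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Who composed the Ring? Follow the composer link, then look up the name.
composer = query(triples, subject="http://example.org/work/ring",
                 predicate="http://example.org/vocab/composer")[0][2]
name = query(triples, subject=composer,
             predicate="http://example.org/vocab/name")[0][2]
print(name)  # Richard Wagner
```

In practice a library such as rdflib and shared vocabularies would replace these ad hoc URIs, but the pattern-matching idea is the same.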

An overview of software and data management best practice David M. Weigl and Richard Lewis
Revision control refers to a set of practices to track and control changes to your project files. Learn how to manage, revise, and collaborate on digital documents; how to revert files back to a previous state; and how to see when a particular change was introduced, and who was responsible.
Blind alleys, science fiction, redundancy and modernization: how musicology is and isn't evolving in response to the digital world Julia Craig-McFeely
We live in interesting times: 20 years ago the digital medium had little or no impact on scholarly activity; today the majority of active scholars learned their research techniques and methodologies in a print-oriented world. Born-digital research styles and scholars see data differently from those moulded by the print age: not only are research methodologies evolving, but also the way in which we manage digital content as we change from replicating or adapting print structures and thought processes to an understanding of content that has no roots in print culture. The session discusses issues facing a project born in the paper age as it attempts to meet the demands of digital modes of thought and research.

Automatic transcription of scanned notation: state of the art and applications (Part 1) Optical Music Recognition Ichiro Fujinaga
A brief history of optical music recognition research will be followed by an introduction to various challenges posed by trying to recognize musical scores both in printed and handwritten formats. An overview of the technologies used will also be presented including those currently under development within the SIMSSA project.
A case study in Early Music, from digitisation to Linked Data: experiences from EMO, ECOLM, SLICKMEM, and SLoBR Tim Crawford, David Lewis, Kevin Page, and David M. Weigl
We base our presentation on our experiences with a large collection of images of historical music prints (EMO). Using optical recognition methods, we encoded a representative test-set automatically. We shall describe what further work is needed to enable useful and interesting searches, comparisons and other musical investigations on incorporating both the resulting corpus and relevant external resources.

Describing music performance and interpretation: digitally researching Wagner and the leitmotif Carolin Rindfleisch, Kevin Page, and David M. Weigl
How are Wagner's Music Dramas heard, seen and interpreted in different cultural and historical situations? Reaching from systematising the huge corpus of audience-aimed introductory literature about Wagner's Ring, to digitally capturing characteristics of a certain performance, to new ways of structuring, presenting and linking our own interpretations in a digital environment, this session presents a variety of possibilities which Digital Musicology holds for this particular case study.
Afternoon:
14:00 - 17:30 (inc. break)
Hands on: Using computers to analyse recordings An introduction to signal processing Christophe Rhodes and Chris Cannam
This session, and the following hands-on session, introduces the basics of computational treatment of recordings of music, which is based on the concept of 'features' derivable from this 'signal' by suitable processing. The hands-on session will expose you to software for extracting features from recordings, visualising those features, and will help you understand how features relate to perceptual and musical concepts.
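To make the idea of a 'feature' concrete, here is a toy sketch (an assumed illustration, not the session's actual software): frame-by-frame RMS energy of a synthetic signal, one of the simplest audio features, loosely tracking loudness over time:

```python
import math

SAMPLE_RATE = 8000
FRAME_SIZE = 400  # 50 ms frames at 8 kHz

# Synthesize 1 second of signal: a quiet half followed by a loud half.
signal = [0.1 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE // 2)]
signal += [0.8 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE // 2)]

def rms_features(samples, frame_size):
    """One RMS energy value per non-overlapping frame."""
    return [math.sqrt(sum(x * x for x in samples[i:i + frame_size])
                      / frame_size)
            for i in range(0, len(samples) - frame_size + 1, frame_size)]

features = rms_features(signal, FRAME_SIZE)
# The loud half of the signal yields clearly higher RMS values.
print(features[0], features[-1])
```

Tools such as Sonic Visualiser compute far richer features (spectral, rhythmic, harmonic), but each follows this same signal-in, feature-series-out shape.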

Hands on: Using computers to analyse recordings Practical feature extraction for musicology Christophe Rhodes, Chris Cannam, and David M. Weigl
This session builds on the previous one.

Using computer analyses to index and find recordings Feature search and retrieval Christophe Rhodes
Having previously covered the extraction of features from musical recordings, in this session you will be introduced to the technique of using geometrical distance to quantify the similarity between sets of features, and we will relate application of that technique to the task of finding recordings of interest within a larger collection.
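
The geometrical-distance idea above can be sketched with toy data (the feature vectors and recording names below are hypothetical): each recording is a point in feature space, and the collection is ranked by Euclidean distance from the query:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical per-recording feature vectors,
# e.g. (mean tempo in BPM, mean RMS, mean spectral centroid in Hz).
collection = {
    "recording_a": [120.0, 0.40, 2100.0],
    "recording_b": [60.0, 0.10, 900.0],
    "recording_c": [118.0, 0.38, 2040.0],
}
query = [119.0, 0.39, 2050.0]

ranked = sorted(collection, key=lambda name: euclidean(query, collection[name]))
print(ranked[0])  # the recording whose features lie closest to the query
```

A real system would first normalise each feature dimension so that, for example, centroid values in the thousands do not swamp RMS values below one; the ranking step itself is unchanged.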
Training computers automatically to recognise patterns in recordings Practical machine learning Ben Fields, J. Stephen Downie, and David M. Weigl
When analysing a large corpus of audio, a limiting factor is time: it is not practical to find patterns in very large collections of audio by just listening. So-called 'machine learning' techniques offer a means around this limit. In this session we will show you how to use modern machine learning techniques to distill out patterns in large collections of audio, without exhaustive human audition.
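As a sketch of the machine-learning idea (toy data, not the session's actual toolchain): learn a centroid per class from a handful of labelled feature vectors, then label unseen recordings by their nearest centroid. This nearest-centroid classifier is among the simplest pattern-recognition methods:

```python
import math

# Labelled training data: (feature vector, class label).
# The two features might be, say, normalised RMS and onset density.
training = [
    ([0.10, 0.20], "quiet"), ([0.15, 0.25], "quiet"), ([0.20, 0.10], "quiet"),
    ([0.80, 0.90], "loud"),  ([0.90, 0.85], "loud"),  ([0.75, 0.95], "loud"),
]

def fit(data):
    """Average each class's feature vectors into a centroid."""
    by_label = {}
    for vec, label in data:
        by_label.setdefault(label, []).append(vec)
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in by_label.items()}

def predict(centroids, vec):
    """Label an unseen vector by its nearest class centroid."""
    return min(centroids, key=lambda label: math.dist(vec, centroids[label]))

model = fit(training)
print(predict(model, [0.85, 0.80]))   # loud
print(predict(model, [0.12, 0.18]))   # quiet
```

Real sessions would use richer features and stronger models, but the train-on-labelled-examples, predict-on-the-rest workflow is the same one applied at corpus scale.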

Methods for analysing large-scale resources and big music data Ben Fields and Tillman Weyde
This session will take the tools of the last few sessions and consider the effects and consequences of scale. We will look at how you can manage and mitigate the problems of working with very large amounts of data. We will explore techniques that work best at this scale, using music collections from the British Library to explore, analyse, and compare large datasets across historic, cultural, and musical dimensions.
Digitised Notated Music: hands on with MEI and MusicXML Richard Lewis, David Lewis, and David M. Weigl
There are two broad domains of digitised music: audio and so-called symbolic, which includes encodings of music notation. In this session we introduce two music notation formats: MEI and MusicXML. We learn the models of music notation they employ, the text critical apparatus they provide, and how to prepare documents in these formats.
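To show what 'symbolic' data looks like in practice, here is a sketch that parses a tiny hand-written MusicXML-style fragment with the Python standard library and lists the encoded pitches (a real score uses the full score-partwise structure; this fragment keeps only enough to show notes as structured data rather than audio):

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the MusicXML style: one measure, three notes.
fragment = """
<measure number="1">
  <note><pitch><step>C</step><octave>4</octave></pitch><duration>4</duration></note>
  <note><pitch><step>E</step><octave>4</octave></pitch><duration>4</duration></note>
  <note><pitch><step>G</step><octave>4</octave></pitch><duration>4</duration></note>
</measure>
"""

measure = ET.fromstring(fragment)
pitches = [note.find("pitch/step").text + note.find("pitch/octave").text
           for note in measure.findall("note")]
print(pitches)  # ['C4', 'E4', 'G4']
```

Because the notation is explicit structured data, queries like "list every pitch in measure 1" are trivial here, where they would require error-prone transcription from audio.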

Web-scale analysis of music: lessons from the SALAMI project Experiences in ground truth, big data, and structural analysis David De Roure, J. Stephen Downie, and Ichiro Fujinaga
Musical analysis has traditionally been conducted by individuals, and on a small scale. In the SALAMI (Structural Analysis of Large Amounts of Music Information) project, music students performed structural analyses over a substantial music corpus in order to deliver a "ground truth", and this was coupled with computational techniques established in the global music information retrieval research community. SALAMI exemplifies current social and "big data" approaches to take advantage of the huge volume of recorded music content data now available.
Automatic transcription of scanned notation: state of the art and applications (Part 2) Optical Music Recognition Ichiro Fujinaga
(continued)

Computer processing of digital notated music: hands on with music21 Working with symbolic music data Richard Lewis, David Lewis, and David M. Weigl
Given a corpus of digital musical documents, how can we explore its contents? In this session we introduce the music21 toolkit which allows us to search for patterns in such music corpora and to prepare reproducible analytic tools. We learn its specialist query language and some basic Python programming techniques.
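
The kind of query the session covers can be sketched without the music21 library itself (this is a library-free toy, with melodies invented for illustration): search a small corpus of melodies, held as MIDI pitch numbers, for a melodic pattern by its interval sequence, so matches are found in any transposition:

```python
corpus = {
    "piece_a": [60, 62, 64, 65, 67],   # C D E F G
    "piece_b": [67, 69, 71, 72, 74],   # same contour, up a fifth
    "piece_c": [60, 59, 57, 55, 53],   # descending line
}

def intervals(pitches):
    """Successive semitone steps between pitches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def find_pattern(corpus, pattern_pitches):
    """Return pieces containing the pattern's interval sequence."""
    target = intervals(pattern_pitches)
    hits = []
    for name, melody in corpus.items():
        seq = intervals(melody)
        if any(seq[i:i + len(target)] == target
               for i in range(len(seq) - len(target) + 1)):
            hits.append(name)
    return hits

# The do-re-mi shape matches both the original and the transposition.
print(find_pattern(corpus, [60, 62, 64]))
```

music21 provides this kind of search (and much more) over real encoded scores; the sketch only shows why interval-based matching, rather than absolute pitch, is the natural query.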
The challenges and opportunities of finding music and music scholarship in the 4.6 billion pages of the HathiTrust Digital Library J. Stephen Downie and Ichiro Fujinaga
Very large collections of digital materials have the potential to transform musicology in both theory and practice. However, large corpora such as the HathiTrust Digital Library create as many challenges as opportunities. This session explores these challenges and highlights work being done to maximize the benefits.

In Concert: towards a collaborative digital archive of musical ephemera Rachel Cowgill
With the cultural turn in musicology, scholars' attention has shifted to traditions and cultures of performance and to related collections of ephemera. Traditionally overlooked, such material can yield rich, complex, and highly structured data with the potential to inform our understanding of taste, canon formation, musicians' careers, the development of institutions, and the socio-economic contexts within which live music has been presented to audiences. This session reports on a project designed to gather, systematize, and interrogate data of this nature, and proposes new ways of writing and thinking about performance history.

Round table discussion: applied digital musicology in your research Rachel Cowgill, Tim Crawford, Ichiro Fujinaga, David Lewis, Kevin Page, and Carolin Rindfleisch

There are 16 individual speakers in this workshop.

  • Chris Cannam
    Centre for Digital Music, Queen Mary University London

    Chris Cannam is Principal Research Software Developer in the Centre for Digital Music at Queen Mary University of London, where he works with researchers to produce useful software for music analysis. He is the primary author of the Sonic Visualiser application and many of its plugins.

  • Rachel Cowgill
    Music & Drama, University of Huddersfield

    Professor Rachel Cowgill (Head of Music & Drama, University of Huddersfield) is a musicologist specialising in British musical cultures. She is PI of the AHRC-funded InConcert, working alongside Professors Simon McVeigh, Christina Bashford, Alan Dix, and Dr Rupert Ridgewell (British Library).

  • Julia Craig-McFeely
    Faculty of Music, University of Oxford

    Julia Craig-McFeely has managed the Digital Image Archive of Medieval Music since 1998, and was one of the consultants for the digitization of the Dead Sea Scrolls. She has developed digital restoration techniques for damaged manuscripts which have been applied to many different types of sources. She is currently a research fellow at the Faculty of Music in Oxford and co-investigator of the AHRC-funded Tudor Partbooks project.

  • Tim Crawford
    Computing Department, Goldsmiths, University of London

    Tim Crawford worked as a professional lutenist, playing on several recordings made during the 1980s. As a musicologist he studies lute music of the 16th to 18th centuries. Since the early 1990s he has been active in the rapidly-expanding field of MIR and was President of ISMIR for two years. He is PI of the AHRC-funded Transforming Musicology project.

  • David De Roure
    Oxford e-Research Centre, University of Oxford

    David De Roure is Professor of e-Research at the University of Oxford, where he directs the multidisciplinary e-Research Centre. Focused on advancing digital scholarship, David has conducted research across disciplines in the areas of social machines, computational musicology, Web Science, social computing, and hypertext. He is a frequent speaker and writer on digital scholarship and the future of scholarly communications, and advises the UK Economic and Social Research Council in the area of Social Media Data and realtime analytics.

  • J. Stephen Downie
    Graduate School of Library and Information Science, University of Illinois, Urbana-Champaign

    J. Stephen Downie is a professor and the associate dean for research at the Graduate School of Library and Information Science, University of Illinois. Dr. Downie conducts research in music information retrieval. He was instrumental in founding both the International Society for Music Information Retrieval and the Music Information Retrieval Evaluation eXchange.

  • Ben Fields
    Computing Department, Goldsmiths, University of London

    Ben Fields is a post-doctoral researcher in Computing at Goldsmiths, University of London. He works on Transforming Musicology, focusing on the social media aspect of the project. His research interests include network analytics, audio signal processing, recommender systems, and Linked Data. Ben also runs the data-centric consulting agency Fun and Plausible Solutions. There he works with companies to better understand and leverage their data.

  • Ichiro Fujinaga
    Schulich School of Music, McGill University

Ichiro Fujinaga is an Associate Professor and the Chair of the Music Technology Area at the Schulich School of Music at McGill University. He has Bachelor's degrees in Music/Percussion and Mathematics from the University of Alberta, and a Master's degree in Music Theory and a Ph.D. in Music Technology from McGill.

  • David Lewis
    Computing Department, Goldsmiths, University of London

    David Lewis is a researcher based at Goldsmiths, University of London and Birmingham Conservatoire. His research focusses on the creation, dissemination and use of digital corpora of music (such as the Electronic Corpus of Lute Music) and music theory (earlymusictheory.org and Thesaurus Musicarum Italicarum).

  • Richard Lewis
    Computing Department, Goldsmiths, University of London

    Richard Lewis is a research associate at Goldsmiths College. He received his BA in Music and his MMus in Critical Musicology both from UEA and his doctoral work, carried out at Goldsmiths, explored issues around the uptake of computational techniques by musicologists.

  • Kevin Page
    Oxford e-Research Centre, University of Oxford

Dr. Kevin Page is a researcher at the University of Oxford e-Research Centre. His work on web architecture and the semantic annotation and distribution of data has, through participation in several UK, EU, and international projects, been applied across a wide variety of domains including sensor networks, music information retrieval, clinical healthcare, and remote collaboration for space exploration. He is principal investigator of the Early English Print in HathiTrust (ElEPHãT) and Semantic Linking of BBC Radio (SLoBR) projects, and leads Linked Data research within the AHRC Transforming Musicology project.

  • Christophe Rhodes
    Computing Department, Goldsmiths, University of London

    Christophe Rhodes's research career has spanned Cosmology, Software Development and Music Informatics, providing ideas and implementations for the European Space Agency's PLANCK satellite, Google's Flight Search, Yahoo! Music and Flickr. He co-founded Teclo Networks, developing and selling network infrastructure for mobile telecommunications, and now lectures at Goldsmiths, University of London.

  • Carolin Rindfleisch
    Faculty of Music, University of Oxford

Carolin Rindfleisch studied "Music, Art and Media" at the Philipps-University Marburg and Musicology at the Humboldt-University in Berlin. She is currently a DPhil student at the University of Oxford within the "Transforming Musicology" project, researching the reception of Richard Wagner by comparing varying interpretations of leitmotifs from Der Ring des Nibelungen in work introductions and opera guides.

  • Stephen Rose
    Department of Music, Royal Holloway, University of London

    Stephen Rose is Reader in Music at Royal Holloway, University of London. His specialisms include German music 1500-1750 and digital musicology. He has directed two collaborative projects with the British Library: Early Music Online (2011) and A Big Data History of Music (2014–15). In 2015–16 he holds a British Academy Mid-Career Fellowship for a project on musical authorship from Schütz to Bach.

  • David M. Weigl
    Oxford e-Research Centre, University of Oxford

    David M. Weigl is a postdoctoral research associate at the University of Oxford e-Research Centre. His work involves the application of Linked Data and semantic technologies in order to enrich digital music information and facilitate access to a variety of musical data sources. His research interests revolve around music perception and cognition, and music information retrieval.

  • Tillman Weyde
    Department of Computer Science, City University London

    Tillman Weyde studied Music, Mathematics, and Computer Science, and has been an active researcher for over 20 years on the intersection between machine learning, artificial intelligence, data science, as well as music and signal analysis. Tillman is a Senior Lecturer in the Department of Computer Science at City University London and leads the Music Informatics Research Group there. He is the Principal Investigator in the AHRC Digital Transformation Project 'Digital Music Lab - Analysing Big Music Data'.

Notes

Workshop Venue: All of your sessions will be in the Oxford e-Research Centre Conference Room. We'll make sure you know how to get there.

AM and PM Refreshment Breaks: All breaks will be in the Atrium, OERC. Please go directly to the OERC after your lecture each morning.

Lunch Arrangements: Lunch each day will be in the Atrium, OERC.

Computers: Computers will be provided with specialised software installed.

Group Colour: Orange

Site last updated: 2015-07-15 -- Contact: events@it.ox.ac.uk