AVAI: A tool for expressive music visualization based on autoencoders and constant Q transformation

Simon Borst Tyroll, Daniel Overholt, George Palamas

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Abstract

This paper describes an approach that uses machine learning to extract the expressiveness of sound and translate it into light. An autoencoder was trained on several musical examples, learning to compress a constant-Q-transformed audio input, and the latent space representation was then used to generate visuals in real time. A single interactive control parameter, the interpolation speed between new and current values, was provided for customization. The expressiveness of the visuals was tested across a variety of musical textures and genres and rated by participants. Results indicate that participants found the system's translations visually expressive and reacted very positively to the experience. The control parameter was also tested for its customization potential and found to be an effective tool for letting participants adjust the visual expression. The method for expressive visualization used in the system shows promise for further development.
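
The sketch below illustrates the pipeline the abstract describes, assuming librosa for the constant-Q transform and PyTorch for the autoencoder. The paper's actual framework, architecture, and parameter values are not given on this page, so every name, layer size, and default value here is illustrative, not the authors' implementation.

```python
# Illustrative sketch: CQT frames -> autoencoder latent vectors -> smoothed
# control signal for a visual generator. All sizes are hypothetical.
import librosa
import numpy as np
import torch
import torch.nn as nn

N_BINS = 84       # CQT bins (hypothetical choice)
LATENT_DIM = 8    # latent size driving the visuals (hypothetical choice)

class Autoencoder(nn.Module):
    def __init__(self, n_bins=N_BINS, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_bins),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def cqt_frames(path):
    """Return magnitude CQT frames as a (time, bins) float tensor."""
    y, sr = librosa.load(path, sr=None, mono=True)
    C = np.abs(librosa.cqt(y, sr=sr, n_bins=N_BINS))
    return torch.from_numpy(C.T).float()

def smooth_latents(model, frames, speed=0.2):
    """Yield latent vectors eased toward each new frame's encoding.

    `speed` in (0, 1] plays the role of the abstract's single control
    parameter: the interpolation speed between new and current values.
    Training (e.g. MSE reconstruction loss) is omitted for brevity.
    """
    current = torch.zeros(LATENT_DIM)
    with torch.no_grad():
        for frame in frames:
            _, z = model(frame)
            current = current + speed * (z - current)  # lerp toward target
            yield current  # feed this vector to the visual generator
```

A higher `speed` makes the visuals track the audio tightly; a lower one smooths transitions, which matches the customization role the abstract assigns to the parameter.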
Original language: English
Title of host publication: Proceedings of the 17th Sound and Music Computing Conference
Editors: S. Spagnol, A. Valle
Number of pages: 8
Publisher: Axea sas/SMC Network
Publication date: 20 Jun 2020
Pages: 378-385
ISBN (Electronic): 978-88-945415-0-2
DOIs
Publication status: Published - 20 Jun 2020
Event: 17th Sound and Music Computing Conference, Torino, Italy
Duration: 24 Jun 2020 – 26 Jun 2020
Conference number: 17
https://smc2020torino.it/uk/

Conference

Conference: 17th Sound and Music Computing Conference
Number: 17
Country/Territory: Italy
City: Torino
Period: 24/06/2020 – 26/06/2020
Internet address: https://smc2020torino.it/uk/
Series: Proceedings of the Sound and Music Computing Conference
ISSN: 2518-3672
