Using recurrent neural networks to dream sequences of audio

Document Type

Conference Proceeding

Publication Date

1-1-2018

Abstract

Four approaches for generating novel sequences of audio using Recurrent Neural Networks (RNNs) are presented. The RNN models are trained to predict vectors of audio samples, magnitude spectrum windows, or single audio samples, one step at a time. In a process akin to dreaming, a trained model then generates new audio based on what it has learned. Three criteria are considered for assessing the quality of the predictions made by each of the four approaches. To demonstrate the use of the technology in creating music, two musical works are composed that use generated sequences of audio as source material. Each approach affords the user a different amount of control over the content of the generated audio. The generated outputs span a wide range, from sustained, slowly evolving textures to sequences of notes that do not occur in the input data.
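
The generation process the abstract calls "dreaming" is, at its core, autoregressive sampling: a model trained to predict the next unit of audio is seeded with some input and then fed its own predictions as subsequent inputs. The sketch below illustrates that loop for the single-sample case; the network architecture, seed signal, and 16 kHz sample rate are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch of RNN "dreaming" over raw audio samples, assuming a
# single-sample-per-step model (one of the four approaches the abstract
# mentions). The layer sizes and seed are hypothetical stand-ins.
import torch
import torch.nn as nn


class SamplePredictor(nn.Module):
    """Predicts the next audio sample from the previous samples."""

    def __init__(self, hidden_size=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden_size,
                           batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, time, 1) sequence of audio samples in [-1, 1]
        h, state = self.rnn(x, state)
        return self.out(h), state


def dream(model, seed, n_samples):
    """Generate audio by feeding the model's predictions back as input."""
    model.eval()
    samples = []
    with torch.no_grad():
        # Warm up the hidden state on the seed audio, then free-run.
        _, state = model(seed.view(1, -1, 1))
        x = seed[-1].view(1, 1, 1)
        for _ in range(n_samples):
            y, state = model(x, state)
            samples.append(y.item())
            x = y  # the prediction becomes the next input
    return torch.tensor(samples)


# Usage: seed with a short excerpt, then dream one second at an assumed
# 16 kHz rate. The model is untrained here, so the output is noise-like;
# in practice it would first be fit to a corpus of audio.
model = SamplePredictor()
seed = torch.sin(torch.linspace(0, 100, 1024))  # stand-in for real audio
audio = dream(model, seed, n_samples=16000)
```

The same loop applies to the vector and magnitude-spectrum variants, with the per-step unit widened from one sample to a frame; spectrum output additionally needs phase reconstruction before playback.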

Publication Source (Journal or Book title)

ICMC 2018: Proceedings of the 2018 International Computer Music Conference

First Page

11

Last Page

16
