Technology-driven music playlists

A tweet yesterday from Dr Kieran Fenby-Hulse got me thinking: ‘My favourite Spotify status ever: “People who listen to anonymous also listen to Thomas Tallis”’. Clearly, Spotify’s algorithms are still a work in progress. One of the influences on my research proposals was the speculative chapter Finale quasi una fantasia in Evan Eisenberg’s The Recording Angel. Here, Eisenberg imagines a future time when music is composed on the fly to suit our immediate emotional and cognitive needs – a bespoke musical entity, ‘listened’ to once and never repeated.

With this in mind, I was pleased to find some tentative steps along this road. This paper, Automatic Music Playlist Generation Using Affective Computing Technologies, by Griffiths, Cunningham and Weinel, describes current attempts to tailor musical output to a listener’s emotional states through the manipulation of sensory and environmental data inputs. This infographic, ‘A system overview of how a decision could be made’, is from their paper.

[Figure: ‘A system overview of how a decision could be made’, from Griffiths, Cunningham and Weinel’s paper]

I think this is a useful first pass at the problem. My gut feeling is that the internal workings of the FIS (Fuzzy Inference System) are far fuzzier and more labyrinthine than any reductionist approach can deal with. This is research worth following.
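To make the fuzzy inference idea a little more concrete, here is a minimal Mamdani-style sketch in Python. To be clear, this is my own toy illustration, not the system described in the paper: the sensor inputs (heart rate, ambient noise), the membership functions and the rule base are all invented for the purpose of showing how such a decision could, in principle, be made.

```python
# Toy Mamdani-style fuzzy inference sketch (NOT the authors' system).
# It maps two hypothetical sensor readings -- heart rate and ambient
# noise level -- to a 0-1 "music energy" score that a playlist
# generator might use to pick the next track.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def infer_music_energy(heart_rate_bpm, ambient_noise_db):
    # Fuzzify the crisp inputs into membership degrees.
    hr_low = tri(heart_rate_bpm, 40, 60, 80)
    hr_high = tri(heart_rate_bpm, 80, 110, 180)
    noise_quiet = tri(ambient_noise_db, 0, 30, 55)
    noise_loud = tri(ambient_noise_db, 50, 75, 100)

    # Rule base (min acts as AND): each rule fires with some strength
    # and recommends an output level on the 0-1 energy scale.
    rules = [
        (min(hr_low, noise_quiet), 0.2),   # calm listener, calm room  -> mellow music
        (min(hr_low, noise_loud), 0.5),    # calm listener, noisy room -> moderate
        (min(hr_high, noise_quiet), 0.6),  # aroused listener, quiet room -> moderate-high
        (min(hr_high, noise_loud), 0.9),   # aroused listener, noisy room -> energetic
    ]

    # Defuzzify with a weighted average of the recommended levels.
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.5  # no rule fired: fall back to a neutral energy level
    return sum(strength * level for strength, level in rules) / total

if __name__ == "__main__":
    # Example: a listener with an elevated heart rate in a noisy environment.
    print(infer_music_energy(heart_rate_bpm=95, ambient_noise_db=65))  # ~0.9
```

Even in this toy form the reductionism is obvious: everything hinges on hand-crafted membership functions and a tiny rule base, which is precisely where my scepticism about the real system lies.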

Jon Weinel’s Academia page is here: https://glyndwr.academia.edu/JonWeinel
