
Talk by Mads Græsbøll Christensen & Hendrik Purwins


Mads Græsbøll Christensen & Hendrik Purwins of the Audio Analysis Lab, Aalborg University, gave a talk at the Danish Neuroscience Center in connection with the Music in the Brain seminar series.

Title: The Audio Analysis Lab, Modelling Musical Category Formation, and Neural Correlates of Musical Attention

Abstract:

The talk has three parts:
I. The Audio Analysis Lab was founded in 2012 and is located at the Dept. of Architecture, Design & Media Technology at Aalborg University in Denmark. The lab conducts basic and applied research in signal processing theory and methods aimed at or involving the analysis of audio signals. The research currently focuses on audio processing for communication systems (VoIP, cellphones, etc.), hearing aids, music equipment, surveillance, and audio archives (e.g., compression and information retrieval). In this talk, we will present the lab, its members, and its major ongoing projects, and highlight its most significant contributions so far.
II. We present a system that learns the rhythmical structure of percussion sequences from an audio example in an unsupervised manner, providing a representation that can be used to generate stylistically similar and musically interesting variations. The procedure consists of segmentation and symbolization (feature extraction, clustering, sequence structure analysis, temporal alignment). In a top-down manner, an entropy-based regularity measure determines the number of clusters into which the samples are grouped. A variant of that system adjusts the number of (timbre) clusters instantaneously to the audio input, and a sequence learning algorithm adapts its structure to a dynamically changing clustering tree. The prediction of the entire system is evaluated using the adjusted Rand Index, yielding good results.
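As a rough illustration of the symbolization and evaluation steps, the sketch below groups per-segment timbre features into clusters and scores the resulting symbol sequence with the adjusted Rand Index. The synthetic features, the KMeans symbolization, and the naive entropy-based choice of cluster count are simplified stand-ins for the system described above, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Stand-in for per-segment timbre features (e.g., MFCCs of onset segments);
# the real system extracts these from the segmented percussion audio.
mfcc_features = rng.normal(size=(200, 13))

def symbolize(features, n_clusters):
    """Map each segment to a cluster index, i.e., a timbre 'symbol'."""
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

def normalized_entropy(symbols):
    """Shannon entropy of the symbol distribution, normalized to [0, 1]."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(len(p))) if len(p) > 1 else 0.0

# Top-down model selection: pick the cluster count whose symbolization is
# most "regular" (a crude stand-in for the paper's entropy-based measure).
best_k = min(range(2, 9), key=lambda k: normalized_entropy(symbolize(mfcc_features, k)))
predicted = symbolize(mfcc_features, best_k)

# Score the symbolization against ground-truth segment labels (random here)
# with the adjusted Rand Index, the metric named in the abstract.
truth = rng.integers(0, 4, size=200)
print("clusters:", best_k, "ARI:", adjusted_rand_score(truth, predicted))
```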
III. In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified online with a mean accuracy of 91% across 11 participants. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain-computer interface and music research.
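For readers unfamiliar with ERP classification, here is a minimal, hypothetical sketch of the kind of pipeline such a result rests on: epochs are reduced to windowed channel means and classified with shrinkage LDA, a standard ERP/BCI baseline. The synthetic data, channel indices, and window sizes are assumptions for illustration; the study's actual pipeline is described in the linked paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_epochs, n_channels, n_samples = 300, 32, 128

# Synthetic EEG epochs: attended-instrument events carry a small
# P300-like deflection over a few channels around sample 70-85
# (roughly 300 ms post-onset at a hypothetical 256 Hz sampling rate).
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)  # 1 = attended instrument
X[y == 1, 10:16, 70:85] += 0.5         # hypothetical P300 bump

# Feature vector: channel means over consecutive time windows, a
# standard spatio-temporal feature set for ERP classification.
windows = X.reshape(n_epochs, n_channels, 8, 16).mean(axis=3)
features = windows.reshape(n_epochs, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("CV accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```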
Links:
I: http://www.create.aau.dk/audio/
II: http://www.youtube.com/user/audiocontinuation
http://link.springer.com/chapter/10.1007%2F978-3-642-23126-1_14
http://arxiv.org/abs/1502.00524
III: http://vbn.aau.dk/files/197609875/musicBCI_11.pdf
