Tomorrow, Thursday 7 February, from 12:00 to 13:00 in Frederikskaj 10A, room 2.164, Dr. Purwins will give a seminar titled: Audio-visual Time Series Analysis: Experiments, Computational Analysis, & Cognitive Models
Abstract:
In this talk, I will give an overview of my past and ongoing work, highlighting two cases:
1) EEG for Musical Brain-Computer Interface, and for the assessment of 3D television technology;
2) Audio-visual dictionary learning and cognitive and generative modeling.
In 3D cinema, the brain has to compensate for the dissociation between accommodation and vergence (the simultaneous movement of both eyes in opposite directions to obtain or maintain single binocular vision) and has to suppress the accommodation reflex during sudden changes of perceived depth. I will present neural signatures of stress induced by sudden depth changes when viewing films in 3D television/cinema (ongoing work). Then I will introduce a statistical, cognitively inspired model that learns the rhythmical structure of percussion sequences from an audio example in an unsupervised manner, providing a representation that can be used for modeling musical expectation and for generating stylistically similar and musically interesting variations. Basing real-time human-computer interaction on cognitively plausible principles makes such a system smarter and more reactive to user actions during interaction, improvisation, and performance. Finally, I will discuss possible avenues for future research.
Biography:
After concluding his studies in mathematics at the universities of Bonn and Münster with a diploma (best possible final grade “A” in all subjects), Dr. Purwins was awarded a Ph.D. on machine learning and audio signal processing applied to music, supported by a scholarship of the Studienstiftung des deutschen Volkes (awarded only to the top 0.5% of students at German universities), after studying at the Berlin Institute of Technology (BIT), CCRMA, Stanford, and the Psychology Department of McGill University. Since then, he has been a guest professor, senior researcher, and lecturer at the Music Technology Group at Pompeu Fabra University, Barcelona, and a guest researcher at IRCAM, Paris. Dr. Purwins has also been Head of Research and Development at PMC Technologies, collaborating with 3 of the 10 world-leading semiconductor manufacturers. Currently, Dr. Purwins is collaborating with Sony on neurotechnology-based mental state monitoring for the assessment of 3D television technology, music cognition, and a musical brain-computer interface at the Neurotechnology Group / Berlin Brain-Computer Interface, BIT.
Dr. Purwins has (co-)authored more than 60 scientific papers and is first author of three articles in journals with a JCR impact factor of 7. He is responsible for the acquisition of research grants worth more than 800,000 Euros and has received 10 personal research grants and prizes. Dr. Purwins has led research teams in 3 European projects (music cognition models: http://emcap.iua.upf.edu/; measurement-assisted sound design: http://closed.ircam.fr/). He has taught more than 1000 hours in audio-visual signal processing, machine learning, and perception & cognition.
Dr. Purwins’ research interests comprise the synergistic combination of signal processing, machine learning, and experimental methods (psychological experiments and EEG-based mental state monitoring) for building audio-visual models, and their application to resynthesis, visualization, and user interaction & adaptation.