Tomorrow, Thursday 7 February, from 12:00 to 13:00 in Frederikskaj 10A, room 2.164, Dr. Purwins will give a seminar titled: Audio-visual Time Series Analysis: Experiments, Computational Analysis, & Cognitive Models
Abstract:
In this talk, I will give a rough overview of my past work and ongoing preliminary work, highlighting a couple of cases:
1) EEG for Musical Brain-Computer Interface, and for the assessment of 3D television technology;
2) Audio-visual dictionary learning and cognitive and generative modeling.
In 3D cinema, the brain has to compensate for the dissociation between accommodation and vergence (the simultaneous movement of both eyes in opposite directions to obtain or maintain single binocular vision), and has to suppress the accommodation reflex during a sudden change of perceived depth. I will present neural signatures of stress induced by sudden depth changes when viewing films in 3D television/cinema (ongoing work). I will then introduce a cognitively inspired statistical model that learns the rhythmical structure of percussion sequences from an audio example in an unsupervised manner, providing a representation that can be used for modeling musical expectation and for generating stylistically similar, musically interesting variations. Basing real-time human-computer interaction on cognitively plausible principles makes such a system smarter and more responsive to user actions during interaction, improvisation, and performance. Finally, I will discuss possible avenues for future research.