Artificial Intelligence for Auditory MEG Decoding

Date/Time: 14 April 2026, 10:15

Venue: HS 424, Hellbrunner Straße 34, 5020 Salzburg

Speaker: Jean-Félix Maestrati, MSc

Registration: NOT required

Abstract:

AI provides new ways to study brain signals by learning directly from data rather than relying on predefined linear assumptions. In auditory neuroscience, approaches such as temporal response functions (TRFs) are widely used to relate stimulus features to MEG activity.
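For readers unfamiliar with TRFs: in the simplest case they are the weights of a regularized linear regression from time-lagged stimulus features to the brain signal. The sketch below illustrates this on synthetic data; the signal shapes, lag count, and regularization strength are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Minimal TRF sketch: ridge regression from a time-lagged stimulus
# to one simulated MEG channel. All data here are synthetic.
rng = np.random.default_rng(0)
n_times, n_lags = 1000, 20                 # time samples, stimulus lags
stimulus = rng.standard_normal(n_times)

# Lagged design matrix: column k holds the stimulus delayed by k samples.
X = np.zeros((n_times, n_lags))
for k in range(n_lags):
    X[k:, k] = stimulus[:n_times - k]

true_trf = np.exp(-np.arange(n_lags) / 5.0)              # a decaying "true" kernel
meg = X @ true_trf + 0.1 * rng.standard_normal(n_times)  # simulated channel

# Ridge solution: w = (X'X + lambda*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ meg)
```

With enough data and modest noise, `w` closely recovers the decaying kernel; the nonlinear, multi-subject structure mentioned below is exactly what this linear estimator cannot capture.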
While these methods are interpretable and effective, they remain linear and may miss more complex structure in the data, including patterns shared across subjects. AI-based methods offer a broader framework by modeling nonlinear relationships and learning rich internal representations, optimizing for more complex objectives such as decoding structured sound representations.
In this talk, I will introduce the general idea of AI-based brain decoding and discuss how these methods can complement classical analyses of brain signals. Using MEG recordings from participants listening to continuous speech, together with clinical data, I explore whether deep learning models can recover meaningful information about the sound and hearing-related clinical variables directly from sensor-level activity. More broadly, I will use this work to illustrate why AI is promising for neuroscience, how it can improve existing analysis pipelines, and how it may contribute to hearing loss assessment.

Invited by Weisz/Weigl
