Date: Wednesday, 29 September 2021
Time: 10:15 - 11:00 (CEST)
Venue: CLARIN virtual Zoom meeting
About the Panel
Moderator
Henk van den Heuvel
Panellists
Gloria Gagliardi
Stefan Goetze
Saturnino Luz
Bio
Saturnino Luz is a Reader at the Usher Institute, University of Edinburgh's Medical School (UK). He works in medical informatics, devising and applying machine learning, signal processing and natural language processing methods in the study of behaviour and communication in healthcare contexts. His main research interest is the computational modelling of behavioural and biological changes caused by neurodegenerative diseases, with a focus on the analysis of vocal and linguistic signals in Alzheimer's disease.
Luz, S., Haider, F., de la Fuente, S., Fromm, D., and MacWhinney, B. (2020) Alzheimer's Dementia Recognition Through Spontaneous Speech: The ADReSS Challenge. Proc. Interspeech 2020, 2172-2176. DOI: https://doi.org/10.21437/Interspeech.2020-2571
Martinc, M., Haider, F., Pollak, S., and Luz, S. (2021) Temporal Integration of Text Transcripts and Acoustic Features for Alzheimer's Diagnosis Based on Spontaneous Speech. Front. Aging Neurosci. 13:642647. DOI: https://doi.org/10.3389/fnagi.2021.642647
de la Fuente Garcia, S., Ritchie, C. W., and Luz, S. (2020) Artificial Intelligence, Speech, and Language Processing Approaches to Monitoring Alzheimer's Disease: A Systematic Review. Journal of Alzheimer's Disease, 1547-1574. DOI: https://doi.org/10.3233/JAD-200888
Khiet Truong
Bio
- Nazareth, D. S., Jansen, M. P., Truong, K. P., Westerhof, G. J., & Heylen, D. (2019, September). Memoa: Introducing the multi-modal emotional memories of older adults database. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 697-703). IEEE.
- Jansen, M. P., Truong, K. P., Heylen, D. K. J., & Nazareth, D. S. (2020, May). Introducing MULAI: A Multimodal Database of Laughter during Dyadic Interactions. In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 4333-4342).
- Supporting data storage and sharing in a secure environment.
- Offering tools and (standardized) pipelines in a user-friendly way for processing large multimodal (i.e., audiovisual) datasets, with an emphasis on user-friendliness: we often work in multidisciplinary teams, and the tools used in spoken and natural language processing are not always self-explanatory.
- Raising awareness of the existence of spoken and natural language processing tools and infrastructure in different research communities. Clinicians, for example, hold a lot of unused data and could benefit from these tools, but they are often not aware that such tools exist.