This online talk series connects scholars across geographical and disciplinary borders to explore how gesture and multimodality shape human communication.
Regular sessions feature leading international researchers, offering insights into ongoing projects and fostering collaboration across linguistics, psychology, communication studies, and beyond.
Each session is held in English and is free to attend online; registration is required.
Upcoming Talks
Friday, May 22nd, 2026 - time TBA (GMT+2)
Dr. Laurence Meurant, University of Namur
Signed languages offer a unique perspective on linguistic assumptions, many of which come from the study of spoken languages and their written forms. What can seem obvious in spoken language, such as the traditional opposition between gesture and language, becomes far less clear when we consider languages that rely on a single visual–gestural modality rather than two. From this perspective, even the very definition of what counts as “linguistic” is open to question.
Yet despite their potential to reshape linguistic theory, signed languages have rarely been used as a starting point for theoretical reflection. Instead, they have long been approached through concepts and categories developed for spoken languages. This talk argues for reversing that perspective. It explores what our understanding of language can gain from a comparative and inclusive approach to signed and spoken languages.
How can such a comparison be carried out? What conditions are necessary to ensure the approach remains methodologically and theoretically sound? The presentation highlights two key requirements: comparable data and a theoretical framework whose core concepts are agnostic to modality. Recent work in comparative semiotics between French Belgian Sign Language (LSFB) and Belgian French will serve as a concrete case study, illustrating both the challenges and the insights of this approach.
REGISTRATION OPENING SOON!
Previous Talks
February 26th, 2026, 11 am
Gesture Use and Interactional Dynamics in L2 Speakers: A Comparative Study of Face-to-Face and Online Interactions
Prof. Renia Lopez, The Hong Kong Polytechnic University & Dr. Loulou Kosmala, Université Paris-Est Créteil
This talk explores how second language (L2) speakers use gestures differently in face-to-face versus online video-mediated interactions via Zoom. While gestures are known to facilitate understanding, turn-taking, and meaning-making in L2 communication, the shift to digital platforms may alter their visibility and function. Comparing conversations among Cantonese learners of English across both settings, our study finds that face-to-face interactions promote greater interactional cohesion, including more instances of repeated gestures between partners in co-constructed turns. In contrast, online exchanges feature fewer interactive gestures during longer individual turns, potentially reducing multimodal engagement. We will elaborate with specific examples to show how speakers repeat gestures and employ interactive gestures to co-construct meaning collaboratively. Ultimately, interactional context significantly shapes gestural communication, highlighting implications for teaching and assessment in both physical and virtual environments.
November 21st, 2025
Cross-linguistic Variation in Co-Speech Gesture Timing: Evidence from Grammatical Control
Dr. Kathryn Franich
Co-speech gestures have traditionally been treated as an ‘extralinguistic’ phenomenon, not implicated in theories of grammar (e.g., Hagoort & van Berkum, 2007). In this talk, I discuss cross-linguistic findings on the coordination of speech and co-speech gesture that reveal clear evidence of language-specific timing patterns, a hallmark of grammatical control (Pierrehumbert, 1980; Keating, 1984). Looking at data from Medʉmba, Babanki, and two varieties of English, I demonstrate that (a) gestures reveal language-specific patterns of prosodic prominence, and (b) co-speech gestures, like oral articulatory gestures, show language-specific patterns in the gestural landmarks selected for coordination (cf. Browman & Goldstein, 1986; Gafos, 2002; Shaw et al., 2021). I then propose a framework for integrating co-speech gestures into the analysis of prosodic grammar.
September 26th, 2025
Gesture, Language, and Thought
Prof. Sotaro Kita
This presentation concerns a theory of how gestures (accompanying speaking and silent thinking) are generated and how gestures facilitate the gesturer's own cognitive processes. Prof. Kita presents evidence that gestures are generated by a general-purpose Action Generator, which also generates “practical” actions such as grasping a cup to drink, and that the Action Generator produces gestural representation in close coordination with the speech production process (Kita & Özyürek, 2003, Journal of Memory and Language). He also presents evidence that gestures facilitate thinking and speaking through four functions: gesture activates, manipulates, packages, and explores spatio-motoric representations (Kita, Chu, & Alibali, 2017, Psychological Review). Further, he argues that the schematic nature of gestural representation plays a crucial role in these four functions. In sum, gesture, generated at the interface of action and language, shapes the way we think and speak.