This line of research investigates multimodal discourse produced by speakers and signers, and how their productions compare. Despite the different modalities, both speakers and signers exploit all the semiotic resources available when they communicate, i.e., they use all their bodily articulators to convey meaning. The main interest is to explore the interplay between sign and gesture, and between speech and gesture, as well as diverse discourse phenomena (reformulation structures, discourse markers, repetitions, etc.), prosody, interactive practices, and shared gestural forms in face-to-face communication across spoken and signed languages. Only by adopting a multimodal and contrastive perspective can we shed light on the actual similarities and differences across the two modalities and understand the universal and language-specific principles that underlie human communication.
This line of research investigates how gestures are integrated with speech during language production, examining the interfaces between gesture and speech across multiple dimensions. We explore how gestures are temporally coordinated with speech in terms of prosodic and referential structure; how they contribute to pragmatic meaning by signaling stance, discourse status, or information structure; and how their representational content interacts with the propositional content of speech. This research also extends to sign languages, where manual articulators simultaneously convey propositional and prosodic information, offering valuable insights into the shared principles underlying multimodal communication.
This line of research examines how listeners integrate speech and gesture during perception and comprehension, and how meaning emerges from their interaction. Research in this area investigates how the semantic interpretation of an utterance can vary as a function of the representational content of accompanying gestures; how the timing and form of non-referential gestures can modulate listeners’ perception of prosodic prominence, thereby influencing which words are perceived as emphasized; and how these perceptual effects shape pragmatic interpretation. By investigating how visual and auditory cues jointly contribute to understanding, this research sheds light on the multimodal nature of language processing and the mechanisms that allow listeners to extract meaning from coordinated streams of information.
This line of research explores how multimodal cues, namely gestures and speech prosody, contribute to children's language acquisition, communicative development, and learning. It examines how gestures function as integral components of (early) communicative acts, scaffolding linguistic and cognitive abilities. Particular attention is given to the interplay between gesture and speech prosody, and to how this multimodal integration supports meaning construction and pragmatic competence from early childhood through later developmental stages. Research in this area investigates both spontaneous gesture-speech integration in naturalistic, semi-naturalistic, and experimental settings (including child-directed and adult-directed speech contexts) and the effects of gesture-based training or intervention programs in educational contexts. By adopting a multimodal perspective, this work deepens theoretical and empirical understanding of first language acquisition and informs educational practices that foster children's linguistic and cognitive growth.
This line of research examines the contribution of gestures to second language learning, focusing both on the gestures employed by teachers in instructional contexts and on those produced by learners as part of their developing interlanguage and communicative competence. In the domain of phonological acquisition, studies assess the potential benefits of gesture-based training and interventions in terms of comprehensibility, fluency, accentedness, and accuracy of both segmental and suprasegmental features. The gestures under investigation include spontaneous co-speech gestures—such as referential and non-referential movements—as well as pedagogically constructed gestures designed to emphasize specific aspects of speech, for instance, the prosodic contour of an utterance. Research on the role of gestures in second language learning advances our understanding of the processes underlying language acquisition and pedagogy, and informs the development of evidence-based, multimodal approaches to language teaching and learning.
This line of research examines how children with linguistic and communicative difficulties use co-speech gestures in production and comprehension, how they combine co-speech gestures with speech, and how these combinations contribute to structural and pragmatic meanings. Two populations with special needs are of particular interest: children with Developmental Language Disorder (DLD) and children with autism. These populations are interesting from a gestural point of view: on the one hand, they may use gestural signals to compensate for their deficits at the structural and pragmatic levels; on the other, the fact that gestures are often combined with speech adds complexity to the signal and can challenge multimodal integration. Children's use of (and learning from) co-speech gestures can serve as a reliable tool for language assessment and intervention, and outreach activities can raise awareness of the importance of gestural abilities among families, educators, and clinical professionals.