TY - JOUR
T1 - Multimodal alignment in telecollaboration
T2 - A methodological exploration
AU - Cappellini, Marco
AU - Holt, Benjamin
AU - Hsu, Yu Yin
N1 - Funding Information:
The authors would like to thank Bryan Smith and Audrey Chery from Arizona State University for their help in data collection in the AMU-ASU teletandem project, as well as Brigitte Bigi for her assistance in using SPPAS. This work was supported by the French Agence Nationale de la Recherche [ANR-18-CE28-0011-01] and by a grant provided by the Research Grants Council of Hong Kong [A-PB1C] and the Department of Chinese and Bilingual Studies at the Hong Kong Polytechnic University.
Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/11
Y1 - 2022/11
N2 - This paper presents an analysis of interactive alignment (Pickering & Garrod, 2004) from a multimodal perspective (Guichon & Tellier, 2017) in two telecollaborative settings. We propose a framework to analyze alignment during desktop videoconferencing in its multimodality, including lexical and structural alignment (Michel & Cappellini, 2019) as well as multimodal alignment involving facial expressions. We analyze two datasets from two different models of telecollaboration. The first is based on the Français en (première) ligne model (Develotte et al., 2007), which puts future foreign language teachers in contact with learners of that language. The second is based on the teletandem model (Telles, 2009), in which students with two different mother tongues interact to help each other use and learn the other's language. The paper makes explicit a semi-automatic procedure for studying alignment multimodally. We tested our method on a dataset composed of two one-hour sessions. Results show that in desktop videoconferencing-based telecollaboration, facial expression alignment is a pivotal component of multimodal alignment.
AB - This paper presents an analysis of interactive alignment (Pickering & Garrod, 2004) from a multimodal perspective (Guichon & Tellier, 2017) in two telecollaborative settings. We propose a framework to analyze alignment during desktop videoconferencing in its multimodality, including lexical and structural alignment (Michel & Cappellini, 2019) as well as multimodal alignment involving facial expressions. We analyze two datasets from two different models of telecollaboration. The first is based on the Français en (première) ligne model (Develotte et al., 2007), which puts future foreign language teachers in contact with learners of that language. The second is based on the teletandem model (Telles, 2009), in which students with two different mother tongues interact to help each other use and learn the other's language. The paper makes explicit a semi-automatic procedure for studying alignment multimodally. We tested our method on a dataset composed of two one-hour sessions. Results show that in desktop videoconferencing-based telecollaboration, facial expression alignment is a pivotal component of multimodal alignment.
KW - Facial expression alignment
KW - Interactive alignment
KW - Lexical alignment
KW - Structural alignment
KW - Telecollaboration
KW - Virtual exchange
UR - http://www.scopus.com/inward/record.url?scp=85139834884&partnerID=8YFLogxK
U2 - 10.1016/j.system.2022.102931
DO - 10.1016/j.system.2022.102931
M3 - Journal article
SN - 0346-251X
VL - 110
JO - System
JF - System
M1 - 102931
ER -