Mobile-assisted pronunciation learning with feedback from peers and/or automatic speech recognition: a mixed-methods study

Yuanjun Dai, Zhiwei Wu

Research output: Journal article publication › Journal article › Academic research › peer-review

52 Citations (Scopus)

Abstract

Although social networking apps and dictation-based automatic speech recognition (ASR) are now widely available on mobile phones, relatively little is known about whether and how these technological affordances can contribute to EFL pronunciation learning. The purpose of this study is to investigate the effectiveness of feedback from peers and/or ASR in mobile-assisted pronunciation learning. Eighty-four Chinese EFL university students were assigned to one of three conditions, using WeChat (a multi-purpose mobile app) for autonomous ASR feedback (the Auto-ASR group), peer feedback (the Co-non-ASR group), or peer plus ASR feedback (the Co-ASR group). Quantitative data included a pronunciation pretest, posttest, and delayed posttest, as well as students’ perception questionnaires, while qualitative data came from student interviews. The main findings are: (a) all three groups improved their pronunciation, but the Co-non-ASR and Co-ASR groups outperformed the Auto-ASR group; (b) the three groups showed no significant differences on the perception questionnaires; and (c) the interviews revealed both shared and condition-specific technical, social/psychological, and educational affordances and concerns across the three mobile-assisted learning conditions.

Original language: English
Pages (from-to): 861-884
Number of pages: 24
Journal: Computer Assisted Language Learning
Volume: 36
Issue number: 5-6
Early online date: 26 Jul 2021
DOIs
Publication status: Published - Jul 2023

Keywords

  • Automatic speech recognition
  • mixed methods
  • mobile-assisted language learning
  • peer feedback
  • pronunciation

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Computer Science Applications
