Abstract
This study explores how sentiment analysis, a natural language processing technique, can help assess the accuracy of interpreting learners’ renditions. The data were drawn from a corpus of 22 interpreting learners’ performances over an 11-week training period, with comparable professional interpreters’ performances used as a reference. The sentiment scores of the learners’ output were calculated using two lexicon-based sentiment tools and compared to the reference. The results revealed the learners’ limited ability to convey the speaker’s sentiment, mainly because they omitted or distorted key sentiment words and their intensity. Additionally, statistically significant correlations were found between the learner-reference sentiment gap of a given rendition and its accuracy level as perceived by human raters, though these correlations were only moderate. This suggests that the predictive power of sentiment analysis as a standalone indicator of accuracy is limited. Overall, the findings have practical implications for the design of automated interpreting quality assessment tools and for interpreter training.
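The core quantities in the study are a lexicon-based sentiment score per rendition and the learner-reference gap correlated with rated accuracy. The sketch below illustrates the general idea only: the toy lexicon, function names, and example sentences are hypothetical, not the study's actual tools (which were two established lexicon-based analyzers) or data.

```python
# Toy valence lexicon (hypothetical values for illustration only).
TOY_LEXICON = {
    "excellent": 0.9, "good": 0.5,
    "bad": -0.5, "terrible": -0.9, "crisis": -0.7,
}

def sentiment_score(text: str) -> float:
    """Mean valence of lexicon words found in the text; 0.0 if none match."""
    tokens = text.lower().split()
    hits = [TOY_LEXICON[t] for t in tokens if t in TOY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def sentiment_gap(learner: str, reference: str) -> float:
    """Absolute learner-reference sentiment gap: the quantity the study
    correlates with human-rated accuracy."""
    return abs(sentiment_score(learner) - sentiment_score(reference))

# A learner rendition that weakens the speaker's negative intensity
# (the omission/distortion pattern the study describes) yields a gap > 0.
reference = "the sector faces a terrible crisis"
learner = "the sector faces a bad situation"
print(round(sentiment_gap(learner, reference), 2))  # gap of 0.3 on this toy lexicon
```

Real lexicon-based tools add refinements such as negation handling and intensity modifiers, but the gap computation follows this shape.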
| Original language | English |
|---|---|
| Article number | amaf026 |
| Journal | Applied Linguistics |
| DOIs | |
| Publication status | Published - 3 May 2025 |