Abstract
This paper examines the grading accuracy of a revised rubric for a problem-solution essay assessment in an Academic English course in Hong Kong, a revision necessitated by the emergence of GenAI. The research adopts a mixed-methods approach, integrating quantitative and qualitative data. The quantitative data consist of students' grades under the original and the revised rubrics, providing an objective comparison of grading accuracy between the two. The qualitative data are gathered through teacher feedback, offering insights into the perceived efficacy and equity of the updated rubric. The study further includes an in-depth analysis of the specific alterations made to the rubric in response to GenAI and of how these changes have influenced the grading process. This research aims to deepen language practitioners' understanding of the impact of technological advancements, such as GenAI, on language teaching and assessment. The findings offer practical guidance for educators, curriculum developers, and policy makers in adapting assessment rubrics to a changing educational landscape. In doing so, the research underscores the importance of ongoing pedagogical adaptation in the face of emerging technologies.
| Original language | English |
|---|---|
| Publication status | Not published / presented only - 26 Apr 2024 |
Keywords
- GenAI
- rubrics
- academic writing