TY - JOUR
T1 - Model-agnostic counterfactual reasoning for identifying and mitigating answer bias in knowledge tracing
AU - Cui, Chaoran
AU - Ma, Hebo
AU - Dong, Xiaolin
AU - Zhang, Chen
AU - Zhang, Chunyun
AU - Yao, Yumo
AU - Chen, Meng
AU - Ma, Yuling
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/10
Y1 - 2024/10
N2 - Knowledge tracing (KT) aims to monitor students’ evolving knowledge states through their learning interactions with concept-related questions, and can be indirectly evaluated by predicting how students will perform on future questions. In this paper, we observe that there is a common phenomenon of answer bias, i.e., a highly unbalanced distribution of correct and incorrect answers for each question. Existing models tend to memorize the answer bias as a shortcut for achieving high prediction performance in KT, thereby failing to fully understand students’ knowledge states. To address this issue, we approach the KT task from a causality perspective. A causal graph of KT is first established, from which we identify that the impact of answer bias lies in the direct causal effect of questions on students’ responses. A novel COunterfactual REasoning (CORE) framework for KT is further proposed, which separately captures the total causal effect and direct causal effect during training, and mitigates answer bias by subtracting the latter from the former in testing. The CORE framework is applicable to various existing KT models, and we implement it based on the prevailing DKT, DKVMN, and AKT models, respectively. Extensive experiments on three benchmark datasets demonstrate the effectiveness of CORE in making the debiased inference for KT. We have released our code at https://github.com/lucky7-code/CORE.
KW - Answer bias
KW - Counterfactual reasoning
KW - Intelligent education
KW - Knowledge tracing
UR - https://www.scopus.com/pages/publications/85197585356
U2 - 10.1016/j.neunet.2024.106495
DO - 10.1016/j.neunet.2024.106495
M3 - Journal article
C2 - 38972129
AN - SCOPUS:85197585356
SN - 0893-6080
VL - 178
JO - Neural Networks
JF - Neural Networks
M1 - 106495
ER -