TY - GEN
T1 - Aligning Distillation for Cold-start Item Recommendation
AU - Huang, Feiran
AU - Wang, Zefan
AU - Huang, Xiao
AU - Qian, Yufeng
AU - Li, Zhetao
AU - Chen, Hao
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (No. U22A2095, 62032020, 62272200).
Publisher Copyright:
© 2023 Association for Computing Machinery.
PY - 2023/7/19
Y1 - 2023/7/19
AB - Recommending cold items in recommendation systems is a longstanding challenge due to the inherent differences between warm items, which are recommended based on user behavior, and cold items, which are recommended based on content features. To tackle this, generative models synthesize embeddings from content features, while dropout models improve robustness by randomly dropping behavioral embeddings during training. However, these models focus primarily on producing recommendations for cold items and do not effectively address the differences between warm and cold recommendations. As a result, generative models may over-recommend either warm or cold items while neglecting the other type, and dropout models may degrade warm-item recommendations. To address this, we propose the Aligning Distillation (ALDI) framework, which treats warm items as "teachers" that transfer their behavioral information to cold items, the "students". ALDI aligns the students with the teachers by comparing the differences in their recommendation characteristics, using tailored rating-distribution aligning, ranking aligning, and identification aligning losses to narrow these differences. Furthermore, ALDI incorporates a teacher-qualifying weighting structure that prevents students from learning inaccurate information from unreliable teachers. Experiments on three datasets show that our approach outperforms state-of-the-art baselines in overall, warm, and cold recommendation performance across three different recommendation backbones.
KW - aligning distillation
KW - cold-start recommendation
KW - content features
UR - http://www.scopus.com/inward/record.url?scp=85168680953&partnerID=8YFLogxK
U2 - 10.1145/3539618.3591732
DO - 10.1145/3539618.3591732
M3 - Conference article published in proceeding or book
AN - SCOPUS:85168680953
T3 - SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
SP - 1147
EP - 1157
BT - SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
PB - Association for Computing Machinery, Inc
T2 - 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023
Y2 - 23 July 2023 through 27 July 2023
ER -
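
For readers who want a concrete picture of the aligning losses named in the abstract, below is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: all function names (teacher_weight, rating_align, ranking_align, ident_align) and the exact loss forms are illustrative assumptions consistent only with the abstract's description of rating-distribution aligning, ranking aligning, identification aligning, and teacher-qualifying weighting.

    import torch
    import torch.nn.functional as F

    def teacher_weight(t_pos, t_neg):
        # Teacher-qualifying weight (assumed form): trust a warm "teacher"
        # item only to the degree it ranks its observed positive interaction
        # above a sampled negative one; detach so weights are not trained.
        return torch.sigmoid(t_pos - t_neg).detach()

    def rating_align(t_scores, s_scores, w):
        # Rating-distribution aligning (assumed as a weighted KL divergence):
        # pull the cold "student" rating distribution toward the teacher's.
        kl = (t_scores.softmax(-1)
              * (t_scores.log_softmax(-1) - s_scores.log_softmax(-1))).sum(-1)
        return (w * kl).mean()

    def ranking_align(t_pos, t_neg, s_pos, s_neg, w):
        # Ranking aligning (assumed as pairwise-order consistency): the
        # student should reproduce the teacher's ordering of positive items
        # above negative items.
        agree = (t_pos - t_neg).sign() * (s_pos - s_neg)
        return (w * F.softplus(-agree)).mean()

    def ident_align(s_warm, s_cold):
        # Identification aligning (assumed form): penalize a systematic score
        # gap between warm and cold items, so neither type is
        # over-recommended at the other's expense.
        return (s_warm.mean(-1) - s_cold.mean(-1)).abs().mean()

A training step would presumably combine these with the base recommendation loss, e.g. loss = rec_loss + a * rating_align(...) + b * ranking_align(...) + c * ident_align(...), where a, b, and c are tuning hyperparameters; the paper itself should be consulted for the actual formulation.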