TY - GEN
T1 - Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendation
AU - Wu, Xuansheng
AU - Zhou, Huachi
AU - Shi, Yucheng
AU - Yao, Wenlin
AU - Huang, Xiao
AU - Liu, Ninghao
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/5/13
Y1 - 2024/5/13
N2 - Recommendation systems help users find items that match their interests based on their previous behaviors. Personalized recommendation becomes challenging in the absence of historical user-item interactions, a practical problem faced by startups and known as system cold-start recommendation. While existing research addresses cold-start issues for either users or items, we still lack solutions for system cold-start scenarios. To tackle the problem, we propose PromptRec, a simple but effective approach based on the in-context learning of language models, which transforms the recommendation task into a sentiment analysis task on natural language text containing user and item profiles. However, this naive approach relies heavily on the strong in-context learning ability that emerges in large language models, which can incur significant latency in online recommendation. To address this challenge, we propose to enhance small language models for recommender systems with a data-centric pipeline that consists of: (1) constructing a refined corpus for model pre-training; (2) constructing a decomposed prompt template via prompt pre-training. These correspond to the development of training data and inference data, respectively. The pipeline is supported by a theoretical framework that formalizes the connection between in-context recommendation and language modeling. To evaluate our approach, we introduce a cold-start recommendation benchmark; the results demonstrate that the enhanced small language models achieve cold-start recommendation performance comparable to that of large models while requiring only 17% of the inference time. To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem. We believe our findings will provide valuable insights for future work. The benchmark and implementations are available at https://github.com/JacksonWuxs/PromptRec.
KW - cold-start recommendation
KW - data-centric ai
KW - in-context learning
KW - large language models
UR - http://www.scopus.com/inward/record.url?scp=85194065580&partnerID=8YFLogxK
DO - 10.1145/3589334.3645494
M3 - Conference article published in proceeding or book
AN - SCOPUS:85194065580
T3 - WWW 2024 - Proceedings of the ACM Web Conference
SP - 3566
EP - 3575
BT - WWW 2024 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
T2 - 33rd ACM Web Conference, WWW 2024
Y2 - 13 May 2024 through 17 May 2024
ER -