TY - JOUR
T1 - DesPrompt: Personality-descriptive prompt tuning for few-shot personality recognition
AU - Wen, Zhiyuan
AU - Cao, Jiannong
AU - Yang, Yu
AU - Wang, Haoli
AU - Yang, Ruosong
N1 - This work was conducted at the Research Institute for Artificial Intelligence of Things (RIAIoT) at PolyU and supported by the Hong Kong Jockey Club Charities Trust fund (No. 2021-0369) and the PolyU Research and Innovation Office (No. BD4A and CD5H).
PY - 2023/9
Y1 - 2023/9
AB - Personality recognition in text is the critical problem of classifying users' personality traits from their written content. Recent studies address this problem by fine-tuning pre-trained language models (PLMs) with additional classification heads. However, these heads are often insufficiently trained when annotated data is scarce, resulting in poor recognition performance. To this end, we propose DesPrompt, which tunes the PLM through personality-descriptive prompts for few-shot personality recognition without introducing additional parameters. DesPrompt is based on the lexical hypothesis of personality, which suggests that personality is revealed through descriptive adjectives. Specifically, DesPrompt models personality recognition as a word-filling task. The input content is first encapsulated with personality-descriptive prompts. Then, the PLM is supervised to fill in the prompts with label words describing personality traits. The label words are drawn from trait-descriptive adjectives identified in psychology findings and lexical knowledge. Finally, the label words filled in by the PLM are mapped to personality labels for recognition. Our approach aligns with the Masked Language Modeling (MLM) task used to pre-train PLMs, so it efficiently utilizes pre-trained parameters and reduces dependence on annotated data. Experiments on four public datasets show that DesPrompt outperforms conventional fine-tuning and other prompt-based methods, especially in zero-shot and few-shot settings.
KW - Personality recognition
KW - Prompt-tuning
KW - Text classification
U2 - 10.1016/j.ipm.2023.103422
DO - 10.1016/j.ipm.2023.103422
M3 - Journal article
SN - 0306-4573
VL - 60
SP - 1
EP - 17
JO - Information Processing & Management
JF - Information Processing & Management
IS - 5
ER -