TY - JOUR
T1 - SSAP: Storylines and Sentiment Aware Pre-Trained Model for Story Ending Generation
AU - Liu, Yongkang
AU - Huang, Qingbao
AU - Li, Jing
AU - Mo, Linzhang
AU - Cai, Yi
AU - Li, Qing
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grants 62076100 and 62171145, in part by collaborative research grants from the Fundamental Research Funds for the Central Universities, SCUT under Grants D2210010, D2200150, and D2201300, in part by the Science and Technology Planning Project of Guangdong Province under Grant 2020B0101100002, in part by the Open Foundation of the Key Laboratory of Big Data and Intelligent Robot, South China University of Technology, Ministry of Education, Guangxi Major Projects of Science and Technology under Grant 2020AA21077007, in part by Hong Kong Research Grants Council under Grants PolyU 1121417, PolyU 11204919, and C1031-18G, and in part by Internal Research Grant from Hong Kong Polytechnic University under Grants 1.9B0V and P0036846.
Publisher Copyright:
© 2014 IEEE.
PY - 2022/1
Y1 - 2022/1
N2 - As an interesting but under-explored task, story ending generation aims at generating an appropriate ending for an incomplete story. The challenges of the task are to deeply understand the story context, mine the storylines hidden in the story, and generate endings that are rational in both logic and sentiment. Although existing pre-trained approaches have proven effective for this task, learning to generate endings with appropriate plots and sufficient sentiment information remains a major challenge. One possible reason is that an over-reliance on external commonsense knowledge, beyond the storylines and sentiment trends hidden in the story context, can lead to generations that deviate from the main theme. To address this issue, we propose a two-stage Storylines and Sentiment Aware Pre-trained model (SSAP) for generating sentimentally relevant story endings. We apply a classifier to discriminate the sentiment of the story, and then employ a pre-trained language model, combined with storyline information, to conditionally generate sentences that match both the logic and sentiment of the story. Automatic and manual evaluations show that, without integrating external knowledge, our model produces more consistent and diverse story endings than state-of-the-art baselines.
AB - As an interesting but under-explored task, story ending generation aims at generating an appropriate ending for an incomplete story. The challenges of the task are to deeply understand the story context, mine the storylines hidden in the story, and generate endings that are rational in both logic and sentiment. Although existing pre-trained approaches have proven effective for this task, learning to generate endings with appropriate plots and sufficient sentiment information remains a major challenge. One possible reason is that an over-reliance on external commonsense knowledge, beyond the storylines and sentiment trends hidden in the story context, can lead to generations that deviate from the main theme. To address this issue, we propose a two-stage Storylines and Sentiment Aware Pre-trained model (SSAP) for generating sentimentally relevant story endings. We apply a classifier to discriminate the sentiment of the story, and then employ a pre-trained language model, combined with storyline information, to conditionally generate sentences that match both the logic and sentiment of the story. Automatic and manual evaluations show that, without integrating external knowledge, our model produces more consistent and diverse story endings than state-of-the-art baselines.
KW - pre-trained language model
KW - sentiment conditional generation
KW - Story ending generation
KW - storylines
UR - http://www.scopus.com/inward/record.url?scp=85123776100&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2022.3145320
DO - 10.1109/TASLP.2022.3145320
M3 - Journal article
AN - SCOPUS:85123776100
SN - 2329-9290
VL - 30
SP - 686
EP - 694
JO - IEEE/ACM Transactions on Audio Speech and Language Processing
JF - IEEE/ACM Transactions on Audio Speech and Language Processing
ER -