The development of summarization research has been significantly hampered by the cost of acquiring reference summaries. This chapter proposes an effective way to automatically collect large-scale news-related multi-document summaries by drawing on social media reactions. We utilize two types of social labels in tweets: hashtags and hyperlinks. Hashtags are used to cluster documents into topic sets, and a tweet containing a hyperlink often highlights key points of the linked document. From each linked document cluster we synthesize a reference summary that covers most of these key points. To this end, we adopt the ROUGE metrics to measure the coverage ratio and develop an Integer Linear Programming (ILP) solution to find the sentence set that reaches the ROUGE upper bound. Since summary sentences may be selected from both documents and high-quality tweets, the generated reference summaries can be abstractive. Manual judgment verifies both the informativeness and readability of the collected summaries. In addition, we train a Support Vector Regression summarizer on the DUC generic multi-document summarization benchmarks; with the collected data as an extra training resource, the summarizer's performance improves substantially on all test sets. We release this dataset for further research.
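The core oracle-extraction idea in the abstract — choosing the sentence subset that maximizes ROUGE coverage of the linked tweets — can be sketched as follows. This is an illustrative simplification, not the chapter's implementation: it uses exhaustive search over small instances in place of a real ILP solver, a bare ROUGE-1 recall in place of the full ROUGE metrics, and invented example data.

```python
from collections import Counter
from itertools import combinations

def rouge1_recall(candidate: str, reference: str) -> float:
    """Clipped unigram overlap divided by reference length (ROUGE-1 recall)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], c) for w, c in ref.items())
    return overlap / max(sum(ref.values()), 1)

def oracle_summary(sentences, tweets, max_sents=2):
    """Brute-force stand-in for the ILP: return the sentence subset
    (up to max_sents sentences) with the highest ROUGE-1 recall
    against the concatenated tweets."""
    reference = " ".join(tweets)
    best, best_score = (), -1.0
    for k in range(1, max_sents + 1):
        for combo in combinations(sentences, k):
            score = rouge1_recall(" ".join(combo), reference)
            if score > best_score:
                best, best_score = combo, score
    return list(best), best_score

# Toy example (hypothetical data):
sentences = [
    "the court ruled on the privacy case",
    "markets rallied after the decision",
    "fans celebrated the team win",
]
tweets = ["privacy case ruling", "court decision sparks rally"]
chosen, score = oracle_summary(sentences, tweets, max_sents=2)
```

A real instance would replace the exhaustive loop with binary selection variables and a length constraint handed to an ILP solver, which scales to full document clusters.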
Title of host publication: Social Media Content Analysis
Subtitle of host publication: Natural Language Processing and Beyond
Publisher: World Scientific Publishing Co. Pte. Ltd.
Number of pages: 17
Publication status: Published - 1 Jan 2017
ASJC Scopus subject areas: Computer Science (all)