Abstract
Recent advances in textual Aspect-Based Sentiment Analysis (ABSA) have delivered strong performance. Nevertheless, a core challenge remains: raw textual data provides only limited semantic coverage. To address this issue, researchers have explored additional augmentations, either crafting audio, text, and linguistic features from the input or leveraging user-posted images. However, the former three overlap heavily with the original data, which undermines their supplementary value, while user-posted images depend heavily on human annotation, which not only limits their application scope to a handful of text-image datasets but also propagates human errors through the entire downstream loop. In this work, we take a previously unexplored path: generating sentimental images tailored to the text. We introduce Sentimental Image Generation with Image Quality Assessment (SIGQA), a method that delivers precise, ancillary visual augmentation to strengthen textual extraction. Furthermore, SIGQA incorporates a no-reference image quality assessment that segments generated images to perform fine-grained quality evaluation, selecting the optimal image for augmentation. Extensive experiments establish new state-of-the-art results on the ACOS and en-Phone datasets, underscoring the effectiveness of our method and highlighting a promising direction for feature expansion.
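The selection step the abstract describes (segmenting each generated image, scoring the segments with a no-reference quality measure, and keeping the best candidate) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the patch size, the `local_std` sharpness proxy standing in for a learned NR-IQA score, and the mean-pooling of patch scores are all assumptions for demonstration.

```python
import numpy as np

def patch_scores(img: np.ndarray, patch: int = 8) -> np.ndarray:
    """Split a grayscale image into non-overlapping patches and score each one.

    Stand-in quality measure: local standard deviation, a crude sharpness
    proxy. A real system would use a learned no-reference IQA model here.
    """
    h, w = img.shape[:2]
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            scores.append(float(img[y:y + patch, x:x + patch].std()))
    return np.array(scores)

def select_best(images: list, patch: int = 8) -> int:
    """Return the index of the candidate with the highest mean patch score."""
    return int(np.argmax([patch_scores(im, patch).mean() for im in images]))

# Example: a flat (degenerate) candidate vs. a textured one.
rng = np.random.default_rng(0)
candidates = [np.zeros((16, 16)), rng.random((16, 16))]
best = select_best(candidates)  # the textured image wins
```

In practice the per-patch scoring lets the filter reject images that are globally plausible but locally degenerate, which a single whole-image score would miss.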
| Field | Value |
|---|---|
| Original language | English |
| Article number | 113269 |
| Journal | Pattern Recognition |
| Volume | 177 |
| Publication status | Published - 10 Feb 2026 |
Keywords
- Aspect-based sentiment analysis
- Image generation
- Image quality assessment
- Data augmentation