Self-disclosure to a robot: Only for those who suffer the most

Yunfei Duan, Myung Yoon, Zhixuan Liang, Johan Ferdinand Hoorn

Research output: Journal article publication › Journal article › Academic research › peer-review

9 Citations (Scopus)


Social robots may become an innovative means to improve the well-being of individuals. Earlier research has shown that people readily self-disclose to a social robot, even in cases where the designers did not intend it. We report on an experiment comparing self-disclosure in a diary with self-disclosure to a social robot after negative mood induction. An off-the-shelf robot was complemented with our in-house developed AI chatbot, which could talk about 'hot topics' after being trained on thousands of entries from a complaint website. We found that people who felt strongly negative after being exposed to shocking video footage benefited the most from talking to our robot rather than writing down their feelings. For people less affected by the treatment, a confidential robot chat and writing a journal page did not differ significantly. We discuss emotion theory in relation to robotics and possibilities for an application in design (the emoji-enriched 'talking stress ball'). We also underline the importance of otherwise disregarded outliers in a data set of a therapeutic nature.

Original language: English
Article number: 98
Issue number: 3
Publication status: Published - Aug 2021


Keywords

  • Diary
  • Emotion theory
  • Relevance
  • Self-disclosure
  • Social robots
  • Valence

ASJC Scopus subject areas

  • Mechanical Engineering
  • Control and Optimization
  • Artificial Intelligence


