Abstract
Social robots may become an innovative means to improve the well-being of individuals. Earlier research has shown that people readily self-disclose to a social robot, even in cases where this was unintended by the designers. We report on an experiment comparing self-disclosure in a written diary with self-disclosure to a social robot after negative mood induction. An off-the-shelf robot was complemented with our in-house developed AI chatbot, which could talk about 'hot topics' after being trained on thousands of entries from a complaint website. We found that people who felt strongly negative after being exposed to shocking video footage benefited the most from talking to our robot, rather than from writing down their feelings. For people less affected by the treatment, a confidential robot chat and writing a journal page did not differ significantly. We discuss emotion theory in relation to robotics and possibilities for an application in design (the emoji-enriched 'talking stress ball'). We also underline the importance of otherwise disregarded outliers in data sets of a therapeutic nature.
| Original language | English |
| --- | --- |
| Article number | 98 |
| Journal | Robotics |
| Volume | 10 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Aug 2021 |
Keywords
- Diary
- Emotion theory
- Relevance
- Self-disclosure
- Social robots
- Valence
ASJC Scopus subject areas
- Mechanical Engineering
- Control and Optimization
- Artificial Intelligence