Robots can help promote mental wellbeing in the workplace

Robots can be useful as mental wellbeing coaches in the workplace – but perception of their effectiveness depends in large part on what the robot looks like.

Researchers from the University of Cambridge carried out a study at a tech consultancy firm using two different robot wellbeing coaches, in which 26 employees took part in weekly robot-led wellbeing sessions for four weeks. Although the robots had identical voices, facial expressions, and scripts for the sessions, the robots' physical appearance affected how participants interacted with them.

Participants who did their wellbeing exercises with a toy-like robot said that they felt more of a connection with their 'coach' than participants who worked with a humanoid-like robot. The researchers say that perceptions of robots are shaped by popular culture, where the only limit on what robots can do is the imagination. When people encounter a robot in the real world, however, it often falls short of those expectations.

Since the toy-like robot looks simpler, participants may have had lower expectations and ended up finding the robot easier to talk to and connect with. Participants who worked with the humanoid robot found that their expectations did not match reality, since the robot was not capable of holding interactive conversations.

Despite the differences between expectations and reality, the researchers say that their study shows that robots can be a useful tool to promote mental wellbeing in the workplace. The results will be reported today (15 March) at the ACM/IEEE International Conference on Human-Robot Interaction in Stockholm.

The World Health Organization recommends that employers take action to promote and protect mental wellbeing at work, but the implementation of wellbeing practices is often limited by a lack of resources and personnel. Robots have shown some early promise for helping address this gap, but most studies on robots and wellbeing have been conducted in a laboratory setting.

"We wanted to take the robots out of the lab and study how they might be useful in the real world," said Dr Micol Spitale, the paper's first author.

The researchers collaborated with local technology company Cambridge Consultants to design and implement a workplace wellbeing programme using robots. Over the course of four weeks, employees were guided through four different wellbeing exercises by one of two robots: either the QTRobot (QT) or the Misty II robot (Misty).

The QT is a childlike humanoid robot roughly 90cm tall, while Misty is a 36cm-tall toy-like robot. Both robots have screen faces that can be programmed with different facial expressions.

"We interviewed different wellbeing coaches and then we programmed our robots to have a coach-like personality, with high openness and conscientiousness. The robots were programmed to have the same personality, the same facial expressions and the same voice, so the only difference between them was the physical robot form."

Minja Axelsson, Co-Author

Participants in the experiment were guided through different positive psychology exercises by a robot in an office meeting room. Each session started with the robot asking participants to recall a positive experience or describe something in their lives they were grateful for, and the robot would ask follow-up questions. After the sessions, participants were asked to assess the robot with a questionnaire and an interview. Participants did one session per week for four weeks, and worked with the same robot for each session.

Participants who worked with the toy-like Misty robot reported that they had a better working connection with the robot than participants who worked with the child-like QT robot. Participants also had a more positive perception of Misty overall.

"It could be that since the Misty robot is more toy-like, it matched their expectations," said Spitale. "But since QT is more humanoid, they expected it to behave like a human, which may be why participants who worked with QT were slightly underwhelmed."

"The most common response we had from participants was that their expectations of the robot didn't match with reality," said Professor Hatice Gunes from Cambridge's Department of Computer Science and Technology, who led the research. "We programmed the robots with a script, but participants were hoping there would be more interactivity. It's incredibly difficult to create a robot that's capable of natural conversation. New developments in large language models could really be beneficial in this respect."

"Our perceptions of how robots should look or behave might be holding back the uptake of robotics in areas where they can be useful," said Axelsson.

Although the robots used in the experiment are not as advanced as C-3PO or other fictional robots, participants still said they found the wellbeing exercises helpful, and that they were open to the idea of talking to a robot in future.

"The robot can serve as a physical reminder to commit to the practice of wellbeing exercises," said Gunes. "And just saying things out loud, even to a robot, can be helpful when you're trying to improve mental wellbeing."

The team is now working to make the robot coaches more responsive during the coaching practices and interactions.

The research was supported by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Hatice Gunes is a Staff Fellow of Trinity Hall, Cambridge.

Journal reference:

Spitale, M., et al. (2023). Robotic Mental Well-being Coaches for the Workplace. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/3568162.3577003
