Chatbot Chats to Improve Health

As technology and our psychology continue to evolve, it is critical to understand how best to use chatbots in public health interventions.

When we’re dealing with a difficult problem, it often helps to talk to someone about what we’re going through. Talking to others about how we’re feeling (also known as emotional disclosure) is beneficial to both our physical and psychological health. Yet it’s not always easy to disclose to other people. We often feel worried about what other people will think or we don’t want to burden them with our problems. Sometimes opening up can even backfire when the other person doesn’t respond in ways we’d hope.

As a result, more and more people are turning to chatbots, computer programs that can hold a conversation with us, for emotional disclosure. Microsoft's Xiaoice, available on platforms such as WeChat, provides emotional support to over 100 million users worldwide, and people experiencing anxiety, depression, and post-traumatic stress disorder are increasingly turning to chatbots such as X2AI and Woebot for empathy. Chatbots are starting to replace other people as partners in conversations of emotional disclosure.

This raises an important question: how are people affected by these conversations with a chatbot? Is disclosing to a chatbot better, worse, or the same as disclosing to another person? Existing literature and theory offered no clear answer, so my colleagues Jeff Hancock, Adam Miner, and I set out to study the question.

In our study, participants were randomly assigned to talk about how they were feeling about a concern (emotional disclosure) or about their schedule for the day (factual disclosure). All participants actually spoke to a human partner, but they were randomly assigned to be told that their partner was either a human or a chatbot. We used this Wizard-of-Oz method, in which a human covertly plays the role of the computer, because chatbots are not yet advanced enough to provide good, specific responses to emotional disclosure.

We found that psychological benefits were the same for chatbot and human partners. People felt better, more optimistic, more understood, and more warmly towards the partner after emotional disclosure compared to factual disclosure. This happened to the same degree whether the partner was perceived to be a person or a chatbot. Participants also behaved similarly, disclosing just as much to a chatbot partner as a human partner.

Our results suggest that people feel similar immediate psychological benefits from emotional disclosure even when the partner is a non-human chatbot. Decades of research suggest this is because we instinctively respond to a computer partner as we would to a human one. A chatbot partner might therefore offer a viable alternative when disclosing to other people isn't feasible, as for socially isolated individuals, or when disclosing to a human partner is too financially or emotionally costly.

Does this mean that talking to chatbots will always be as beneficial as talking to other people? No. Our latest research has found important differences between the effects of chatbot and human partners depending on the type of conversation, response from the partner, and the norms of the conversation. In some conversations, chatbot partners may be more distracting and ultimately worse than human partners. In addition, our work assumes that chatbots can respond just as well as humans can, which is not possible with current technology. It’s not clear whether the technologically limited chatbots being used right now would be just as helpful.

These questions are so new that our research only scratches the surface of the issues that need to be considered about chatbot use for public health. Before chatbot interventions are designed and implemented, we need more research on the effects of disclosing to chatbots. For instance, it is unclear how populations beyond typical study participants, who are usually undergraduate students, might react to chatbots.

As technology and our psychology continue to evolve, this research will be critical to the conversation on how best to use chatbots in public health interventions. With more research and more answers to these questions, we can ensure that the public’s interactions with chatbots are the most helpful they can be.
