
Connor Wright's dissertation explores the potential of AI chatbots in addressing chronic loneliness among young adults.

Hi all, thanks for taking the time to read this! My name is Connor, and I have recently completed the MSt in AI Ethics & Society as a member of Lucy Cavendish College.

Given that the Master's is part-time (I only needed to be in Cambridge for five days a term!), I was not immersed in college life as much as others. However, this reinforced my belief that feeling part of a community is not restricted to your physical location.

Despite most of my degree being done at my desk, with my cats doing their best to distract me, I was able to feel part of the college community. In this sense, I was alone, but not lonely. This distinction formed a core tenet of my dissertation on AI and loneliness.

To be exact, my full title was "Progress through conversation: the role of chatbots in the treatment of loneliness in chronically lonely emerging adults" (why I chose such a long title is still a mystery to me). Inspired by my interest in social AI (human-AI interaction), I wanted to apply it to social welfare.

I opted for chronic loneliness because loneliness is, by nature, neither uncommon nor detrimental; it's a normal part of being human. It is when loneliness becomes chronic that serious problems start to occur. Through my research, I then found that emerging adults were often the loneliest age bracket across both males and females (not the elderly, as you might have thought).

With this backdrop in mind, I set out to show how a loneliness-specific chatbot intervention would need to be designed to be successful. I managed to come up with the following main findings:

  • Stepping stone: such an intervention cannot replace any human help. Instead, it must be a gateway to further help. To illustrate, if an individual finding out they are chronically lonely is 0, and them receiving human-centric help (such as local community projects) is 1, a chatbot intervention would need to be 0.5.

  • Current companion apps are unsuitable for combatting loneliness: these AI companions aim to retain users who are feeling lonely, and do not prioritise helping them reconnect with others. Some users do end up reconnecting, but this is not the apps' priority.

  • Offboarding: chatbots are great at creating a safe haven for personal information disclosure; you do not feel like someone is judging you. This can create a strong bond between the user and the chatbot, meaning an effective offboarding system is important — one that allows this relationship to be closed in favour of moving on to human-centric help.

  • There are persistent problems: issues to do with privacy, attachment/social deskilling and hallucinations (when large language models (a type of AI system) present information as if it were true, when it's not) will not be solved soon. Hence, further consideration and mitigation are needed.

When it comes to the future, I think there will be a proliferation of AI companions and assistants, and a general increase in how often we interact with different forms of AI daily. As a result, I believe it is important to really consider what we want to use AI for. The value of human-human connection is immeasurable and linked to so many health benefits, meaning we should measure our use of, and interaction with, social AI accordingly so as not to jeopardise this reality.

The best way to do this is to understand the AI system you are interacting with. AI companions do have a purpose and benefits (they can be very useful for simulating hard conversations before you have them, and for children with learning difficulties, for example). However, they are run by private organisations with profit incentives to retain users, meaning they are not long-term solutions for issues such as loneliness. Not all social problems can be aided by technological solutions.

In sum, when looking to research the intersection between technology and human experience, understanding the system at hand while considering the purpose it is being used for becomes important. Above all, asking "why?" we should use AI, rather than "how?", will be a crucial step in avoiding the hype around a technology that could, ultimately, do more harm than good if not handled correctly.

I hope this has been helpful and interesting!