February 11, 2021

Why patients lie to doctors and tell chatbots the truth

Doctors depend on their patients to tell the truth, but research shows many don’t. Could discussing ailments with a chatbot be the answer?

Any allergies? Dull or stinging pain? How much alcohol do you drink? Medical professionals base decisions on patients’ answers to questions like these. But what if they’re not telling the truth?

Between 60 and 80 per cent of patients have either lied to their doctors or withheld information, according to a 2018 US study of over 4,500 people. Most said they didn’t want to be lectured or feel judged. Some felt embarrassed about their unhealthy behaviours. Others just wanted to be liked.

Could technology help overcome this ‘truth deficit’? There’s increasing evidence that some patients find it easier to open up to a non-human.

In a study at the University of Southern California, researchers tested whether people with post-traumatic stress disorder would disclose more personal information when they were told they were talking to a chatbot rather than a doctor. In nearly every case they did – and they also reported feeling less judged. “Not only were participants more willing to share personal details when they thought their interviewer was ‘just a computer’,” reported researcher Gale Lucas, “they also reported being more comfortable in the ‘robot’ interview.”

AI… but all too human?

The medical sector has been improving the way it collects and collates patient information for some time, much of it in the form of hard data.

You can’t argue with information supplied by wearables like a Fitbit or smart ring, or forwarded from a home monitoring system. Online chatbots, by contrast, do give patients an opportunity to bend the truth – but they also create a sense of detachment that can make it less embarrassing to share deeply personal information, which arguably makes honesty more likely.

And now more sophisticated human-machine interfaces are taking this kind of engagement to a new level. So far, the most common is a conversational chatbot, an application powered by artificial intelligence (AI) that interacts using natural language.

For Vishaal Kishore, Professor of Innovation & Public Policy at RMIT University in Melbourne, the space is not free from contradiction. “The irony is that researchers are striving to make this kind of technology more human-like despite the fact that – at least at times – people seem to find non-humans less intimidating,” he says.

In a study at Pennsylvania State University, participants had mixed reactions to a ‘humanised’ chatbot – while most appreciated its sympathetic or empathetic responses, others found them too personal or even creepy.

Even more revealing was a study from the University of Seoul in South Korea, where researchers asked groups of teenagers how they thought chatbots might do a better job of meeting their emotional needs than real people. The teens expected chatbots to be good listeners due to their lack of emotion, to keep secrets by being separated from the human world, and to give advice based on analysing a lot of data. So the qualities that appealed most were the very opposite of human.

“So far, we’ve seen conversational bots and apps in healthcare being used largely to support mental health,” Kishore says. “Many people find it hard to talk about things like anxiety or depression, so you can see why anonymity and lack of judgement could be particularly helpful here.”

Kishore adds that while there are some well-designed apps to help people through a difficult patch, it can be hard to imagine how chatbots could completely displace the human dynamic – people caring for people – particularly in respect of complex healthcare contexts and conditions.

“It’s extremely important for us to use innovation to support people’s wellbeing, but we must remember that, for the foreseeable future, technology will remain a – very important – tool,” he says. “That means we also need to take a close look at how our health systems either help or hinder making patients comfortable to tell the truth.”

Triage by technology

Another area where chatbots may expand their role is the hospital triage process. Already, a clinic in the US is collaborating with a robotics firm to implement an AI platform that will help staff assess patients’ needs and direct them to appropriate care. The platform combines AI with data from electronic health records and an intuitive questionnaire to handle clinical intake both for patients visiting emergency departments and for patients at home.

Similar technology could be used to gather and organise patients’ information before they visit a GP.

“At the same time, we don’t yet understand – practically – how patient dynamics of truth-telling and anonymity will interact with these sorts of uses,” Kishore says. “As we’ve seen, anonymity can make it easier to tell the truth, but will that change if patients know that their clinician will hear what’s been said? We need more research before we can answer that question.”

A holistic future

However the technology evolves, the global market for healthcare chatbots is set to grow. A report published by Data Bridge Market Research predicts the market will grow from an estimated US$122 million in 2018 to an estimated US$542.3 million by 2026.

There is every reason to believe Australia will be part of this growth. “In general, Australians embrace innovation and technology, and I’m sure our health business owners are already thinking about how chatbots and other applications of AI can be integrated into their businesses,” says Kate Galvin, Executive at NAB Health.

Certainly the government is on board. For example, the South Australian health department recently introduced a virtual agent to help cope with a surge in queries about COVID-19. “The government is also supporting AI by investing $19 million in technologies designed to improve everything from prevention and diagnosis to treatment,” Galvin points out.

Kishore agrees that there’s a great future out there – as long as we’re bold as well as thorough in our approach. “That includes adopting a very integrated way of thinking about these technologies, with technologists, clinicians and researchers all working together,” he says.

The technology must also become more inclusive, he adds. “In general, developers and the technology markets haven’t always excelled in taking into account cultural sensitivities or particular vulnerabilities. It’s important that the blind spots and biases inscribed in our technologies are highlighted and overcome, so that a broader set of needs can be appropriately met.”