Medical Matters: Symptom checker sites beat Dr Google, but still get it wrong most of the time
Study tested 23 symptom checkers and found one-third of cases correctly diagnosed, says Muiris Houston
WebMD, one of many symptom checkers available on the internet designed to aid self-diagnosis, is careful to advise users it isn’t a “substitute for professional medical advice, diagnosis or treatment”. Which is just as well, according to the results of a new study published in the British Medical Journal (BMJ).
These websites and apps are more complex than simply visiting “Dr Google” and typing symptoms into a search engine. Rather, they take you through some questions before formulating a series of likely diagnoses and suggesting a course of action.
I tried out the WebMD version by telling it I had abdominal pain. It offered me two options: either work through a series of nine questions or skip directly to the next stage. Choosing the question route, I fed in as typical an acute appendicitis scenario as I could, given the questions I was asked. The result? In order of likelihood, I was told I had muscle strain, “gas” pain, a panic attack, coronary artery disease, asthma or, finally, appendicitis.
When I chose the direct route I was simply asked whether the pain was on the right or left side and whether it was tender to touch. Hey presto, choosing right-sided tenderness produced a top diagnosis of acute appendicitis.
Now, in fairness to WebMD, a single clinical scenario does not scientific research make. But the BMJ researchers put in the necessary legwork, with less than impressive results. Testing 23 different symptom checkers, they found the correct diagnosis was listed first only about one-third of the time. The online checkers also tended to recommend unnecessary medical intervention when simple bed rest and self-care would have been more appropriate.
“Across all symptom checkers the correct diagnosis was listed in the first three diagnoses in 51 per cent of standardised patient evaluations and in the first 20 diagnoses in 58 per cent of standardised patient evaluations,” the authors concluded. “Diagnostic accuracy for listing the correct diagnosis in the top three and top 20 was higher for self-care conditions than for emergency conditions and was also higher for common conditions than for uncommon conditions.” On a positive note, the symptom checkers appropriately recommended emergency care 80 per cent of the time.
The software works on the basis of a computerised algorithm, using either branching logic or Bayesian inference. Interestingly, the researchers note the symptom checkers’ success rates were fairly close to those achieved by real-time telephone advice lines. And overall they performed better than a straight googling of symptoms.
“If symptom checkers are seen as an alternative for simply entering symptoms into an online search engine such as Google, then symptom checkers are likely a superior alternative,” the authors say.
“A recent study found that when typing acute symptoms that would require urgent medical attention into search engines to identify symptom-related websites, advice to seek emergency care was present only 64 per cent of the time.”
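The Bayesian variant mentioned above can be sketched in a few lines of Python. To be clear, everything in this sketch — the conditions, the prior probabilities and the symptom likelihoods — is invented purely for illustration; real symptom checkers draw on far larger clinical datasets and more sophisticated models.

```python
# Illustrative sketch of Bayesian inference over symptoms.
# All condition names and probabilities are made up for demonstration.

# Each condition has a prior probability and P(symptom | condition).
CONDITIONS = {
    "gas pain":      {"prior": 0.40, "right_side": 0.30, "tender": 0.20},
    "muscle strain": {"prior": 0.35, "right_side": 0.20, "tender": 0.30},
    "appendicitis":  {"prior": 0.05, "right_side": 0.95, "tender": 0.90},
}

def rank_diagnoses(symptoms):
    """Rank conditions by posterior probability given the observed
    symptoms, treating symptoms as independent (naive Bayes)."""
    scores = {}
    for name, params in CONDITIONS.items():
        score = params["prior"]
        for s in symptoms:
            score *= params[s]      # multiply in each symptom likelihood
        scores[name] = score
    total = sum(scores.values())    # normalise so posteriors sum to 1
    return sorted(((n, s / total) for n, s in scores.items()),
                  key=lambda item: item[1], reverse=True)

ranking = rank_diagnoses(["right_side", "tender"])
```

With these toy numbers, right-sided tenderness lifts appendicitis to the top of the list despite its low prior — the same behaviour WebMD's "direct route" showed above. Branching logic, by contrast, would simply follow a fixed decision tree of yes/no questions.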
Curious about reproducibility, I went back to WebMD and fed it exactly the same answers to the nine questions. This time it offered the following diagnoses in order of probability: gas pain, muscle spasm, appendicitis, coeliac disease and panic attack. While at least it had jettisoned the absurd suggestions of coronary artery disease and asthma, and produced a more logical list of possibilities, the diagnostic variability for the same symptoms is a real concern. Frankly, I’d be wary of symptom checkers. muirishouston.com