Nowadays when you do a Google search for a medical topic it's common to see a selection of scientific papers packaged under the banner "scholarly articles" among the suggested links. It's great to see this; it acts as a counterbalance to any unreliable sites a search may throw up.
But just because something is original research doesn’t mean it is of uniform quality. Even some professionals who use health information on a daily basis haven’t been trained to appraise research critically, so expecting the lay reader to weigh up the evidence objectively is a big ask.
The biggest challenge for many of us is probably the statistical information in a scholarly paper. With Benjamin Disraeli’s remark about “lies, damn lies and statistics” ringing in our ears, any remaining enthusiasm for deciphering the research numbers is likely to be dampened. In fact, this kind of scepticism is a good starting point when compiling a list of questions with which to tackle a scientific paper.
In the run-up to the recent Brexit vote, David Spiegelhalter, the Winton professor of the public understanding of risk at the University of Cambridge, spoke of his exasperation at the cavalier misuse of statistics in the debate.
Writing in the Guardian, he suggested a checklist with which to assess statistical "spin". It's something we could usefully adapt in our assessment of clinical research and the overhyped press releases that sometimes accompany publication.
The checklist includes watching out for the old chestnut of implying causation when research just shows an association between two measurements. A recent example was research showing an association between weekend death rates in UK hospitals and reduced staffing; it quickly became a false “truism” that low numbers of staff in hospitals caused higher weekend death rates.
Spiegelhalter also warned about the selective use of proportions and percentages rather than total numbers. A high percentage increase or decrease can be illusory, especially in cases where the total risk of an event is low. The benefits of a particular treatment may be much less than a large headline percentage suggests.
The good news is that experts at the Social and Public Health Sciences Unit at the University of Glasgow have launched an online tool called Understanding Health Research (understandinghealthresearch.org) to guide anybody who wants to understand a health research paper through the process of asking the right questions.
Firmly aimed at non-scientists, it guides users through reading, interpreting and evaluating health research papers. As the quality of different types of research must be assessed in different ways, the tool helps users to identify and ask the right questions appropriate to each of the major types of health study.
For example, if you are trying to understand a clinical trial of a new drug, the Understanding Health Research tool will ask “Did the trial include a control group?” alongside an explanation of control groups. It then explains what the ramifications of your answer might be for the reliability and usefulness of the research. At the end of the tool, a summary page lists all the relevant positive and negative feedback to help you come to your own conclusions.
The originators say they were motivated to create the online resource by a lack of critical appraisal tools aimed at a general audience. “We decided to develop something that could help guide people through the process of understanding a research paper, demystifying important scientific concepts such as evidence hierarchies, scientific uncertainty and the difference between correlation and causation,” they write.
While the tool has been tested with many different types of user to ensure that it strikes the right balance of accessibility and usefulness, its originators are keen to get more feedback.
Health literacy is the ability to understand, assess and use health information. Developing these skills will mean better relationships with healthcare providers and better decision-making about your health.