Grab a brew and buckle up. This one is kinda long :)
You must have seen at least one sensational nutrition headline in your time?
You know the type I mean, the ones detailing the latest food scare story, dieting secret or quick fix to a massive public health problem such as obesity.
A lot of bad science journalism is easy to spot, but for other more subtle cases, judging a science story in the news can be way harder than reviewing an actual scientific paper. For one simple reason: You don’t have all the info.
Some stories in the media are based on valid scientific research, but can mislead their audience by jumping to the wrong conclusions. This is usually because the journalist (perhaps unintentionally) has missed out some of the important details of the study when coming up with their headline.
And just like many things in life, when it comes to science, the devil is in the detail.
In its simplest form, we conduct scientific studies to find answers, e.g. we want to know if ‘A’ causes ‘B’. To do this we need to test our theory in a structured way and to make sure (as far as possible) that we produce a fair test. If we didn’t do this then we might wrongly conclude that A causes B, when in fact, the real culprit was secret answer C.
Ok. Imagine for a second we didn’t know that smoking can cause lung cancer. While going about our day to day business, we notice that people who carry a lighter (A) are more likely to develop lung cancer (B). If we studied the relationship between carrying a lighter and lung cancer without accounting for Smoking (C), we could wrongly conclude that lung cancer is caused by carrying a lighter (when obviously people who carry a lighter are more likely to be smokers and smokers are more likely to develop lung cancer).
It’s a silly example, but you get what I mean?
In this instance, smoking is known as a confounding factor – a third factor which influences both the thing you’re studying (carrying a lighter) and the outcome you’re measuring (lung cancer).
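If you like to see the numbers, the lighter example above can be sketched as a toy simulation (my own illustration, not from any real study – all the probabilities are made up). Smoking drives both lighter-carrying and cancer risk; carrying a lighter has no direct effect at all, yet the crude comparison makes it look dangerous:

```python
import random

random.seed(42)

# Toy population: smoking drives both lighter-carrying and cancer risk.
# Carrying a lighter has NO direct effect on cancer in this model.
people = []
for _ in range(100_000):
    smoker = random.random() < 0.2
    lighter = random.random() < (0.9 if smoker else 0.05)   # smokers carry lighters
    cancer = random.random() < (0.15 if smoker else 0.01)   # smokers get more cancer
    people.append((smoker, lighter, cancer))

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

# Crude comparison: lighter carriers look far more likely to get cancer.
with_l = [p for p in people if p[1]]
without_l = [p for p in people if not p[1]]
print(f"Cancer rate, lighter carriers: {cancer_rate(with_l):.3f}")
print(f"Cancer rate, non-carriers:     {cancer_rate(without_l):.3f}")

# Stratify by the confounder: among smokers, the lighter makes no difference.
smokers_l = [p for p in people if p[0] and p[1]]
smokers_no_l = [p for p in people if p[0] and not p[1]]
print(f"Smokers with a lighter:    {cancer_rate(smokers_l):.3f}")
print(f"Smokers without a lighter: {cancer_rate(smokers_no_l):.3f}")
```

Once you compare like with like (smokers against smokers), the scary association vanishes – which is exactly what ‘accounting for’ a confounder means.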
When scientists and health professionals read scientific papers, they don’t (or shouldn’t) just read the headlines or discussion. The design of the study and the statistical tests (eeek!) are important to determine how confident we are that a relationship exists (i.e. it’s not just a coincidence), the size of the effect and whether it has a clinical significance which can be applied in real life (no more stat talk, I promise!).
So, how do you know?! You’re never going to be able to fully draw conclusions about a research paper from a newspaper. However, there are a few things you can look for when reading a food/diet based science story to help you read between the lines of the sensationalistic headlines.
6 things to look for in mass media nutrition stories:
1. How many studies are they talking about?
Is it just one?
A single study is usually not enough to draw hard and fast conclusions and will rarely be considered a reason to change public health policy and advice. Think of it like a set of scales: the more evidence that builds up on one side, the more it tips in favour of a particular recommendation, and the stronger that recommendation becomes. A single study wouldn’t be sufficient to tip the scales totally in its favour. That’s not to say the study’s findings are insignificant, but when new information comes to light, it needs to be tested again and again, by lots of different people, to build up the evidence and make sure that the results are replicable and not due to chance or confounding factors.
Bottom Line: Single studies do not change public health advice. If a headline is making claims of ‘breakthrough’ findings and implying a change of diet or advice based on one study alone, it’s unlikely to be true.
2. What TYPE of study are they talking about?
Different study types (or designs) are used for different things, but generally some are considered better than others for the quality of evidence they can produce. The best quality trials are designed to avoid bias and confounding, and so their results carry more weight than others. Journalists are pretty good at stating the type of study in their news stories (even if their claims don’t quite match up with the evidence quality), so you can see for yourself where it fits.
Below is the general hierarchy, from highest to lowest quality of evidence: systematic reviews and meta-analyses, randomised controlled trials, cohort studies, case-control studies, cross-sectional studies, and finally case reports and expert opinion.
I’m not going to go into the ins and outs of study design (if you’re interested, you can read more on the NHS evidence website here).
Bottom Line: If claims are made in the headlines based on lower grade evidence, they need to be backed up by other/different types of studies to be considered fact.
3. Is it a Human Study or an Animal Study?
This one is easy. You are not a 1. Mouse, 2. Rat or 3. Monkey – physiologically speaking, anyway… I’m sure we all know one or two ;) The fact is, we can’t simply extrapolate data from animals to humans. These studies are important because they give us theories which may be worth exploring further, but they can’t be used to provide public health advice.
Bottom Line: You are not an animal. To understand how something works in humans, it must be studied in humans.
4. How big is it?
Studies look at samples of the population they are studying (they obviously can’t look at everyone). Different studies need different numbers of people to make their results count (often researchers do something called a ‘power calculation’ to work it out), but generally, the larger the number of people, the more meaning the results have. In studies with small numbers, even a change in just one or two people can have a big effect on the results and make them less representative of the population being studied.
Bottom Line: Large studies give more reliable results than small studies.
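You can see this for yourself with a quick sketch (again, my own made-up illustration): imagine a condition that truly affects 30% of the population, and run lots of pretend ‘studies’ of different sizes. The small studies swing wildly; the large ones cluster near the truth:

```python
import random

random.seed(0)

TRUE_RATE = 0.30  # assumed true prevalence in the population

def estimate(sample_size):
    # One "study": sample this many people, estimate the rate from the sample.
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

def spread(sample_size, repeats=1000):
    # Repeat the same-sized study many times; how far apart do the
    # best- and worst-case estimates land?
    estimates = [estimate(sample_size) for _ in range(repeats)]
    return max(estimates) - min(estimates)

print(f"Spread of estimates with 20 people:   {spread(20):.2f}")
print(f"Spread of estimates with 2000 people: {spread(2000):.2f}")
```

A single 20-person study can easily land anywhere from near 5% to over 50% purely by chance, which is why its headline-grabbing result may mean very little.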
5. What did they measure?
If the story is of the Food A = Condition B type, it’s worth looking at whether they actually measured condition B or a ‘marker’ of condition B.
E.g. did they measure the number of people with heart disease, OR the number of people with raised cholesterol (a marker of heart disease)?
Markers of disease (while sometimes appropriate to measure), give less powerful results as not all people with the marker will go on to develop the disease.
Bottom Line: Although it is sometimes necessary or appropriate to study disease markers, people with disease markers don’t always go on to develop the disease. Studies which measure REAL outcomes may have more meaning to public health.
6. Where is it published? OR Has it been published?
The most credible scientific research is published in peer-reviewed journals. Articles which have been peer reviewed have been subjected to the scrutiny of independent scientific experts before being approved for publication, and these articles are considered to be of a high quality. Sometimes a scientist will ‘leak’ their research results to newspapers before they have been published. These results haven’t yet been subjected to the peer review process, and although some of the work may be good, it’s impossible to know whether the research is flawed. Other scientists will also be unable to replicate it, as the methods are unknown, and it’s certainly not possible to make public health decisions on this type of ‘research’.
Bottom Line: Many brains are better than one. Research which hasn’t been published cannot be used to make public health decisions as its quality is unknown.
Do you have anything to add? Feel free to discuss in the comments!
What’s the most ridiculous nutrition news headline you have read recently?!
- Harvard Nutrition Source – Nutrition and the Mass Media
- School of Health Professions: Intro to Research
- NHS News – Glossary of Health News Terms
- Sense About Science – Peer Review
- Making Sense of Science Stories – Sense About Science
By Helen West