By Louise Ratcliffe BSc (hons), Information Officer, Parent to Parent
In this age of instant information, when anyone can self-publish, it is crucial that we check our facts and avoid spreading “fake news” or junk science. But in the infinite vastness of the internet, how do we do this without wading through mountains of pages and long scholarly articles?
The website http://www.compoundchem.com/ has an infographic, A Rough Guide to Spotting Bad Science, which highlights 12 ways to spot bad information; ten of them are summarised below:
1. Sensationalised headlines
Also known as “clickbait”, these headlines are designed to catch your attention and are often accompanied by a picture that draws you in but ultimately has nothing to do with the subject at all.
2. Misinterpreted results
Articles online are often skewed towards the more exciting aspect of a story, or towards confirming the writer’s biases. See if you can find the original information the article is based on, in order to form your own conclusions about the raw data.
3. Conflicts of interest
Check who is writing this information. Do they have a financial interest in this product or service? Who is funding the research? Make sure you know whether the writer is endorsing something because they genuinely believe in it or because they are financially motivated.
4. Correlation and causation
You can often find correlation (or connection) between two seemingly unrelated sets of data. This does not mean they are actually connected or that one causes the other to occur. A great site to illustrate the ridiculous nature of correlation is http://www.tylervigen.com/spurious-correlations which has gloriously silly correlations such as “per capita cheese consumption correlates with number of people who died getting tangled in their bedsheets”. Other examples may appear much more realistic, so when something mentions correlation or causation, interpret with caution.
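The cheese-and-bedsheets example above can be sketched in a few lines of Python. The yearly figures below are invented for illustration: two series that both simply rise over time come out very strongly correlated, even though neither has anything to do with the other.

```python
# Two invented yearly series that both happen to rise over time:
# neither causes the other, yet their correlation is very high.
cheese_kg_per_person = [9.3, 9.7, 9.7, 9.7, 9.9, 10.2, 10.5, 11.0, 10.6, 10.6]
bedsheet_deaths      = [327, 456, 509, 497, 596, 573, 661, 741, 809, 717]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(cheese_kg_per_person, bedsheet_deaths)
print(f"correlation: {r:.2f}")  # a value close to 1, despite no causal link
```

The only thing the two series share is an upward drift over time, which is exactly why “X correlates with Y” on its own proves nothing about X causing Y.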
5. Unsupported conclusions
People love to speculate and draw conclusions without substance, often by taking something someone else said in passing and turning it into a “fact”. This “fact” then spreads through word of mouth until we have half my children’s friends believing that if you unscrew your belly-button, your bottom will fall off.
6. Problems with sample size
It really ought to be common sense that if you hear information about one person, you would not assume the same information applied to all people, but we do like to generalise. Check whether the writer or researcher has looked at a large number of samples to gather their information, or whether they are just talking about the five people in their street they happened to speak to.
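To see why sample size matters, here is a small Python sketch. The survey and its 50% “true” yes rate are invented for illustration: tiny samples swing wildly from one survey to the next, while large samples drawn from the same population all land close to the truth.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

TRUE_RATE = 0.5  # the real proportion of "yes" answers in the whole population

def survey(n):
    """Ask n randomly chosen people; return the proportion who said yes."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Five tiny surveys of 5 people each can give wildly different answers...
small = [survey(5) for _ in range(5)]
# ...while five surveys of 5000 people each all land close to the truth.
large = [survey(5000) for _ in range(5)]

print("5-person surveys:   ", small)
print("5000-person surveys:", [round(p, 2) for p in large])
```

A claim based on a handful of people is like one of the 5-person surveys: it might say almost anything.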
7. Unrepresentative samples used
Is the writer living in a bubble? Have they only looked at the immediate world around them for answers, or have they delved deeper to see whether what they think is true holds for more than just them and their immediate neighbours? In order to be truly representative, the sample taken must be of the same makeup as the population it is intended to draw conclusions about.
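The “bubble” problem can be sketched in Python too, using an invented population: 30% live in “City A”, where 80% would answer yes to some question, while the rest say yes only 30% of the time. Polling only City A gives a badly skewed answer; drawing at random from everyone does not.

```python
import random

random.seed(2)  # fixed seed so the illustration is repeatable

# Invented population: 3000 people in City A (80% say yes),
# 7000 people elsewhere (30% say yes).
population = [("A", random.random() < 0.8) for _ in range(3000)] + \
             [("B", random.random() < 0.3) for _ in range(7000)]

true_rate = sum(yes for _, yes in population) / len(population)

# Biased sample: only ask people from City A (the writer's bubble).
bubble = [yes for grp, yes in population if grp == "A"][:500]
# Representative sample: draw 500 people at random from everyone.
fair = [yes for _, yes in random.sample(population, 500)]

print(f"true rate:          {true_rate:.2f}")                 # around 0.45
print(f"bubble sample says: {sum(bubble) / len(bubble):.2f}")  # around 0.80, way off
print(f"fair sample says:   {sum(fair) / len(fair):.2f}")      # close to the truth
```

Both samples contain 500 people, so this is not a sample-size problem: the bubble sample is large but unrepresentative, and no amount of extra City A residents would fix it.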
10. Selective reporting of data
This is very common when someone wishes to support their own biases with their “research”. They will pick out bits of information which back up what they think and conveniently miss out anything that contradicts it. Read other articles on the same information; make sure that you have all sides of the subject covered before drawing conclusions.
11. Unreplicable results
Basically, if you can’t get the same results from the same experiment or research, you can be pretty sure the original information is dodgy. See if there are more articles or publications on the same subject to cross-reference with.
12. Non-peer reviewed material
Pretty much anyone can set up a blog online and start spouting nonsense. Look for information that’s been cited by other people, and check that the sources are reputable rather than “www.iknowwhatimtalkingabout.com”. Above all, do your research; don’t just believe the first thing you read!