selective reporting

Lauren Davis

Are you tweaking your experiments? Australian researchers have reported that some scientists are unknowingly tweaking experiments and analysis methods in order to increase their chances of obtaining easily publishable results. Their study has been published in the journal PLOS Biology, no doubt making some readers wonder if it too has been altered for publication!
The study examined a type of bias called p-hacking, which occurs when “researchers try out several statistical analyses and/or data eligibility specifications and then selectively report those that produce significant results”, according to the authors. While such actions may be conscious or unconscious on the part of the researcher, the end result is the same - data is analysed multiple times, or in multiple ways, until a desired result is reached.
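To see why this inflates false positives, consider a toy simulation (a sketch for illustration only - the group sizes, tests and “eligibility” tweaks below are invented, not taken from the study). Two groups of pure noise are compared: an honest analyst runs one pre-specified test, while a p-hacker tries several analyses and reports the best one.

```python
# Illustrative sketch only - not the study's methodology. It simulates
# p-hacking on pure noise: two groups with no real difference are
# compared several ways, and only the best-looking p-value is kept.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
alpha = 0.05
honest_hits = 0
hacked_hits = 0

for _ in range(n_experiments):
    a = rng.normal(size=30)  # "treatment" group - no true effect exists
    b = rng.normal(size=30)  # "control" group
    # Honest analysis: one pre-specified test.
    honest_p = stats.ttest_ind(a, b).pvalue
    # Hacked analysis: try several tests/eligibility rules, keep the smallest p.
    candidates = [
        stats.ttest_ind(a, b).pvalue,
        stats.mannwhitneyu(a, b).pvalue,            # swap in a different test
        stats.ttest_ind(a[a > a.min()], b).pvalue,  # drop an "outlier"
        stats.ttest_ind(a[:20], b[:20]).pvalue,     # peek before finishing
    ]
    honest_hits += honest_p < alpha
    hacked_hits += min(candidates) < alpha

print(f"honest false-positive rate: {honest_hits / n_experiments:.3f}")  # near 0.05
print(f"hacked false-positive rate: {hacked_hits / n_experiments:.3f}")  # well above 0.05
```

Run it and the honest false-positive rate hovers near the nominal 5%, while the hacked rate is several times higher - the “desired result” has been reached by search, not by evidence.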
The study used text mining to extract p-values - a number indicating how likely it is that a result at least as extreme would occur by chance alone - from more than 100,000 research papers in the PubMed database, spanning many scientific disciplines, including medicine, biology and psychology. According to lead author Dr Megan Head, from the ANU Research School of Biology, the researchers “found evidence that p-hacking is happening throughout the life sciences”.

“They might look at their results before an experiment is finished or explore their data with lots of different statistical methods.

“This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” Dr Head said.
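The extraction step can be pictured in a few lines of Python (a minimal sketch - the sample abstract and the regular expression are invented, and the study’s actual pipeline across 100,000-plus papers was considerably more involved):

```python
# A minimal sketch of the text-mining step, assuming p-values appear in
# the common "p = 0.03" / "P < 0.05" style; the sample abstract and the
# regular expression are invented for illustration.
import re

ABSTRACT = ("Treatment improved survival (p = 0.049) but not body mass "
            "(p = 0.38); a secondary analysis was significant (P < 0.05).")

# Capture the comparison operator and the numeric value of each p-value.
P_VALUE_RE = re.compile(r"\bp\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)

for operator, value in P_VALUE_RE.findall(ABSTRACT):
    print(operator, float(value))
# prints:
# = 0.049
# = 0.38
# < 0.05
```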
Dr Head suggested that “pressure to publish” may be driving this bias, noting along with her co-authors that “there is good evidence that journals, especially prestigious ones with higher impact factors, disproportionately publish statistically significant results”. There is thus an incentive for researchers to selectively pursue and attempt to publish such results, and indeed the study found a high number of p-values sitting only just below 0.05, the traditional threshold at which most scientists call a result statistically significant.

“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting.”
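That pile-up just below 0.05 is the tell-tale signature. One simple way to look for it is to compare two narrow bins beneath the threshold (a toy check with made-up numbers, not the study’s exact analysis): genuine effects should produce fewer p-values as 0.05 is approached from below, so a surplus in the top bin is a red flag.

```python
# A toy version of the check, assuming we already hold exactly reported
# p-values mined from many papers (the numbers below are invented; the
# study's statistical treatment of its huge sample was more careful).
from scipy import stats

mined = [0.012, 0.031, 0.038, 0.041, 0.043, 0.046, 0.047, 0.048, 0.049, 0.049]

lower_bin = sum(0.040 < p <= 0.045 for p in mined)  # expected to be fuller
upper_bin = sum(0.045 < p < 0.050 for p in mined)   # a surplus here is suspicious

# Under genuine effects the upper bin should not outnumber the lower one;
# a one-sided binomial test asks how surprising the observed split is.
result = stats.binomtest(upper_bin, n=lower_bin + upper_bin, p=0.5,
                         alternative="greater")
print(lower_bin, upper_bin, f"{result.pvalue:.3f}")
```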
The authors acknowledge that p-hacking is a serious issue, stating that the “publication of false positives hinders scientific progress”. Many scientists may be uninterested in replicating previous (supposedly unbiased) studies, while others may pursue fruitless research programs based entirely on their results.

Even when scientists review evidence by combining the results from multiple studies - a method called meta-analysis - the procedure will be compromised if the studies being synthesised “do not reflect the true distribution of effect sizes”, according to the authors. They do concede, however, that p-hacking “probably does not drastically alter scientific consensuses drawn from meta-analyses”.
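For intuition, here is the textbook fixed-effect version of that combining step, in which each study’s effect estimate is weighted by its precision (invented numbers; not necessarily the procedure the authors had in mind):

```python
# A textbook fixed-effect meta-analysis sketch using inverse-variance
# weighting; the effect sizes and variances below are invented.
effects   = [0.30, 0.10, 0.45, 0.20]  # per-study effect estimates (y_i)
variances = [0.02, 0.05, 0.04, 0.01]  # per-study variances (v_i)

weights = [1.0 / v for v in variances]  # w_i = 1 / v_i
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# If the inputs over-represent significant, p-hacked results, the pooled
# estimate simply inherits that bias - garbage in, garbage out.
```

Because the pooled estimate is just a weighted average of whatever studies go in, a literature skewed towards p-hacked positives skews the average with it.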
The authors have made a series of recommendations to prevent p-hacking from occurring. They suggest researchers adhere to common analysis standards (performed blind wherever possible) and place greater emphasis on the quality of research methods rather than the significance of the findings. Journals, meanwhile, are encouraged to provide clear and detailed guidelines for the full reporting of data analyses and results.

Dr Megan Head in her evolutionary biology lab at the ANU Research School of Biology. Image credit: Regina Vega-Trejo.