The extent and consequences of p-hacking in science

Megan L. Head, Luke Holman, Rob Lanfear, Andrew T. Kahn, Michael D. Jennions

Research output: Contribution to journal › Article › peer-review

413 Citations (Scopus)
54 Downloads (Pure)


A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
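One common way to test for p-hacking in a set of reported results (a p-curve-style test, sketched here as an illustration rather than the authors' exact pipeline) is to check whether significant p-values bunch up just below 0.05: under p-hacking, more p-values are expected in the upper bin (0.045, 0.05] than in the lower bin (0.04, 0.045], which can be assessed with a one-sided binomial test. The bin boundaries and the helper names below are illustrative assumptions.

```python
# Illustrative sketch of a p-curve-style test for p-hacking (assumed
# bins, not the paper's exact procedure): compare how many significant
# p-values fall just below 0.05 versus slightly further below it.
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed from the exact pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def phack_test(p_values):
    """One-sided binomial test: are p-values over-represented in
    (0.045, 0.05] relative to (0.04, 0.045]? Returns the p-value of
    the test, or None if no observations fall in either bin."""
    upper = sum(1 for p in p_values if 0.045 < p <= 0.05)
    lower = sum(1 for p in p_values if 0.04 < p <= 0.045)
    n = upper + lower
    if n == 0:
        return None
    return binom_sf(upper, n, 0.5)

# Hypothetical example: 14 of 20 p-values sit just below 0.05
pvals = [0.048] * 14 + [0.042] * 6
print(round(phack_test(pvals), 3))  # → 0.058
```

With 14 of 20 values in the upper bin, the test p-value is about 0.058: suggestive of bunching, but not decisive, which mirrors the abstract's point that p-hacking can be detectable yet weak relative to the true effects being measured.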
Original language: English
Article number: e1002106
Pages (from-to): e1002106-1-e1002106-15
Number of pages: 15
Journal: PLoS Biology
Issue number: 3
Publication status: Published - 13 Mar 2015

Bibliographical note

Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.
