Has everyone already read this article that came out in PNAS last month? "US studies may overestimate effect sizes in softer research" by Daniele Fanelli and John Ioannidis.
Fanelli & Ioannidis gathered a bunch of articles (1,174) that had been analyzed in meta-analyses. Each article fell into the category of Genetics & Heredity or Psychiatry. These categories were stand-ins for non-behavioral vs. behavioral research, and the authors compared the effect sizes reported in individual articles to the overall effect sizes estimated in the meta-analyses.
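To make the comparison concrete, here's a minimal sketch of the general idea (not the authors' exact method, and all numbers are hypothetical): pool each meta-analysis into a summary effect, then flag individual studies whose reported effect sits many standard errors away from it.

```python
# Illustrative sketch only -- effect sizes and standard errors below are
# made up, and this is a generic inverse-variance approach, not the
# specific analysis from the PNAS paper.

def pooled_effect(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate of a set of studies."""
    weights = [1 / se**2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def is_extreme(effect, se, summary, z_cutoff=1.96):
    """Flag a study whose effect differs from the pooled summary
    by more than z_cutoff of its own standard errors."""
    return abs(effect - summary) / se > z_cutoff

effects = [0.10, 0.15, 0.80, 0.12]   # hypothetical study effect sizes
ses     = [0.05, 0.06, 0.10, 0.05]   # hypothetical standard errors

summary = pooled_effect(effects, ses)
flags = [is_extreme(e, s, summary) for e, s in zip(effects, ses)]
# the outlying third study gets flagged; the others do not
```

The question the paper then asks is whether such "extreme" studies cluster in behavioral fields and among US corresponding authors.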
There were two major findings. First, studies that analyzed behavioral outcomes were more likely to report extreme effects than those that analyzed genetic outcomes. Second, if the corresponding author came from the US, articles were more likely to report findings consistent with their original hypothesis in behavioral but not genetic studies. In other words, they were more likely to find what they wanted to find.
Their interpretation is that the crazy level of publish-or-perish present in the US promotes this kind of thing; preferentially reporting results that fit your hypotheses may be easier in behavioral research, where there is greater variety in methods, replication is harder, and noise tends to be greater. According to the authors, all of these factors give behavioral researchers more “degrees of freedom” to find the results they were expecting.
As anyone who has ever worked in genetics can tell you, the tendency towards publishing only positive findings definitely isn’t limited to behavioral research. It has been reported that genetic studies are subject to the same bias, especially in the US, although Fanelli and Ioannidis didn’t replicate this finding in this (larger) study. And apparently the tendency to publish only positive results has been getting worse over time, presumably increasing as pressure has been mounting in academia.
So what is the answer? Publishing negative results is important, but it’s also a thankless job. First, it’s hard to find a good home for null results. Now that there are journals like PLoS ONE that will publish any scientifically-sound study, regardless of its excitement level, things are getting a little easier… but it will probably never be the case that a null results study is going to be your ticket to tenure or fame. Because of this, even if a null study is well-designed and a scientist knows he/she can publish it somewhere eventually, it’s natural to allocate scarce resources to more exciting/higher payoff studies. This is bad for science, but it’s a natural consequence of the way we’ve set things up.
Fanelli & Ioannidis suggest that this bias may become more common in the rest of the world if other countries follow our model, which is a scary prospect. We need to figure this out!