Publication bias

In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results.[1] The study of publication bias is an important topic in metascience.

Not to be confused with Reporting bias or Media bias.

Despite similar quality of execution and design,[2] papers with statistically significant results are three times more likely to be published than those with null results.[3] This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging.[4]
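The arithmetic consequence of that 3:1 gap can be shown with a toy Monte Carlo simulation. The sketch below is illustrative only; the effect size, sample size, and acceptance probabilities are assumptions, not values from the cited studies:

```python
import random, statistics

random.seed(1)

TRUE_EFFECT = 0.2     # assumed true effect, in standard-deviation units
N_PER_STUDY = 30      # assumed sample size of each study
N_STUDIES = 10_000    # number of simulated studies

def run_study():
    """Simulate one study; return its observed mean effect and significance."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    return mean, abs(mean / se) > 1.96   # two-sided test at alpha = .05

all_means, published_means = [], []
for _ in range(N_STUDIES):
    mean, significant = run_study()
    all_means.append(mean)
    # Significant results are always submitted and published; null results
    # make it out of the file drawer only a third as often (the 3:1 gap).
    if significant or random.random() < 1 / 3:
        published_means.append(mean)

print(f"true effect:                 {TRUE_EFFECT}")
print(f"mean effect over all runs:   {statistics.fmean(all_means):.3f}")
print(f"mean effect in 'literature': {statistics.fmean(published_means):.3f}")
```

Because weak and null results are filtered out, the simulated literature overstates the true effect even though every individual study was run honestly.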


Many factors contribute to publication bias.[5][6] For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis.[7] Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results.[2] The nature of these issues and the problems they trigger have been described as the five diseases that threaten science: "significosis, an inordinate focus on statistically significant results; neophilia, an excessive appreciation for novelty; theorrhea, a mania for new theory; arigorium, a deficiency of rigor in theoretical and empirical work; and finally, disjunctivitis, a proclivity to produce many redundant, trivial, and incoherent works."[8]


Attempts to find unpublished studies often prove difficult or are unsatisfactory.[5] In an effort to combat this problem, some journals require that studies submitted for publication be pre-registered (before data collection and analysis begin) with organizations like the Center for Open Science.


Other proposed strategies to detect and control for publication bias[5] include p-curve analysis[9] and disfavoring small and non-randomized studies because of their high susceptibility to error and bias.[2]
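P-curve analysis looks only at the statistically significant p-values in a literature: a genuine effect yields a right-skewed curve with many very small p-values, while a literature of pure false positives yields a roughly flat one. A minimal simulation of that intuition, with assumed effect and sample sizes and a normal-approximation test:

```python
import random, statistics
from math import erf, sqrt

random.seed(2)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def significant_p_values(true_effect, n=30, studies=20_000):
    """Run simulated studies; keep p-values of those reaching p < .05."""
    ps = []
    for _ in range(studies):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        se = statistics.stdev(sample) / sqrt(n)
        p = two_sided_p(statistics.fmean(sample) / se)
        if p < 0.05:
            ps.append(p)
    return ps

for effect, label in [(0.0, "all null effects"), (0.5, "real effect")]:
    ps = significant_p_values(effect)
    share = sum(p < 0.025 for p in ps) / len(ps)
    print(f"{label}: {share:.0%} of significant p-values fall below .025")
```

Under the null the significant p-values are roughly uniform on (0, .05), so about half fall below .025; a real effect pushes that share well above half, which is the right skew p-curve analysis tests for.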

Definition

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested, and the significance and direction of effects detected.[10] The subject was first discussed in 1959 by statistician Theodore Sterling to refer to fields in which "successful" research is more likely to be published. As a result, "the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance".[11] In the worst case, false conclusions could become canonized as true if the publication rate of negative results is too low.[12]
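Sterling's concern can be made concrete with a short back-of-the-envelope calculation; the numbers below are illustrative assumptions, not figures from the sources cited here. Suppose a fraction $\pi$ of tested hypotheses are true effects, tests use significance level $\alpha = 0.05$, and power is $1 - \beta = 0.8$. If only significant results are published, the share of published findings that are Type I errors is

$$
\Pr(\text{false} \mid \text{published}) = \frac{\alpha (1 - \pi)}{\alpha (1 - \pi) + (1 - \beta)\pi}
= \frac{0.05 \times 0.9}{0.05 \times 0.9 + 0.8 \times 0.1} = 0.36
$$

for $\pi = 0.1$: over a third of the published record would consist of false positives, even with no misconduct by any individual researcher.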


Publication bias is sometimes called the file-drawer effect, or file-drawer problem. This term suggests that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research.[13] The term "file drawer problem" was coined by psychologist Robert Rosenthal in 1979.[14]
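Rosenthal's 1979 paper also offered a rough diagnostic for this problem, the fail-safe N: the number of unpublished null results that would have to be sitting in file drawers to overturn a meta-analytic conclusion. A minimal sketch using a Stouffer combined z-test; the eight z-scores below are hypothetical values for illustration:

```python
from math import sqrt

def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished null studies (mean z = 0)
    would drag the Stouffer combined z below the one-sided .05 threshold."""
    s = sum(z_scores)
    k = len(z_scores)
    # Solve s / sqrt(k + x) = z_crit for x.
    return max(0.0, (s / z_crit) ** 2 - k)

# Hypothetical meta-analysis of 8 published studies:
zs = [2.1, 1.8, 2.5, 1.4, 2.0, 2.2, 1.7, 2.3]
print(f"combined z = {sum(zs) / sqrt(len(zs)):.2f}")
print(f"fail-safe N = {fail_safe_n(zs):.0f} file-drawer studies")
```

A large fail-safe N suggests the conclusion is robust to the file drawer; a small one means a modest number of unpublished null results could wipe it out.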


Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors are more likely to accept, positive results than negative or inconclusive results.[15] Outcome reporting bias occurs when multiple outcomes are measured and analyzed, but the reporting of these outcomes depends on the strength and direction of their results. A generic term coined to describe these post-hoc choices is HARKing ("Hypothesizing After the Results are Known").[16]
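A short simulation shows why outcome reporting bias inflates false positives: measuring many outcomes and reporting only the strongest turns a nominal 5% error rate into something much larger. All parameters below are arbitrary illustrative assumptions:

```python
import random, statistics
from math import erf, sqrt

random.seed(3)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def best_p_of_m_outcomes(m, n=30):
    """Under a true null, test m independent outcomes; keep the smallest p."""
    best = 1.0
    for _ in range(m):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        se = statistics.stdev(sample) / sqrt(n)
        best = min(best, two_sided_p(statistics.fmean(sample) / se))
    return best

trials = 2_000
for m in (1, 5, 10):
    hits = sum(best_p_of_m_outcomes(m) < 0.05 for _ in range(trials))
    print(f"{m:2d} outcomes measured, best one reported: "
          f"{hits / trials:.0%} of pure-noise studies look significant")
```

With ten outcomes per study, roughly four in ten studies of pure noise can report "a" significant result, which is why selective outcome reporting and HARKing distort the literature even when each individual test is valid.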

Examples

Two meta-analyses of the efficacy of reboxetine as an antidepressant demonstrated attempts to detect publication bias in clinical trials. Based on positive trial data, reboxetine was originally approved as a treatment for depression in many European countries and the UK in 2001 (though in practice it is rarely used for this indication). A 2010 meta-analysis concluded that reboxetine was ineffective and that the preponderance of positive-outcome trials reflected publication bias, mostly due to trials published by the drug manufacturer Pfizer. A subsequent meta-analysis published in 2011, based on the original data, found flaws in the 2010 analyses and suggested that the data indicated reboxetine was effective in severe depression (see Reboxetine § Efficacy). Examples of publication bias are given by Ben Goldacre[40] and Peter Wilmshurst.[41]


In the social sciences, a study of published papers exploring the relationship between corporate social and financial performance found that "in economics, finance, and accounting journals, the average correlations were only about half the magnitude of the findings published in Social Issues Management, Business Ethics, or Business and Society journals".[42]


One example cited as an instance of publication bias is the refusal of The Journal of Personality and Social Psychology (the original publisher of Bem's article) to publish attempted replications of Bem's work claiming evidence for precognition.[43]


An analysis[44] comparing studies of gene-disease associations originating in China to those originating outside China found that those conducted within the country reported a stronger association and a more statistically significant result.[45]

John Ioannidis argues that "claimed research findings may often be simply accurate measures of the prevailing bias."[46] He lists the following factors as those that make a paper with a positive result more likely to enter the literature and suppress negative-result papers (a numeric sketch of the underlying argument follows the list):

The studies conducted in a field have small sample sizes.

The effect sizes in a field tend to be smaller.

There is both a greater number and lesser preselection of tested relationships.

There is greater flexibility in designs, definitions, outcomes, and analytical modes.

There are prejudices (financial interest, political, or otherwise).

The scientific field is hot and there are more scientific teams pursuing publication.
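Ioannidis frames this argument through the positive predictive value (PPV) of a claimed finding, driven by the pre-study odds R of a true relationship, the error rates α and β, and a bias term u (the fraction of would-be null results that end up reported as positive). A minimal Python rendering of the PPV-with-bias formula from his 2005 paper; the parameter values are illustrative assumptions:

```python
def ppv(alpha=0.05, beta=0.2, R=0.25, u=0.0):
    """Post-study probability that a claimed positive finding is true,
    per Ioannidis (2005). R = pre-study odds of a true relationship,
    u = fraction of would-be null results reported as positive (bias)."""
    true_positives = (1 - beta) * R + u * beta * R
    false_positives = alpha + u * (1 - alpha)
    return true_positives / (true_positives + false_positives)

for u in (0.0, 0.2, 0.5):
    print(f"bias u = {u:.1f}: PPV = {ppv(u=u):.2f}")
```

With these assumed inputs, PPV falls from 0.80 with no bias to 0.30 when half of the would-be null results are converted into positive claims, illustrating how the factors above erode the reliability of a literature.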


Other factors include experimenter bias and white hat bias.
