Publication bias in psychology: what is it and why does it cause problems?
Because of this problem, an important part of psychological research never receives the attention it deserves.
Psychology, and in particular its research side, has been in crisis for some years now, which does not help its credibility at all. The problem lies not only in the difficulty of replicating classic experiments, but also in how new papers get published.
The big problem is that there seems to be a prominent publication bias in psychology: articles appear to be published based more on how interesting they may seem to the general public than on the scientific relevance of the results and information they offer to the world.
Today we will try to understand how serious the problem is, what it implies, how this conclusion was reached, and whether it is exclusive to the behavioral sciences or whether other disciplines find themselves at the same crossroads.
What is publication bias in psychology?
In recent years, several researchers in psychology have warned about the lack of replication studies within the field, which has raised the possibility that there may be a publication bias in the behavioral sciences. While this was a long time coming, it was not until the late 2000s and early 2010s that evidence emerged of problems in psychological research that could mean the loss of valuable information for the advancement of this great, albeit precarious, science.
One of the first signs of the problem was what happened with Daryl Bem's 2011 experiment. The experiment itself was simple:
It used a sample of volunteers who were shown 48 words and were then asked to write down as many of them as they could remember. After that, they had a practice session in which they were given a subset of those 48 words and asked to write them down. The initial hypothesis was that participants would have remembered better precisely those words they were later made to practice; in other words, since the practice session took place after the recall test, any benefit would imply an effect working backwards in time.
Following the publication of this paper, three other research teams separately tried to replicate the results of Bem's work. Although they followed essentially the same procedure as the original paper, they did not obtain similar results. While the failed replications would themselves have allowed some conclusions to be drawn, they were reason enough for the three research groups to have serious problems getting their results published.
In the first place, since it was a replication of earlier work, scientific journals gave the impression of being interested only in something new and original, not in a "mere copy" of something earlier. Added to this was the fact that the results of these three new experiments, not being positive, were viewed as the product of methodologically poor studies, which supposedly explained the negative results, rather than as new data that might represent an advance for science.
In psychology, studies that confirm their hypotheses, and therefore obtain more or less clear positive results, seem to end up behaving like rumors. They are easily spread by the community, sometimes without anyone consulting the original source or carefully reflecting on the conclusions and discussions offered by the author himself or by critics of the work.
When attempts to replicate earlier studies with positive results fail, those replications are systematically left unpublished. This means that, even after performing an experiment showing that a classic finding could not be replicated for whatever reason, the authors themselves avoid publishing it because it is of no interest to the journals, and so no record remains in the literature. As a result, what is technically a myth continues to circulate as scientific fact.
On the other hand, there are habits ingrained in the research community, ways of proceeding that are quite reprehensible yet so widespread that they are often overlooked: modifying experimental designs to guarantee positive results, deciding on the sample size only after checking whether the results are significant, selecting earlier studies that confirm the hypothesis of the current study, and omitting or ignoring, as if it were no big deal, those that refute it. The simulation below shows why the second of these practices is so harmful.
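As a minimal sketch (in Python, with illustrative parameter choices of our own, not taken from any study), consider what happens when a researcher peeks at the p-value as data come in and stops collecting as soon as it dips below the .05 threshold, even though the true effect is exactly zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def study_with_peeking(start_n=10, max_n=100, step=5, alpha=0.05):
    """One simulated study in which the true effect is exactly zero.
    The 'researcher' runs a t-test after every `step` new participants
    and stops as soon as p < alpha, instead of fixing n in advance."""
    data = rng.standard_normal(max_n)              # null hypothesis is true
    for n in range(start_n, max_n + 1, step):
        p = stats.ttest_1samp(data[:n], 0.0).pvalue
        if p < alpha:
            return True                            # declared "significant"
    return False                                   # gave up at max_n

n_sims = 5000
hits = sum(study_with_peeking() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {hits / n_sims:.3f}")
# With n fixed in advance this rate would sit near 0.05;
# peeking pushes it far higher, even though nothing real is there.
```

The exact inflation depends on how often one peeks, but the direction is always the same: the more flexible the stopping rule, the easier it becomes to "find" an effect that does not exist.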
While the behaviors described above are reprehensible yet understandable (though not necessarily tolerable), there are also cases in which study data are manipulated to ensure publication, which can be openly called fraud and a total lack of scruples and professional ethics.
One of the most wildly embarrassing cases in the history of psychology is that of Diederik Stapel, whose fraud is considered to be of biblical proportions: he went so far as to invent all the data for some of his experiments. To put it bluntly, like someone writing a novel, this gentleman made the research up.
This implies not only a flagrant lack of scruples and scientific ethics, but also a total lack of consideration for those who used his data in subsequent research, giving those studies, to a greater or lesser extent, a fictitious component.
Studies that have highlighted this bias
In 2014, Kühberger, Fritz, and Scherndl analyzed nearly 1,000 randomly selected psychology articles published since 2007. The analysis overwhelmingly revealed an obvious publication bias in the behavioral sciences.
According to these researchers, effect size and the number of participants in a study should in theory be independent of each other. However, their analysis revealed a strong negative correlation between these two variables in the selected studies: studies with smaller samples reported larger effect sizes than studies with larger samples.
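To see how selective publication alone can manufacture this correlation, here is a minimal sketch (Python; the true effect size, sample-size range, and publication rule are illustrative assumptions, not figures from the paper). Every simulated study measures the same modest true effect, yet among the "published" studies the observed effect size shrinks as the sample grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d = 0.2                                   # same modest true effect in every study
ns = rng.integers(10, 200, size=2000)          # studies of widely varying size

obs_d, pvals = [], []
for n in ns:
    sample = rng.normal(true_d, 1.0, size=n)   # data already in effect-size units
    obs_d.append(sample.mean() / sample.std(ddof=1))
    pvals.append(stats.ttest_1samp(sample, 0.0).pvalue)

obs_d, pvals = np.array(obs_d), np.array(pvals)
published = (pvals < 0.05) & (obs_d > 0)       # only significant positive results see print

print(f"r(effect size, n), all studies:    {np.corrcoef(obs_d, ns)[0, 1]:+.2f}")
print(f"r(effect size, n), published only: {np.corrcoef(obs_d[published], ns[published])[0, 1]:+.2f}")
# All studies: r near zero. Published studies: clearly negative, because a
# small study can only clear the significance bar by overestimating the effect.
```

This is the same logic that makes funnel plots useful for detecting the bias: under unbiased publication the plot of effect size against sample size is symmetric, and selective publication hollows out one of its corners.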
The same analysis also showed that the number of published studies with positive results exceeded the number with negative results by a ratio of approximately 3:1. This suggests that it is the statistical significance of the results, rather than their actual benefit to science, that determines whether a study gets published.
But psychology is apparently not the only science suffering from this kind of bias towards positive results. In fact, it could be said that the phenomenon is generalized across the sciences, although psychology and psychiatry are the disciplines most likely to report positive results and to set aside studies with negative or moderate results. This was observed in a review carried out by the sociologist Daniele Fanelli of the University of Edinburgh, who examined nearly 4,600 studies and found that, between 1990 and 2007, the proportion of positive results rose by more than 22%.
Are replications really that bad?
There is an erroneous belief that a negative replication invalidates the original result. The fact that a study has carried out the same experimental procedure and obtained different results does not necessarily mean that the new study is methodologically flawed, nor that the results of the original work were exaggerated. There are many reasons and factors that can lead to different results, and all of them contribute to a better knowledge of reality, which is, after all, the objective of any science.
New replications should not be seen as harsh criticism of the original work, nor as a simple "copy and paste" of it with a different sample. It is thanks to replications that a better understanding of a previously investigated phenomenon is obtained, including the conditions under which it does not replicate or does not occur in the same way. When the factors that condition the occurrence or non-occurrence of a phenomenon are understood, better theories can be developed.
Preventing publication bias
Solving the situation in which psychology, and the sciences in general, find themselves is difficult, but that does not mean the bias must worsen or become chronic. Sharing useful data with the scientific community requires effort from all researchers and greater tolerance on the part of journals towards studies with negative results. To that end, some authors have proposed a series of measures that could help put an end to the situation:
- Moving away from null hypothesis significance testing.
- A more positive attitude to non-significant results.
- Improved peer review and publication practices.
Bibliographic references:
- Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS ONE, 9(9), e105825. doi:10.1371/journal.pone.0105825
- Blanco, F., Perales, J. C., & Vadillo, M. A. (2017). Pot la psicologia rescatar-se a si mateixa? Incentius, biaix i replicabilitat [Can psychology rescue itself? Incentives, bias, and replicability]. Anuari de Psicologia de la Societat Valenciana de Psicologia, 18(2), 231-252. http://roderic.uv.es/handle/10550/21652 doi:10.7203/anuari.psicologia.18.2.231
- Fanelli, D. (2010). Do pressures to publish increase scientists' bias? An empirical support from US states data. PLoS ONE, 5(4), e10271. doi:10.1371/journal.pone.0010271