
Null Hypothesis: Y U NO good enough for scientific articles?

If you’ve ever been involved in a scientific endeavour, there is a good chance you are familiar with the null hypothesis (which I’ll call H0). Basically, it is the opposite of the “real” hypothesis of a study. Say you want to demonstrate the following effect: chocolate consumption improves memorisation. Your corresponding H0 would be the absence of such an effect.

In the ensuing statistical analysis, you’ll try to reject the H0 in favour of your alternative hypothesis, thus showing a significant effect of chocolate on memory.
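To make that concrete, here is a minimal sketch in Python of what such a test could look like, using scipy’s independent-samples t-test. The data, group sizes and effect sizes are entirely made up for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical memory-test scores (out of 100) for two made-up groups.
rng = np.random.default_rng(42)
chocolate_group = rng.normal(loc=72, scale=10, size=30)  # ate chocolate
control_group = rng.normal(loc=70, scale=10, size=30)    # no chocolate

# H0: both groups have the same mean memory score.
# The t-test asks how surprising the observed difference would be if H0 were true.
t_stat, p_value = stats.ttest_ind(chocolate_group, control_group)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject H0 in favour of the chocolate effect")
else:
    print(f"p = {p_value:.3f}: fail to reject H0 (no detectable effect)")
```

With the small, nearly identical groups above, the second branch is the likely outcome: you fail to reject H0, which is exactly the kind of result this post is about.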

However, finding this Holy Grail of inferential statistics is not the easiest thing. I won’t go into what influences it, since that is nowhere near my area of expertise and I’d rather not embarrass myself. Instead, I’d like to discuss the overwhelming discrimination against unrejected H0s in the scientific literature.

[Image: xkcd comic. Caption: “You see?” Source: xkcd]

In my school projects so far, I have NEVER found ANY significant effect. EVER. It is disappointing. Worse, my apparently consistent inability to reject the H0 made me think that, later in my academic career, I’d never be able to publish an article.

Indeed, most scientific journals accept almost exclusively articles that report significant effects (I don’t have numbers on this phenomenon, sorry). This attitude suggests that unrejected H0s somehow signify a lack of (convenient?) information.

But don’t they say that absence of evidence is not evidence of absence? Just because one team couldn’t reject the H0 doesn’t mean that their results are devoid of interest.

[Image. Caption: “My point exactly!” Source: muddylemon]

For one thing, publishing non-significant results would be like taking antimatter into account in addition to matter (i.e., significant results). They represent equally relevant information. Choosing to communicate them, instead of concealing them, would help increase transparency in science.

Secondly, researchers interested in replicating the experiment could focus on improving the methods rather than on inventing a whole procedure from scratch. This would mean saved time, saved money, collaboration opportunities and possibly less frustrating research.

Finally, and perhaps most importantly, information on “failed” experiments could help prevent unneeded research from happening. Steven Reysen, from the Journal of Articles in Support of the Null Hypothesis, explains it better than I could:

The file-drawer problem is that psychologists, and scientists in general, will not report research that does not meet traditional levels of significance. If a study has null results psychologists will often abandon the research to move on to other ideas and not report the findings. The result is that the journals are filled with studies that reached significance. For example, there may have been 20 null studies conducted on a topic but one significant study reported in the literature. If I then try to research the same topic I may be wasting time and money on that idea.
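Reysen’s 20-to-1 example is easy to simulate. The sketch below (plain Python with numpy and scipy, all numbers purely illustrative) runs many studies of an effect that is null by construction and counts how many would still cross the conventional p < 0.05 bar. Those are the ones likeliest to reach a journal, while the rest stay in the file drawer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 1000  # many teams independently studying a true null effect

significant = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution: H0 is true by construction.
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1  # a false positive, and the likeliest kind to get published

print(f"{significant} of {n_studies} null studies came out 'significant'")
```

You should see roughly 5% of the studies come out “significant” even though no effect exists. If only those reach the journals, the literature ends up systematically overstating the effect, which is exactly the file-drawer problem.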

Clearly, I am in favour of the scientific community paying more attention to the null hypothesis than it does at the moment, and not only because this could potentially give me a better shot at publishing my own work.

What do you think? Publishing articles without significant results: yay or nay?
