Research Internship – marine mammals and sea turtles

For those of you interested in marine science and boat-based field research:

Fall 2013

Marine Mammal and Sea Turtle Research Internship

Program Description
The IMMS Research Internship Program is designed as a way for students interested in a career in marine science to gain valuable research experience in a real-world setting. Interns will participate in multiple projects involving bottlenose dolphins, sea turtles and diamondback terrapins. As an intern, you will be trained in all aspects of dolphin photo-id research, sea turtle satellite tracking, and other current research projects at IMMS. Interns will also participate in other operations at IMMS including stranding response, education, and animal care. Our goal is to give interns a well-rounded experience in a variety of areas while providing expert training and experience in marine science research.
  • Principal Duties include: data entry, searching and cataloging journal articles, learning all research protocols, cropping and sorting photo-id fin images, learning to use photo-id programs such as Darwin (fin-matching software) and FinBase (Microsoft Access), boat-based field research (21’ and 31’ boats), and learning how to use ArcGIS.

  • Secondary Duties involve: assisting animal care staff, attending marine mammal necropsies, responding to marine mammal and sea turtle strandings, and assisting with educational tours.
  • Field days: Interns must be able to spend many hours on the water and on shore in sometimes extreme seasonal conditions. Seasonal temperatures range from over 100 °F in summer to 30 °F in winter. Field days typically exceed eight hours and occur at least two or three times a week.

To Apply: Please visit our website at http://imms.org/internship.php

Null Hypothesis: Y U NO good enough for scientific articles?

If you’ve ever been involved in a scientific endeavour, there is a good chance you are familiar with the null hypothesis (which I’ll call H0). Basically, it is the opposite of the “real” hypothesis of a study: it states that the effect you hope to find does not exist. Say you want to demonstrate the following effect: chocolate consumption improves memory. Your corresponding H0 would be the absence of such an effect.

In the ensuing statistical analyses, you’ll try to reject the H0 in favour of your alternative hypothesis, thus showing a significant effect of chocolate on memory.
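For the curious, here is what that test could look like in practice. This is a minimal sketch in Python using an ordinary two-sample t-test; the groups, sample sizes and scores are all made up purely for illustration, not taken from any real study:

```python
# Chocolate/memory example as a two-sample t-test (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical memory-test scores for a chocolate group and a control group
chocolate = rng.normal(loc=52, scale=10, size=30)
control = rng.normal(loc=50, scale=10, size=30)

# H0: both groups have the same mean score (no effect of chocolate)
t_stat, p_value = stats.ttest_ind(chocolate, control)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0, significant effect")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0, no significant effect")
```

Note the wording in the second branch: we “fail to reject” H0 rather than “accept” it, which is exactly the nuance the rest of this post is about.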

However, finding this Holy Grail of inferential statistics is not the easiest thing. I won’t talk here about what influences it, since that isn’t anywhere close to my area of expertise – I’d rather not embarrass myself. Instead, I’d like to discuss the overwhelming discrimination against unrejected H0s in the scientific literature.

[Image: “You see?” Source: xkcd]

In my school projects so far, I have NEVER found ANY significant effect. EVER. It is disappointing. Worse, my apparently consistent inability to reject the H0 made me think that, later in my academic career, I’d never be able to publish an article.

Indeed, most scientific journals accept almost exclusively articles that report significant effects (I don’t have numbers on this phenomenon, sorry). This attitude suggests that unrejected H0s somehow signify a lack of (convenient?) information.

But don’t they say that absence of evidence is not evidence of absence? Just because one team couldn’t reject the H0 doesn’t mean that their results are devoid of interest.

[Image: “My point exactly!” Source: muddylemon]

For one thing, publishing non-significant results would be like taking into account antimatter in addition to matter (i.e., significant results): they represent equally relevant information. Choosing to communicate them, instead of concealing them, would help increase transparency in science.

Secondly, researchers interested in replicating the experiment could focus on improving the methods rather than on inventing a whole procedure from scratch. This would mean saved time, saved money, collaboration opportunities and possibly less frustrating research.

Finally, and perhaps most importantly, information on “failed” experiments could help prevent unneeded research from happening. Steven Reysen, from the Journal of Articles in Support of the Null Hypothesis, explains it better than I do:

The file-drawer problem is that psychologists, and scientists in general, will not report research that does not meet traditional levels of significance. If a study has null results psychologists will often abandon the research to move on to other ideas and not report the findings. The result is that the journals are filled with studies that reached significance. For example, there may have been 20 null studies conducted on a topic but one significant study reported in the literature. If I then try to research the same topic I may be wasting time and money on that idea.
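Reysen’s 20-studies scenario is easy to demonstrate with a quick simulation. Here is a minimal sketch in Python (made-up numbers, purely illustrative): even when there is no true effect at all, roughly 5% of studies will still cross the p < 0.05 threshold by chance alone.

```python
# Simulating the file-drawer problem: many studies of a NONEXISTENT effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_studies = 1000

false_positives = 0
for _ in range(n_studies):
    # Two groups drawn from the SAME distribution: H0 is true by construction
    a = rng.normal(loc=50, scale=10, size=30)
    b = rng.normal(loc=50, scale=10, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_studies} studies came out 'significant' "
      f"({100 * false_positives / n_studies:.1f}%) despite no real effect")
# If only these few false positives get published and the null results stay
# in the file drawer, the literature paints a badly distorted picture.
```

In other words, if twenty teams quietly study a nonexistent effect, the one lucky false positive may be the only result anyone ever reads.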

Clearly, I am in favor of the scientific community paying more attention to the H0/null hypothesis than it does at the moment, and not only because this could potentially give me a better shot at publishing my work.

What do you think? Publishing articles without significant results: yay or nay?