Research is mostly slow and non-significant

30 April 2014 by Christopher Buddle, posted in Miscellaneous, Research

It’s easy to forget this. We are constantly bombarded with new, exciting science stories every single day. There are a gazillion science bloggers covering the latest discoveries, and our Twitter streams are full of newly published research results. We don’t generally see much coverage of what actually happens while research is underway. Most of the time, the process is rather slow, typically quite boring, and results are often non-significant.

I was reminded of this recently when an undergraduate student presented her final Honours research results a few weeks ago. Margot and I developed a project last summer as a follow-up to research I had done in Ohio during my post-doc. To make a long story short, I had published a paper showing that, under controlled laboratory and field conditions, spider silk could reduce insect herbivory. The spider itself didn’t have to be present to elicit this response - insect herbivores ate less green stuff when there was silk present on their food. Margot was super-excited about this research and decided to do a follow-up project at McGill, asking the same question but in open-field plots and with a different crop.
Margot working in the fields

We really felt that her project was feasible and straightforward. To be honest, we were pretty certain that we would find significant effects, mainly because the result we had in Ohio was very strong. I know you are supposed to go into a project without any expectations of a result, but in this case we had good reason for being excited about the prediction - we were quite sure we’d find that spider silk would deter insect herbivores in open-field plots. Margot worked extremely hard all summer: she toiled in the fields under a hot sun, fought with spiders to extract their silk, and did careful and meticulous work. Although the research question itself was exciting and fascinating, the research process was painstakingly slow and frustrating, and Margot was constantly troubleshooting: spiders wouldn’t cooperate, the weather delayed field work, we needed more plants or additional equipment, and so on. Bottom line: it was tough work and far from glamorous! (She blogged about her experience, too!)

This is the reality of research: it’s rarely quick, never easy, and often as much about solving problems as it is about straightforward data collection. Within the research community this is well known, since it’s lived every day; outside of this community, however, I don’t think it’s fully appreciated.

Margot’s field season ended successfully, and after data entry and analysis she was finally able to see whether adding spider silk to kale plants reduced insect herbivory. It was the big moment of truth! We had a great research question, a strong experimental design, an excellent level of replication, high-quality data collection, and a solid prediction.

Drum roll please…

EVERYTHING was non-significant. 

Margot analyzed the data in every way possible: there was no effect of spider silk on insect herbivory.

For seasoned scientists, this is also quite normal. Many experiments just don’t work. Many experiments work, but all the results are non-significant. Non-significance is extremely valuable in itself, and of equal value to significant results provided the experiment was done correctly. Accepting a null hypothesis is certainly part of the foundation of scientific progress. But all that being said, it can still be rather frustrating, especially when there is good reason for a strong prediction, and especially when SO MUCH blood, sweat and tears went into the work. Sorry, but no matter what anyone tells you, we don’t do science in an emotional vacuum: we get emotionally invested in our research, and finding ‘no effects’ can be quite frustrating. There’s also the huge problem of publication bias in science: it’s a heck of a lot harder to publish non-significant results. For all the ‘significant findings’ you read about in the peer-reviewed literature, there is probably an equal (or greater!) number of unpublished, non-significant findings out there (also known as the ‘file drawer problem’).
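
To make the file drawer problem concrete, here is a minimal simulation sketch (my own illustration with made-up numbers, not an analysis of Margot’s data): thousands of hypothetical experiments are run in which the true effect of silk is exactly zero, yet about 5% of them still come out ‘significant’ at the usual threshold. If only those get written up, the literature misleads.

```python
# A minimal sketch of the 'file drawer problem' (illustrative numbers only).
# Every simulated experiment has NO real silk effect, yet ~5% come out
# 'significant' at alpha = 0.05 - and those would be the ones published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000  # hypothetical number of independent studies
n_per_group = 30        # hypothetical plants per treatment group

published = 0
for _ in range(n_experiments):
    # Both groups drawn from the SAME distribution: silk does nothing here.
    control = rng.normal(loc=10.0, scale=2.0, size=n_per_group)
    silk = rng.normal(loc=10.0, scale=2.0, size=n_per_group)
    _, p = stats.ttest_ind(control, silk)
    if p < 0.05:
        published += 1  # the 'exciting' result that escapes the file drawer

print(f"{published} of {n_experiments} null experiments were 'significant'")
# Expect roughly 500 (~5%); the other ~9,500 stay in the file drawer.
```
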
Margot, with her 'low-tech' frame for silk collecting (she's pointing at silk on the frame)

In the end, Margot had the right attitude about her undergraduate research, and showed exceptional maturity and poise. She thoroughly enjoyed the project and the experience, and at the end of her final Honours presentation she said, “Doing crazy projects like this is TOTALLY worth it.” That’s great news, and it goes to show that research is not about a result; it’s about a process. We learn so much from our failures, from troubleshooting, and from ‘non-significance’. We grow as scientists and as individuals because we march forward carefully and meticulously from a research question to a final research result, even if that result isn’t what we expected.

In sum, I think Margot helped remind me that even if research is often slow and non-significant, the process of science remains exhilarating and is always a learning experience. Let’s remember this next time we read or hear about the next whiz-bang discovery. Behind every published paper, and every related blog post, TV show, or radio interview, is a person who has probably seen more failed experiments than successful ones, and has probably groaned in frustration as the statistical results indicate ‘non-significance’. And this is nothing to lament; it’s how science proceeds.

6 Responses to “Research is mostly slow and non-significant”

  1. Joe Spagna

    One way I help my students 'work around' the significance problem is by making sure projects include a descriptive piece - some phenomenon or pattern that has never been described before. Or, alternately, developing a novel method and road-testing it. Overvaluing statistical differences is bound to lead to disappointment. I guess I'm saying that, important as hypothesis-testing is, science is big enough for other sorts of discovery and advancement, even if nature is not giving up her secrets easily via ANOVA and t-tests.

    • Christopher Buddle

      Joe - thanks for the comment, and it's a great one. I totally support what you are saying: having a suite of 'approaches' is very important when supervising research, and perhaps even more so at the undergraduate level. Terrific suggestion, and thanks for sharing.

  2. Regina

    Scientific research is too often boring, slow, and fruitless. Although it is true that "you are supposed to go into a project without any expectations of a result", there are two major problems associated with this. First, there are plenty of questions which are far more reasonable (not necessarily more predictable), more important, and, why not, more applicable than those usually taking up the precious time of most scientists. Second, while it's true that negative results obtained through rigorous research mean one will have acquired skills and insight into what science is about, at the end of the day it's the positive (preferably groundbreaking) results that pay your rent. Nobody cares that you were told to do it if what you ended up with were negative results, regardless of whether they answer important questions.

    This is actually problem number one of scientific research in our times, and it sharply explains the redundant overgrowth of nonsensical papers and unfinished research projects, as well as the futile, life-compromising stress scientists are forced to deal with. Or aren't they?

    • Christopher Buddle

      Thanks for your comment - it's a good one. Overall I do agree with you, and what's critical as a scientist is finding the right kind of research question, and perhaps having a suite of (related) research questions on the go, with multiple approaches to a particular problem. Because, as you state, a scientist has to publish papers, and these relate to job security, it's tough to publish non-significant results. So, careful selection of research questions is key.

  3. David Colquhoun

    Of course you are right that many, perhaps most, experiments fail to give the results that you hoped for. And to make matters worse, if you claim to have made a discovery when you observe P = 0.04, the odds are that you are wrong: see http://www.dcscience.net/?p=6518
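
To see why a lone P = 0.04 so often misleads, here is a rough back-of-the-envelope sketch in the spirit of Colquhoun's argument (the numbers are assumptions chosen for illustration, not taken from the linked post): if only 10% of the hypotheses we test are true and our experiments have 80% power, then over a third of 'discoveries' declared at p < 0.05 are false positives.

```python
# Back-of-the-envelope false discovery rate with assumed, illustrative
# numbers: 10% of tested hypotheses are true, experiments have 80% power,
# and a 'discovery' is declared whenever p < 0.05.
prior_true = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.80        # assumed chance a real effect yields p < 0.05
alpha = 0.05        # chance a null effect yields p < 0.05 (false positive)

true_positives = prior_true * power           # 0.08
false_positives = (1 - prior_true) * alpha    # 0.045
fdr = false_positives / (true_positives + false_positives)

print(f"False discovery rate: {fdr:.0%}")  # 36%
# Conditioning on p landing near 0.04, rather than anywhere below 0.05,
# makes the picture worse still, which is Colquhoun's point.
```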
