It’s easy to forget this. We are constantly bombarded with new, exciting science stories every single day. There are a gazillion science bloggers covering the latest discoveries, and our Twitter streams are full of newly published research results. We don’t generally see much coverage of how research actually happens. Most of the time, the process is rather slow, typically quite boring, and the results are often non-significant.
I was reminded of this when an undergraduate student, Margot, presented her final Honours research results a few weeks ago. Margot and I developed the project last summer as a follow-up to research I had done in Ohio during my post-doc. To make a long story short, I had published a paper showing that, under controlled laboratory and field conditions, spider silk could reduce insect herbivory. The spider itself didn’t have to be present to elicit this response: insect herbivores ate less green stuff when there was silk present on their food. Margot was super-excited about this research and decided to do a follow-up project at McGill, asking the same question but in open-field plots and with a different crop.
Margot working in the fields
We really felt that her project was feasible and straightforward. To be honest, we were pretty certain that we would find significant effects, mainly because the result we had in Ohio was very strong. I know you are supposed to go into a project without any expectations about the result, but in this case we had good reason to be excited about the prediction: we were quite sure we’d find that spider silk would deter insect herbivores in open-field plots. Margot worked extremely hard all summer: she toiled in the fields under a hot sun, fought with spiders to extract their silk, and did careful and meticulous work. Although the research question itself was exciting and fascinating, the research process was painfully slow and frustrating, and Margot was constantly troubleshooting: spiders wouldn’t cooperate, the weather delayed field work, or we needed more plants, additional equipment, etc. etc. etc. Bottom line: it was tough work and far from glamorous! (She also blogged about her experience!)
This is the reality of research: it’s rarely quick, never easy, and often as much about solving problems as it is about straightforward data collection. Within the research community this is well known, since it’s lived every day; outside of this community, however, I don’t think it’s fully appreciated.
Margot’s field season ended successfully, and after data entry and data analysis, she was finally able to see whether adding spider silk to kale plants reduced insect herbivory. It was the big moment of truth! We had a great research question, a strong experimental design, an excellent level of replication, high-quality data collection, and a solid prediction.
Drum roll please…
EVERYTHING was non-significant.
Margot analyzed the data in every way possible: there was no effect of spider silk on insect herbivory.
For seasoned scientists, this is also quite normal. Many experiments just don’t work. Many experiments do work, but all the results are non-significant. Non-significance is extremely valuable in itself, and of equal value to significant results, provided the experiment was done correctly. Failing to reject a null hypothesis is certainly at the foundation of scientific progress. But all that being said, it can still be rather frustrating, especially when there is good reason for a strong prediction, and especially when SO MUCH blood, sweat, and tears went into the work. Sorry, but no matter what anyone tells you, we don’t do science in an emotional vacuum: we get emotionally invested in our research, and finding ‘no effects’ can be quite frustrating. There’s also the huge problem of publication bias in science: it’s a heck of a lot harder to publish non-significant results. For all the ‘significant findings’ you read about in the peer-reviewed literature, there is probably an equal (or greater!) number of unpublished, non-significant findings out there (also known as the ‘file drawer problem’).
Margot, with her ‘low-tech’ frame for silk collecting (she’s pointing at silk on the frame)
In the end, Margot had the right attitude about her undergraduate research, and showed exceptional maturity and poise. She thoroughly enjoyed the project and the experience, and at the end of her final Honours presentation she said, “Doing crazy projects like this is TOTALLY worth it.” That’s great news, and it goes to show that research is not about a result; it’s about a process. We learn so much from our failures, from troubleshooting, and from ‘non-significance’. We grow as scientists and as individuals because we march forward carefully and meticulously from a research question to a final research result, even if that result isn’t what we expected.
In sum, I think Margot helped remind me that even if research is often slow and the results non-significant, the process of science remains exhilarating and is always a learning experience. Let’s remember this the next time we read or hear about the next whiz-bang discovery. Behind every published paper, and every related blog post, TV show, or radio interview, is a person who has probably seen more failed experiments than successful ones, and has probably groaned in frustration as the statistical results indicated ‘non-significance’. And this is nothing to lament; it’s how science proceeds.