Saturday, May 21, 2016

What would the ideal research world look like?

Recently, I was asked: “What made you interested in research methods?” I’m afraid I didn’t give a good answer, but instead started complaining about my eight failed attempts to replicate an effect, which nobody wants to publish. I have been thinking about this question some more, and realised that my interest in research methods and good science is driven predominantly by selfish reasons. This gave me the idea to write a blog post: I think it is important to realise that striving towards good science is, in the long run, beneficial to a researcher. So let’s ignore the “how” for the time being (there are already many articles and blog posts on this issue; see, for example, the entries for an essay contest by The Winnower) – let’s focus on the “why”.

The world as it should be
Let’s imagine the research world as it should (or could) be. Presumably, we all went into research because we wanted to learn more about the world – and we wanted to actively contribute to discovering new knowledge. Imagine that we live in a world where we can trust the existing literature. Theories are based on experiments that are sound and replicable. The job of a researcher is to keep up to date on this literature, find gaps, and design experiments that can fill these gaps, thus providing a more complete picture of the phenomenon they are studying.

The world as it is
The research world as it is provides two sources of frustration (at least for me): (1) playing Russian roulette when it comes to conducting experiments, and (2) sifting through a literature that consists of an unknown ratio of manure to pearls, trying to find the pearls.

Russian Roulette
I have conducted numerous experiments during my PhD and post-doc so far, and a majority of them “didn’t work”. By “didn’t work”, I mean they showed non-significant p-values when I expected an effect, showed different results from published experiments (again, those eight failed replications), and occasionally they were just not designed very well and I would get floor or ceiling effects. I attributed this to my own lack of experience and competence. I looked at my colleagues, who had many published experiments, and considered alternative career paths. In the last year of my PhD, I came to a realisation: even professors have the same problem.

In the research world as it is, a researcher may come up with an idea for an experiment. It can be a great idea, based on a careful evaluation of theories and models. The experiment can be well-designed and neat, providing a pertinent test of the researcher’s hypothesis. Then the data is collected and analysed – and it is discovered that the experiment “didn’t work”. Shoulders are shrugged – the researcher moves on. Occasionally, one experiment will “work” and can be published.

How is it possible, I asked myself, that so much good research goes to waste, just because an experiment “didn’t work”? Is it really necessary to completely discard a promising question or theory, just because a first attempt at getting an answer “didn’t work”? How many labs conduct experiments that “don’t work”, not knowing that other labs have already tried and failed with the same approach? These are, as of now, rhetorical questions, but I firmly believe that learning more about research methods and how these can be used to produce sound and efficient experiments can answer them.

Sifting through manure
Some theories are intuitively appealing, apparently elegant, and elicit a lot of enthusiasm in a lot of people. New PhD students want to “do something with this theory” and try to run follow-up studies, only to find that their experiments “don’t work”, replications of the studies that support the theory “don’t work”, and the theory doesn’t even make sense when you really think about it. *

Scientists stand on the shoulders of giants. Science cannot be done without relying on existing knowledge at least to some extent. In an ideal world, our experiments and theories should build on previous work. However, I often get the feeling that I am building on manure instead of a sound foundation.

So, in order to try and understand whether I can trust an effect, I sift through the papers on it. I look for evidence of publication bias, for dodgy-sounding post-hoc moderators or trimming decisions, and for statistical and logical errors (such as concluding that two groups differ because one is significantly above chance while the other is not); I also check whether studies with larger sample sizes tend to give negative results while positive results come predominantly from studies with small samples. It’s a thankless job. I criticise and question the work of colleagues, who are often in senior positions and may well one day make decisions that affect my livelihood. At the same time, I lack the time to conduct experiments to test and develop my own ideas. But what else should I do? Close my eyes to these issues and just work on my own line of research? Spending less or no time scrutinising the existing literature would mean that I don’t know whether I am building my research agenda on pearls or manure. It would mean that I could waste months or years on a question that I should have known to be a dead end from the very beginning.
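
To make the logical error in that parenthesis concrete, here is a minimal simulation sketch in Python. Every number in it (sample size, effect size, number of runs) is a made-up illustrative assumption, not taken from any real study. It counts how often two groups with exactly the same underlying effect produce the misleading pattern “one group is significant, the other is not”, even though the direct test between the groups finds no difference.

    # Minimal simulation of the "difference in significance" fallacy.
    # All numbers (sample size, effect size, number of runs) are made-up
    # illustrative assumptions, not taken from any real study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, true_effect, runs = 20, 0.4, 10_000
    fallacy_count = 0

    for _ in range(runs):
        a = rng.normal(true_effect, 1, n)      # group A scores (0 = chance level)
        b = rng.normal(true_effect, 1, n)      # group B scores, same true effect
        p_a = stats.ttest_1samp(a, 0).pvalue   # A vs. chance
        p_b = stats.ttest_1samp(b, 0).pvalue   # B vs. chance
        p_ab = stats.ttest_ind(a, b).pvalue    # the test that matters: A vs. B
        # One group "works", the other doesn't, yet the groups do not differ.
        if (p_a < .05) != (p_b < .05) and p_ab >= .05:
            fallacy_count += 1

    print(f"Misleading pattern in {fallacy_count / runs:.1%} of simulated experiment pairs")

Even though the two simulated groups never truly differ, the discordant pattern turns up regularly, which is why the direct comparison between groups, not the comparison of their p-values, is the test that should be reported.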

Conclusion
So, why am I interested in research methods? Because it will make research more efficient, for me personally. It is difficult to conduct a good study, but in the long run, it should be no more difficult than running a number of crappy studies and publishing the one that “worked”. It should also be much less frustrating, much more rewarding, and in the end, we will do what we (presumably) love: contribute to discovering new knowledge about how the world works.

-----------------------------------------------------------------

* This example is fictional. Any resemblance to real persons or events is purely coincidental.
