In a famous 1964 essay, the physicist John Platt asked, “Why should there be such rapid advances in some fields and not in others?” His answer, in essence, was that the rapidly advancing fields applied the classic scientific method of hypothesis testing more rigorously than the slowly advancing fields. The scientific method had already gone out of fashion in some fields by 1964, and it continues to be unfashionable in many scientific fields today. Why?
I got some insight when I reviewed a journal paper several years ago. The gist of my review was that the paper was technically strong, but I couldn’t understand what hypothesis the data confirmed or disconfirmed. Evidently, the authors felt that simply reporting their findings with some post-hoc storytelling was sufficient. When the paper was rejected, the first author complained on social media that the reviewer (me) didn’t understand how science works: You’re supposed to measure things first and then come up with hypotheses. A chorus of supporters loudly endorsed that view.
The problem with this post-hoc approach to experimentation is that it leads to what I call “random walk science.” The space of experiments is vast, so if a scientist doesn’t have a clear idea about what hypotheses they’re trying to test, they’ll choose an idiosyncratic direction based on whatever experiments they’ve done recently. Because each scientist chooses idiosyncratically, they collectively end up moving short distances in effectively random directions and not getting very far. Strikingly, many scientists don’t feel that this is a bad situation. If you’re not trying to get anywhere in particular and just trying to accumulate a large number of facts, random walk science is great.
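The intuition behind the metaphor can be made quantitative. If each step is taken in an independent random direction, net progress grows only like the square root of the number of steps, whereas steps that share even a weak common direction add up roughly linearly. Here is a minimal simulation sketching that difference; treating the space of experiments as a 10-dimensional vector space and using a pull strength of 0.3 are toy assumptions for illustration, not a model of any real research program.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 1000      # experiments ("steps") per scientist
n_scientists = 100  # independent labs
dim = 10            # toy dimensionality of the space of possible experiments

# Random walk science: each step goes in an idiosyncratic, independent direction.
steps = rng.normal(size=(n_scientists, n_steps, dim))
steps /= np.linalg.norm(steps, axis=-1, keepdims=True)    # unit-length steps
random_progress = np.linalg.norm(steps.sum(axis=1), axis=-1).mean()

# Theory-guided science: every step keeps a weak but consistent shared component.
goal = np.zeros(dim)
goal[0] = 1.0
guided = 0.3 * goal + steps
guided /= np.linalg.norm(guided, axis=-1, keepdims=True)
guided_progress = np.linalg.norm(guided.sum(axis=1), axis=-1).mean()

print(f"random walk net displacement:   {random_progress:6.1f}")  # ~ sqrt(n_steps) ≈ 32
print(f"theory-guided net displacement: {guided_progress:6.1f}")  # grows ~linearly, ≈ 280 here
```

The same number of steps is taken in both cases; only the shared direction differs, and that alone accounts for roughly an order of magnitude more net progress in this toy setting.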
The current zeal for random walk science has grown out of a scientific culture that places low value on theory as a guide to experimental work. Theorists are typically siloed, working on computational models that few experimentalists understand and fewer actually use. Occasionally, experimentalists bring theorists into their grant proposals to add flavor, but the theorists quickly become an afterthought once the experiments start. If an experimentalist is really desperate, they might ask a theorist to make sense of some puzzling data.
Only in a few corners of neuroscience does theory exert strong guidance on experimental design. It has been used extensively to guide experimental research on the role of dopamine in reinforcement learning and the role of the parietal cortex in perceptual decision-making, and those experiments would never have been conceived in its absence. For example, in a paper HyungGoo Kim and his colleagues (including me) published in 2020, we constructed intricate paths through a virtual-reality track designed to distinguish the predictions of two theories about dopamine.
To see if I could nudge the culture around theory-experiment collaboration, I decided to teach a class, “Computational Neuroscience in Practice,” in fall of 2023. The idea was to introduce computational theory to graduate students working on experimental neuroscience in a workshop-based course. Each student brought a research project they were already working on, and together we figured out interesting theoretical questions and how to address them. We discussed these projects as a group during class time and one on one outside of class. The result was not entirely what I expected and made me realize we may need a broader approach.
Initially, I thought that most of our workshop time would be spent working through the nitty-gritty of applying computational models to each student’s project. But I quickly learned that the hardest aspect of the course for most students was conceptual: What even is a computational model? Tackling this question requires another essay (or a book!), but the gist of my answer to them was that computational models explain a system’s function in terms of algorithms that solve particular tasks. This requires thinking about what kinds of problems a neural system evolved to solve, the mechanisms it uses to implement an algorithm, and how we can look for evidence of those mechanisms using data.
Most students were so deeply steeped in an experiment-first style of thinking that the very notion of a theoretical question framed in terms of computation was obscure. For many students, “computation” meant a piece of software they applied to their data, not a theory of how the brain works. Though running analysis software on data is indeed a computation, it is not a computational model in the sense just defined. This distinction has been blurred in modern neuroscience, which places profound faith in the power of analysis tools (particularly advanced machine-learning algorithms) to discover the organizational principles of the brain when fed enough data. As I’ve argued elsewhere, analysis tools on their own will not get us there, because the principal limiting factor is conceptual: What kinds of computations should we be looking for in the brain? If we don’t know what we’re looking for, we’ll never find it, no matter how sophisticated our analysis tools.
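To make the contrast concrete, here is a minimal sketch of a computational model in this sense: temporal-difference learning of reward predictions, in which the prediction error is the quantity theorized to drive phasic dopamine firing. This is my own illustration, not a model from the course or from the 2020 paper; the task (a cue predicting a delayed reward), the time-indexed state features and the parameter values are assumptions chosen for simplicity. The point is that the model specifies a task, an algorithm and a neural signature to look for in data, which is exactly what an analysis pipeline on its own does not.

```python
import numpy as np

# Toy task: a cue at time step 5 reliably predicts a reward at time step 10.
n_steps, cue_t, reward_t = 15, 5, 10
alpha, gamma = 0.1, 0.95              # learning rate, temporal discount

# State representation: one feature per time step since cue onset; nothing before the cue.
features = np.zeros((n_steps, n_steps))
for t in range(cue_t, n_steps):
    features[t, t - cue_t] = 1.0

w = np.zeros(n_steps)                 # learned value weights
rpe = np.zeros(n_steps)               # prediction errors on the most recent trial

for trial in range(500):
    for t in range(n_steps - 1):
        r = 1.0 if t + 1 == reward_t else 0.0   # reward delivered on entering step 10
        v_now, v_next = w @ features[t], w @ features[t + 1]
        rpe[t] = r + gamma * v_next - v_now     # the TD error ("delta")
        w += alpha * rpe[t] * features[t]       # TD(0) update of the value weights

# After learning, delta is near zero at reward delivery and concentrated at the
# step into the cue state -- the shift the theory predicts for dopamine firing.
print(np.round(rpe, 2))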
I pivoted, telling the students that I didn’t care if they produced a concrete model at the end of the course, as long as they were able to formulate a clear theoretical question. I wanted to challenge the “analyze first, ask questions later” mindset. This task occupied most of the course.
Looking back, I’m not confident that I moved the needle much. The students went back to their labs and continued doing the sorts of work that they were doing before. I ended the course thinking that the culture will change only when there is a coordinated effort to break the barrier between theorists and experimentalists. We should be training experimentalists to think theoretically about their data: What hypothesis am I trying to test? And we should train theorists to think experimentally about their models: How could my hypothesis be tested?
The Program in Neuroscience at Harvard University briefly ran a certificate program in computational neuroscience, which (to my surprise) attracted mostly experimentalists. To some extent, this was driven by the students’ desire for credentials that would get them technical jobs after graduation. Nonetheless, it offered an opportunity to systematically train experimentalists in theory.
Thinking more broadly, a holistic approach could include more tailored courses, perhaps like the one I taught but reaching a larger proportion of students. It could incorporate more theory into introductory courses and encourage theorists to get involved in experimental research. It might also recruit more faculty who do theory-driven experimental research and provide seed grants for theory-experiment collaborations, such as the Collaborative Research in Computational Neuroscience program, run jointly by the U.S. National Science Foundation and the National Institutes of Health; and the Simons Collaboration on the Global Brain, funded by the Simons Foundation (The Transmitter’s parent organization). With all these pieces in place, we can move beyond random walk science.