It’s easy to do less-than-rigorous science without realizing it. Yet there isn’t a systematic way to learn good science practice, says Konrad Kording, professor of neuroscience at the University of Pennsylvania. As a student, “it felt like I was soaking all that up from my supervisor, and there was no training,” he says.
So Kording helped create the formal training he lacked: Community for Rigor, a science-education initiative funded by the U.S. National Institute of Neurological Disorders and Stroke. (The five-year grant is slated to run until July 2027.) An administrative center Kording leads at the University of Pennsylvania designed the online resource, which houses educational units created in conjunction with teams at nine other institutions. This week the group released its first free, open-access training, which focuses on confirmation bias, and it plans to publish four more throughout 2025.
The units include brief lessons, discussion questions and activities that “cement the concepts in practical skills that people can practice and actually apply in their own research,” says Hao Ye, head of curriculum at Community for Rigor. Each unit takes about three hours to complete.
Scientists can work through the materials as a lab group or individually. But the units are designed to facilitate conversation rather than serve as "corporate training," Kording says. The lessons often do not include a correct answer to questions, Ye says; instead, they share answers other scientists have given and ask participants to reflect on the differences.
“That’s a way of trying to bring how actual scientists engage with science into our teaching,” Ye says. “Because if we teach people that the only way to do science correctly is to get the right answer, then they’re going to go out and practice science in the wrong way.”
The Transmitter spoke with Kording and Ye about the rigor-related pitfalls they see most often in neuroscience research and how their curriculum aims to combat them.
This interview has been edited for length and clarity.
The Transmitter: What motivated you to start Community for Rigor?
Hao Ye: I see a lot of initiatives that are based around telling researchers that they’re doing things wrong, or they’re bad because they don’t do XYZ. And that never felt right to me as a way to get people to change and improve.
Konrad Kording: I’m one of the grumpy old guys who often looks at science and is worried that what we do might not work as well as it should. And so, for many years, I’ve been on this “let’s think harder about how we do science” train.
TT: What are some of the issues you see in neuroscience?
KK: There is a simple cartoon version of science: We have a hypothesis, and then we run an experiment to test that hypothesis. The result of that experiment, together with data analysis, tells us which of our starting hypotheses we should believe, and then we publish that. And this cycle, as simple as it is taught in school, can fail in so many ways.
A good experiment tests two possible hypotheses, and the field should then move in the direction of whichever one comes out as correct. You are plotting out for the community where the next steps should happen. But a lot of the time people test a hypothesis they already hold, or one their supervisor has held for many years, and feel that they need to show the hypothesis is true regardless of the outcome. If the result is negative, they feel, at a minimum, that they shouldn't publish it, and probably that they have failed as a student.
Then the experiments people run are often indirect, and therefore weak, tests of the underlying hypotheses. Then people make experimental-design mistakes and data-analysis mistakes. And you could even do all of those things correctly and still make your research irrelevant by not sharing the data or code, so the world doesn't really trust you. There are all these little steps where we shortchange science, where we could do better, and we want people to see and understand that.
TT: How did you translate those issues into a training curriculum?
KK: We are standing on the shoulders of giants. A lot of people in the community have asked how science fails. People have discovered that publication bias is a problem, that outcome switching is a problem and that a lack of randomization is a problem. The curriculum topics came about because there is, and always has been, a set of idealists in the field who really ask themselves how things go wrong.
HY: We also observed the common traps that people fall into. And we have limited time and a limited budget, so we're starting with the problems we think are most important and will have the biggest impact: causation versus correlation, randomization in experimental setups, confirmation bias and developing better research questions.
TT: What are your long-term hopes for Community for Rigor?
KK: The big hope is that we’re going to somewhat lower—by maybe a few percentage points—the incidence of suboptimal things we all do in our lives as scientists. If we lower it neuroscience-wide by 1 percent, we will have made back the National Institutes of Health investment many, many times over.
HY: A core message I want to get out is that learning how to do research never stops; you keep finding new ways to challenge your existing approach. That means challenging not just what you know about a particular system or how the brain works, but even your own thinking about how you try to understand how the brain works. That is a process I want people to always be thinking about.
TT: How will you try to ensure people use this resource?
HY: We've partnered with a number of institutions in the Philadelphia area that will use the materials in their graduate-student and researcher trainings. We will also promote the materials through our email list and social media, and we attend conferences to get feedback on the materials and make connections with people.