Picture a male goldeneye duck in the midst of its signature courtship move: its head thrown back, its bill open and pointed up. From a certain angle, it looks like a rabbit, albeit a feathery one. If you were to describe this ambiguous display, forgetting the duck-rabbit resemblance, or even that ducks exist at all, you might conclude that you had discovered a skvader, the elusive winged hare of Swedish myth.
You could run all sorts of sound statistical tests to show that a photo of the animal, pixel by pixel and in terms of all its features, matches a skvader better than, say, a rabbit, a cat or a fox. Based on that analysis, you might publish your findings as a novel discovery — and be completely wrong.
A logical flaw like this is surprisingly common in neuroscience studies, argues a paper published 28 November in eLife. The error might not always be as easy to spot as the duck display above, but it shares the same origins: neglecting an important piece of prior knowledge about the question at hand.
This problem, dubbed a “circular analysis of knowledge,” can distort a study’s conclusions or raise questions about a discovery’s validity, says Mikail Rubinov, assistant professor of biomedical engineering, computer science and psychology at Vanderbilt University in Nashville, Tennessee, who authored the new paper. In the field of network neuroscience alone, errors of this type may affect more than half of the papers published over the past decade, or more than 3,000 studies, Rubinov estimates.
The looping analysis typically unfolds like this: A study tests a new hypothesis about a phenomenon but leaves out established facts that make up the current understanding of that phenomenon; its results look novel but actually repackage features that are already known. Because the study never tests for that redundancy, it inadvertently uses circular reasoning and presents existing knowledge as a new discovery: what is known to be a duck is rediscovered as a skvader.
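To make the fallacy concrete, here is a minimal toy sketch in Python, using the article's own skvader analogy. All feature names and values are invented for illustration and are not from Rubinov's paper; the point is only that a perfectly reasonable nearest-match test picks "skvader" when "duck," the relevant piece of prior knowledge, is never included among the candidates.

```python
# Toy illustration of the skvader fallacy: if the candidate explanations omit
# what is already known (ducks exist), even a sound goodness-of-fit comparison
# will crown the wrong "discovery." All values below are made up.

import numpy as np

# Hypothetical feature vectors: [long ears, bill, feathers, wings, fur]
templates = {
    "skvader": np.array([1.0, 0.0, 1.0, 1.0, 1.0]),
    "rabbit":  np.array([1.0, 0.0, 0.0, 0.0, 1.0]),
    "cat":     np.array([0.0, 0.0, 0.0, 0.0, 1.0]),
    "fox":     np.array([0.0, 0.0, 0.0, 0.0, 1.0]),
    "duck":    np.array([0.0, 1.0, 1.0, 1.0, 0.0]),
}

# The observed animal: a displaying goldeneye whose raised bill reads as "ears."
observation = np.array([0.9, 0.6, 1.0, 1.0, 0.1])

def best_match(obs, candidates):
    """Return the candidate whose features are closest to the observation."""
    return min(candidates, key=lambda name: np.linalg.norm(obs - candidates[name]))

# Circular analysis: prior knowledge (the duck) is left out of the comparison.
naive = {name: feats for name, feats in templates.items() if name != "duck"}
print(best_match(observation, naive))      # -> "skvader": a false discovery
print(best_match(observation, templates))  # -> "duck": the known explanation wins
```

The statistics in the comparison are not the problem; the omission is. Once the known explanation is put back into the candidate set, the "novel" finding disappears.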
The overall effect is an abundance of false discoveries, redundant explanations and needlessly complicated models that don't move the needle on our current understanding of the brain, Rubinov says. "When people present something as new, but actually it's just repackaged old knowledge, then we are not really gaining insight. We are not really gaining new understanding of how the brain works."
“[Rubinov’s paper] touches upon a very important idea,” says Linda Douw, associate professor of anatomy and neurosciences at Amsterdam University Medical Center in the Netherlands. Particularly in network neuroscience — a young, prolific field characterized by an abundance of new models and methods — “sometimes it’s difficult to really see where the things we do with the method are new, and where they are basically a repetition of things that have long been known, only with new terms attached to them,” Douw says. This isn’t always a problem on its own, she adds, but becomes a problem when researchers seem to believe their findings should replace or revise existing knowledge.
Neuroscience is still in the process of building its bedrock, and although speculative models can stretch the boundaries of ideas, they shouldn't be what you build your house on, says Mac Shine, associate professor of computational systems neurobiology at the University of Sydney in Australia. "[Rubinov's paper] is a really great clarion call for us to admit where we're at and try to do better at developing a robust theoretical foundation for our science."