Mistaking a duck for a skvader: How a conceptual form of circular analysis may taint many neuroscience studies

These logical loops are harder to spot than circularity involving noise in the data, but they result from neglecting something closer to home: existing knowledge about the brain.

Circular analysis: From a certain angle, a duck with its head thrown back resembles a mythical feathery rabbit, or skvader.
Kevin Sawford / Getty Images

Picture a male goldeneye duck in the midst of its signature courtship move: its head thrown back, its bill open and pointed up. From a certain angle, it looks like a rabbit, albeit a feathery one. If you were to describe this ambiguous display — but forget the duck-rabbit resemblance or that ducks exist at all — you might conclude that you had discovered a skvader, the elusive winged hare of Swedish myth.

You could run all sorts of sound statistical tests to show that a photo of the animal, pixel by pixel and in terms of all its features, matches a skvader better than, say, a rabbit, a cat or a fox. Based on that analysis, you might publish your findings as a novel discovery — and be completely wrong.

A logical flaw like this is surprisingly common in neuroscience studies, argues a paper published 28 November in eLife. The error might not always be as easy to spot as the duck display above, but it shares the same origins: neglecting an important piece of prior knowledge about the question at hand.

This problem, dubbed a “circular analysis of knowledge,” can distort a study’s conclusions or raise questions about a discovery’s validity, says Mikail Rubinov, assistant professor of biomedical engineering, computer science and psychology at Vanderbilt University in Nashville, Tennessee, who authored the new paper. In the field of network neuroscience alone, errors of this type may affect more than half of the papers published over the past decade, or more than 3,000 studies, Rubinov estimates.

The looping analysis typically looks like this: A study that tests a new hypothesis about a phenomenon, but leaves out existing facts that constitute the current understanding of that phenomenon, produces results that look novel but actually repackage already known features. By not testing for that redundancy, the study inadvertently uses circular reasoning and presents existing knowledge as a new discovery — what is known to be a duck is rediscovered as a skvader.

The overall effect is an abundance of false discoveries, redundant explanations and needlessly complicated models that don’t move the needle on the current state-of-the-art understanding of the brain, Rubinov says. “When people present something as new, but actually it’s just repackaged old knowledge, then we are not really gaining insight. We are not really gaining new understanding of how the brain works.”

“[Rubinov’s paper] touches upon a very important idea,” says Linda Douw, associate professor of anatomy and neurosciences at Amsterdam University Medical Center in the Netherlands. Particularly in network neuroscience — a young, prolific field characterized by an abundance of new models and methods — “sometimes it’s difficult to really see where the things we do with the method are new, and where they are basically a repetition of things that have long been known, only with new terms attached to them,” Douw says. This isn’t necessarily a problem in itself, she adds, but it becomes one when researchers seem to believe their findings should replace or revise existing knowledge.

Neuroscience is still in the process of building a bedrock, and although speculative models can stretch the boundaries of ideas, they shouldn’t be what you build your house on, says Mac Shine, associate professor of computational systems neurobiology at the University of Sydney in Australia. “[Rubinov’s paper] is a really great clarion call for us to admit where we’re at and try to do better at developing a robust theoretical foundation for our science.”

Fur and feather fake: Around 1890, a newspaper in Norrland, Sweden, ran a story about a skvader reportedly shot near Sundsvall. Around 1915, the taxidermist Rudolf Granberg mounted a “skvader” from the skins of a capercaillie hen and a mountain hare; the mount is still on display in the Cultural Museum of Medelpad in Sundsvall.
Courtesy of Gösta Knochenhauer

Circular analyses involve methods — from data selection criteria to the choice of tests — that influence or predetermine the results. A familiar form of circular reasoning, previously shown to be widespread in research, involves mishandling noise in the data, leading to distorted or false findings. Circular analysis of knowledge, on the other hand, can lead to redundant results and a false sense of a finding’s importance.

It is also often inconspicuous, Rubinov says. For example, a study may propose that particular activity patterns in the visual cortex contain internal representations of observed stimuli and that the brain interprets their meaning much like an artificial neural network decodes input images. Or, a study may propose that oscillatory neural activity underpins the visual system’s ability to integrate bits of input into a holistic perception.

But activity patterns in the visual cortex may also simply reflect interactions between the visual and motor systems, Rubinov points out. And oscillatory activity could be a byproduct of the interplay between inhibitory and excitatory neurons. In other words, both studies are centered on a feature that could be redundant with something more basic that is already known about the visual cortex. So a study proposing additional roles for these neural signals needs to show that they have significance above and beyond known phenomena.

The problem, Rubinov argues, is that a study often tests for all the important confounders except for the existing knowledge. It might, for example, check the oscillatory signal against noisy or non-oscillatory signals, but not against a model that includes the balancing activity of inhibitory and excitatory neurons. Researchers might take passing this test as confirmation of their speculative model about the importance of oscillations in perception. But their analysis is circular because the hypothesis is tested against a strawman — a model missing the relevant basic features — ensuring its success.

To avoid circularity, Rubinov proposes comparing speculative models to “benchmark models” that represent all the accepted facts and features relevant to the system under study. The practice is similar to randomized controlled trials in medical research, he says. “We can come up with any model we want. But then if we want to actually show that this model is successful, we have to go ahead and rigorously try to reject it.”

But first, such benchmark models need to be built, starting with aggregating the scattered knowledge amassed about each system. That daunting task requires extensive community effort but is ultimately necessary for real progress, Rubinov says. “I think it is good to build the strong models as benchmarks, and I think it’s also good for the field to try to test and reject speculative models that do not stand up to the rigorous tests — and then use that to make real progress.”
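The logic is easy to sketch in code. The toy example below is purely illustrative — it is not from Rubinov’s paper, and every variable, feature and number in it is made up — but it shows the pattern he warns about: a “novel” feature handily beats a noise-only straw-man model, yet adds essentially nothing over a benchmark built from an already-known feature.

```python
# Illustrative sketch only: a speculative model should count as a discovery
# only if it predicts held-out data better than a benchmark model built
# from already-known features, not just better than a noise-only straw man.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neural activity": generated entirely by a known mechanism.
n = 500
known_feature = rng.normal(size=n)                               # already-established explanation
novel_feature = 0.9 * known_feature + 0.1 * rng.normal(size=n)   # largely redundant "new" feature
y = 2.0 * known_feature + rng.normal(scale=0.5, size=n)          # observed activity

def cv_error(X, y, k=5):
    """k-fold cross-validated mean squared error of an ordinary least-squares fit."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return float(np.mean(errs))

ones = np.ones((n, 1))
strawman    = cv_error(ones, y)                                             # noise-only null
benchmark   = cv_error(np.column_stack([ones, known_feature]), y)           # known mechanism
speculative = cv_error(np.column_stack([ones, novel_feature]), y)           # proposed "discovery"
combined    = cv_error(np.column_stack([ones, known_feature, novel_feature]), y)

print(f"straw-man null  : {strawman:.3f}")
print(f"benchmark       : {benchmark:.3f}")
print(f"speculative     : {speculative:.3f}")
print(f"benchmark + new : {combined:.3f}")
# Beating the straw man is easy; the telling comparison is whether adding the
# "novel" feature improves on the benchmark at all — here it barely does.
```

In this contrived setup the speculative model looks impressive against noise, but the benchmark comparison reveals it as a repackaged version of what was already known — a skvader that was a duck all along.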

Some cases may be exempt from this practice, Douw notes, either because it’s practically impossible to benchmark some types of experimental findings or because the study has a different goal — for example, describing a known brain characteristic from a different perspective or relating it to a clinical outcome. In these situations, more nuanced wording and careful interpretation could be an adequate approach, “as long as we don’t oversell the type of insights the paper is bringing,” Douw says. “There are ways to communicate the findings without making it sound like you’re re-inventing the wheel.”

And redundant explanations can still have merit, Shine says. “We’re in some ways finding many different ways to look at the system that are coherent with one another.” The real frustration is that the field as a whole rewards novelty over deep understanding, he adds. “The deep understanding is often more painstaking, slow, and less exciting.”

Benchmarking is a difficult and slow process, Rubinov agrees, adding that the field should be more accepting of null results and recognize their contribution to the process. “We don’t have a God-given right to make a discovery every year.”
