Letting go: An overreliance on controlled experimental designs might hamper our understanding of the neural basis of natural behaviors and the circuits that regulate them.
‘Natural Neuroscience: Toward a Systems Neuroscience of Natural Behaviors,’ an excerpt
In his new book, published today, Nachum Ulanovsky calls on the field to embrace naturalistic conditions and move away from overcontrolled experiments.
Courtesy of MIT Press
To illustrate the gap between natural behaviors in the wild and behavior as studied in the laboratory, I will start with an example: visual scene analysis in jumping spiders. This book is not going to focus on spiders, bats, or other nonstandard model species—in fact, it will focus mostly on the standard mammalian models (rodents, monkeys, and humans)—but nevertheless, it is useful to start with an example from a more humble creature to illustrate the magnitude of the problem.
Jumping spiders are highly visual, with a pair of large frontal eyes, which they use to stalk their insect prey and then rapidly attack it from a close distance. They were shown to be able to plan a complex, three-dimensional (3D) route to their prey (figure 1.1a). For example, the jumping spider may climb down from its perch and then climb up another branch and make several correct choices at branch bifurcations—even on curved branches that meander in 3D, one behind the other—and it does all of this in the face of visual occlusions by branches and against the background of complex visual clutter that includes moving leaves that can easily obscure the little fly that the spider is hunting. Remarkably, while the spider is on the move, it seems to know at all times the 3D direction to its target—even when the target is occluded!—as evident when it stops and turns its body to face the 3D direction of the target.
Figure 1.1: (a) The challenges of real-world visual scene analysis facing a jumping spider. (b) Example of visual scene analysis as studied in the laboratory. To identify the circle in this image, the viewer needs to group together the short lines forming the foreground circle and segregate them from the short lines in the background (distractors). (c) Examples of classical stimuli that are commonly used in visual neuroscience research are short lines (left) and Gabor patches (right).
Such feats of 3D spatial perception, spatial memory, and sophisticated 3D route planning are performed daily by little jumping spiders using their tiny brains. In other cases, the spider may spot a potential mate and must identify that it belongs to the same species; if a female approaches a male, she will carefully inspect his courtship dance, integrating the visual input with the vibrations created by his dance, to decide whether he is a suitable mate. To achieve all of this, the spider must solve a set of difficult problems: it needs to construct an internal representation of the 3D layout of the environment, separate foreground from background and clutter, identify a target, decide whether it wants to attack the target or mate with it, plan and execute a complex route in 3D, and integrate over time, while on the move, the visual information about the 3D scene and the target location, all while actively scanning the complex visual scene with its eyes. Very few of these astonishing behaviors are studied in their full complexity within neuroscience laboratories, whether in spiders or in mammals.
This is but one example of the kinds of real-world scene analysis problems that animals must solve daily. But how do we typically study scene analysis in the laboratory? In most cases, two-dimensional (2D) simple drawings are used, where subjects need to perform segregation and grouping in the spirit of the Gestalt school of psychology (figure 1.1b). Such laboratory stimuli are studied both psychophysically and at the level of neuronal processing. Notably, these are relatively complex stimuli compared to the classical simple stimuli that are commonly used in visual neuroscience, such as isolated simple lines, dots, or Gabor patches (figure 1.1c). Just inspecting figure 1.1 clearly illustrates the gap: figures 1.1b and 1.1c are not truly complex, are not naturalistic, and are not even close to the real-world complexities and problems facing the spider, as illustrated in figure 1.1a. And this gap is even worse in mammals, with their highly evolved brains and more sophisticated behaviors. This book aims to start closing this gap: to demonstrate that it is crucial to study the brains of animals and humans as they perform ever more naturalistic behaviors.
Research in behavioral neuroscience can be classified qualitatively along two axes (figure 1.2): how controlled the behavior is (uncontrolled→controlled axis) versus how natural the behavior is (artificial→natural axis). Historically, the large majority of experiments in neuroscience until the 1990s (and to some degree, well into the 2000s) were conducted either in vitro (in slices, cultures, or isolated ganglia) or in vivo under anesthesia or in artificial, head-fixed conditions. All these experiments occupied the upper-left quadrant of “controlled but artificial” in this conceptual 2D diagram (figure 1.2; see dots at the top left). Research in behavioral neuroscience also followed this controlled approach: each study focused on one particular behavior of interest, and the experiments usually were done in highly controlled and restricted conditions and often were only loosely related to natural behavior. There were notable exceptions, of course, such as the seminal recordings of hippocampal place cells, pioneered by John O’Keefe in the 1970s, where neurons were recorded in freely moving rats that explored the environment of their own volition—a highly natural behavior for rats—but these experiments were an exception to the rule. The rule was: “The more controlled, the better.”
All active scientists, myself included, endorse the crucial importance of careful empirical observation and systematic experimentation, which are essential for building a broad knowledge base of facts that can then be used to refute existing theories and generate new ideas. However, we must stop and ponder the accompanying notion, that of controlled experiments, where the aim is to focus on just one particular cause of the phenomenon at hand while fixing (or eliminating) all other causes. Is it indeed true that breaking phenomena into their parts, as advocated by Descartes, always leads to better understanding? Is it always true that fixing or eliminating all factors except one—and repeating this process one by one for all the contributing factors—is the best approach for understanding the phenomenon?
In some branches of physics, this reductionist approach is clearly appropriate. For example, if our hypothesis is that the pressure of a gas in a container is influenced separately by the temperature of the gas and by the volume of the container, then we should conduct experiments where we fix the volume and vary the temperature and other experiments where we fix the temperature and vary the volume. By measuring the pressure in both sets of experiments, we can elucidate the separate contribution of each of these well-controlled factors: temperature and volume. However, the key question is: Are there counterexamples? Specifically, can we find natural phenomena whose causes are so complex, coupled, and interwoven that they cannot be separated—that is, where the whole cannot be broken into parts? Even in physics, the answer is clearly “yes,” as exemplified by condensed-matter physics and statistical physics—where the fundamental physical properties of microscopic particles cannot explain many macroscopic phenomena, such as superconductivity and superfluidity, the properties of glasses and of ordinary crystals, and a whole host of phase transitions.
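The one-factor-at-a-time logic described above can be made concrete with a minimal sketch, using the ideal gas law (P = nRT/V) as a stand-in for the phenomenon under study. The function and the particular temperatures and volumes below are illustrative choices, not from the book:

```python
# One-factor-at-a-time experimental design, illustrated with the ideal gas law.
# When the causes are separable, fixing one factor and varying the other
# cleanly reveals each factor's contribution to the measured pressure.

R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, temp_k, volume_m3):
    """Ideal gas law: P = nRT / V (pressure in pascals)."""
    return n_mol * R * temp_k / volume_m3

# Experiment 1: fix the volume, vary the temperature.
fixed_volume = 0.1  # m^3
p_vs_t = [pressure(1.0, t, fixed_volume) for t in (200, 300, 400)]

# Experiment 2: fix the temperature, vary the volume.
fixed_temp = 300  # K
p_vs_v = [pressure(1.0, fixed_temp, v) for v in (0.05, 0.1, 0.2)]

# With one factor held constant, the other's effect is fully interpretable:
# pressure rises linearly with T at fixed V, and halves when V doubles at fixed T.
print(p_vs_t)
print(p_vs_v)
```

The point of the contrast that follows is that this strategy works precisely because temperature and volume contribute separably; for emergent phenomena, no such clean factorization exists.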
Figure 1.2: Schematic illustration of different experimental styles in systems neuroscience and behavioral neuroscience, displayed as a scatterplot with two axes: the uncontrolled–controlled axis and the artificial–natural axis. VR, virtual reality; fMRI, functional magnetic resonance imaging; IT, inferior temporal.
All of these are examples of emergent phenomena, where the whole cannot be broken into parts. As eloquently put by the eminent physicist Philip Anderson: “It seems inevitable to [believe] an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws [namely, particle physicists and astrophysicists]. The main fallacy of this kind of thinking is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.” The failure of the constructionist type of reductionism and the prevalence of emergent phenomena are especially prominent when studying brain and behavior—and, in particular, higher brain functions. One cannot construct complex behavior and cognition by examining the detailed biophysical mechanisms of individual neurons, channels, or molecules. As phrased by Gomez-Marin and Ghazanfar: “The current approach to behavior and its mechanisms could be characterized as the “Frankenstein error,” or the failure of the principle that what can be taken apart can be put back together again.” It is generally impossible to break down a complex neuronal or behavioral phenomenon by trying to isolate and tightly control its constituents one by one, and then meaningfully reconstruct and explain the phenomenon based on the underlying constituents.
A corollary of this is that when studying the neural basis of complex behaviors, one should not necessarily aspire to perform highly controlled experiments as classically done in neuroscience; rather, experiments should be made more naturalistic, focusing on the neural basis of natural behaviors. Of course, some amount of control is always needed, in any experiment, because all experiments, by definition, involve controlling and manipulating something. But what do we wish to control? How much? And what do we lose if we control too much? In what follows, I will argue that we should move away from overcontrolled experiments and allow animals and humans to behave more naturally.