There are many ways neuroscience could end. Prosaically, society may just lose interest. Of all the ways we can use our finite resources, studying the brain has only recently become one; it may one day return to dust. Other things may take precedence, like feeding the planet or preventing an asteroid strike. Or neuroscience may end as an incidental byproduct, one of the consequences of war or of thoughtlessly disassembling a government or of being sideswiped by a chunk of space rock.
We would prefer it to end on our own terms. We would like neuroscience to end when we understand the brain. Which raises the obvious question: Is this possible? For the answer to be yes, three things need to be true: that there is a finite amount of stuff to know, that stuff is physically accessible and that we understand all the stuff we obtain. But each of these we can reasonably doubt.
The existence of a finite amount of knowledge is not a given. Some arguments suggest that an infinite amount of knowledge is not only possible but inevitable. Physicist David Deutsch proposes the seemingly innocuous idea that knowledge grows when we find a good explanation for a phenomenon, an explanation whose details are hard to vary without changing its predictions and hence breaking it as an explanation. Bad explanations are those whose details can be varied without consequence. Ancient peoples attributing the changing seasons to the gods is a bad explanation, for those gods and their actions can be endlessly varied without altering the existence of four seasons occurring in strict order. Our attributing the changing seasons to the Earth’s tilt in its orbit of the sun is a good explanation, for if we omit the tilt, we lose the four seasons and the opposite patterns of seasons in the Northern and Southern hemispheres. A good explanation means we have nailed down some property of the universe sufficiently well that something can be built upon it.
Deutsch points out that, as a consequence, good explanations must always create new problems. A good explanation creates an inevitable “why” question: Why are these details hard to vary; or why are these details the way they are? For Deutsch, this means we will never run out of problems. Running out would imply the existence of an ultimate explanation. But that cannot exist in a good version: because either the ultimate explanation can be varied and is therefore bad (the gods made it so), or it is good and so cannot explain why that version is true and no other. Thus there is an infinite amount of stuff we could know, in principle.
Let’s say we don’t buy that argument: We intuitively believe there is a finite amount of stuff to know, and so a complete understanding of the brain remains in our grasp—so long as that stuff is physically accessible. But we have good reason to think it is not.
The physics of our universe places strong constraints on what we can know. Consider that we would love to observe what’s happening anywhere in the universe, but we can’t because the speed of light, though mind-bendingly fast, is finite. We can only observe a radius of the universe around us that is limited by the distance light has traveled since photons were first emitted after the Big Bang. The universe exists beyond this light horizon, but we can never directly observe it. Such impossibility is an ever-present threat to science. The impossibility of access to the necessary spatial scales or dimensions is what has turned string theory, loop quantum gravity, and other theories attempting to reconcile quantum mechanics and general relativity into more speculation than science.
Such physical impossibilities may come into play as we strive to understand the brain. Let’s say we figured out that a full causal account linking neural activity to immediate behavior required the individual and simultaneous recording of some large fraction of the neurons in the human brain, while at the same time stimulating some other large, overlapping set of neurons. If optical imaging and optical stimulation turned out to be the only way, in principle, of achieving this resolution of recording and stimulation, we would likely hit impossibility.
Consider the sheer number of photons we would need. The scattering of so many photons as they pass through the brain’s tissue likely means a signal-to-noise ratio too low to resolve the activity of so many individual neurons. And more photons means more heating, evoking or silencing the firing of neurons and changing the state of the brain from what we needed to observe. Indeed, with imaging at this scale, changing the brain’s state is likely inevitable: Bringing light and lenses to deeper structures would damage many neurons. We could be aware of exactly what we need to know about the brain to understand it, yet simply unable to access it.
Let’s play the optimist: Perhaps human ingenuity will find a way around the limits to obtaining all knowledge of the brain. Then all we have to do is understand that knowledge.
Philosophers of mind have long questioned our ability to truly understand the human brain. Thomas Nagel famously asked us to imagine what it’s like to be a bat, to have leathery wings and sonar navigation, and argued that we, of course, cannot know the bat’s conscious experience of the world. For him it followed that, if we cannot describe the subjective experience of a bat, then we cannot create an objective description of conscious experience, independent of species, because we can never know what we are trying to objectively measure. If we cannot do that, aspects of understanding the brain will remain inaccessible to us.

Colin McGinn took this further and argued that a human brain can never understand human consciousness, even in principle, because our minds are bound by our perceptual capacity and so are cognitively closed to the concepts needed to understand human consciousness, like an armadillo trying to understand math: Math exists and is a real property of the world, but the armadillo will never grasp it, no matter how much evidence it gathers or how diligently it studies. Indeed, neuroscience faces a unique problem: It is the only field of research in which the researcher’s understanding is generated by the very thing it is trying to understand. Make of these arguments what you will, but there are grounds to believe we will not be able to understand the knowledge we have gained about the brain, even if it is complete.
We began with the idea that we can truly understand the brain if three things turn out to be true: There is finite knowledge about the brain; we have access to that knowledge; and we have the means to understand it. The above arguments cast doubt on each of these things. We might capture that doubt by assigning probabilities to each of these things being true, so that the probability of neuroscience ending with an understanding of the brain is simply P(finite knowledge) × P(accessible knowledge) × P(understandable knowledge). Written like that, we see the standard trap of compound probability: Each condition can be highly likely, yet the probability of all three conditions being met is still unlikely. For example, if we assigned a maximum probability of 0.79 to each, the probability of all three being true (0.79 × 0.79 × 0.79 ≈ 0.49) will always be less than half—it is more probable neuroscience will not end on our terms.
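The compound-probability arithmetic above can be checked with a minimal sketch; the 0.79 figures are the essay’s illustrative values, not measured quantities, and the independence of the three conditions is an assumption of the toy model:

```python
def p_understanding(p_finite: float, p_accessible: float, p_understandable: float) -> float:
    """Joint probability that all three independent conditions hold:
    finite knowledge, accessible knowledge, understandable knowledge."""
    return p_finite * p_accessible * p_understandable

# Even when each condition is individually likely (0.79),
# the chance that all three hold falls below one half.
p = p_understanding(0.79, 0.79, 0.79)
print(round(p, 3))  # prints 0.493
```

The point generalizes: the joint probability can only shrink as conditions multiply, so any cap below roughly 0.794 on each of three conditions already pushes the product under 0.5.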
That would mean one of three possible outcomes, depending on which was not true:
- Neuroscience will never end, because knowledge is infinite.
- It will end, but we won’t fully understand the brain because of the physical limits to what we can access.
- It will end, and we will have accessed everything we need in principle to understand the brain, but we simply lack the ability to understand it.
In none of these do we get a complete understanding of the human brain. But perhaps that was never the goal. A reasonable alternative might be grasping the link between brain activity and behavior sufficiently well to fix it when it goes wrong. We can achieve that with prediction, without complete understanding: We can predict behavior from activity and vice versa; we can predict the effects of interventions on both.
If nothing else, the past decade of artificial intelligence has shown us that our ability to predict, and predict well, has far outstripped our ability to understand, from face recognition to language generation. Our ability to predict gives us every reason to think that one day neuroscience will know how to fix or prevent any brain malfunction, should we desire to. And that’s as good an ending as any.