Neuroimaging research practices and methods evolve at remarkable speeds each year. These advances—which have been driven in large part by an influx of computer programmers and statisticians to the field over the past 15 years—push the boundaries of what we can learn about the human brain. But they also run the risk of leaving some people behind. Not everyone in the field possesses advanced technical skills, and a lack of programming expertise might make it difficult for some researchers to adopt new tools. Just keeping up with each new improvement or level of technological sophistication can be overwhelming for established neuroimaging experts and newcomers alike.
Take hyperalignment, for example. This relatively new machine-learning technique uses functional MRI (fMRI) signals to classify what a person experiences during testing, such as what part of a movie they watched. The method aligns each study participant’s fMRI signal in a way that is unaffected by individual variation in brain shape and size, making it significantly more accurate than traditional machine-learning approaches. But it is a complex technique that requires, among other things, familiarity with machine learning and principal component analysis, as well as basic Python coding skills. Though some neuroscientists may understand the concept of hyperalignment, applying it to their own data may present challenges that aren’t easily solved by consulting an online repository of sample code. And without the right guidance, researchers might have difficulty interpreting experimental outputs and dealing with software compatibility issues.
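To make the idea concrete, the core alignment step can be sketched in a few lines of Python. This is a toy illustration with simulated data, not a full hyperalignment pipeline (real implementations iterate over many subjects and build a shared representational space); the variable names and noise level here are invented for the example:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)

# Toy fMRI response matrices: 100 time points x 50 voxels per subject.
reference = rng.standard_normal((100, 50))

# Simulate a second subject whose responses are a rotated version of the
# reference plus noise -- a stand-in for individual variation in how the
# same stimulus maps onto each person's voxels.
rotation = np.linalg.qr(rng.standard_normal((50, 50)))[0]
subject2 = reference @ rotation + 0.1 * rng.standard_normal((100, 50))

# Solve for the orthogonal transform that best maps subject2 onto the
# reference -- the Procrustes step at the heart of hyperalignment.
R, _ = orthogonal_procrustes(subject2, reference)
aligned = subject2 @ R

# After alignment, the mismatch between subjects shrinks dramatically,
# which is what lets a classifier trained on one subject generalize.
print(np.linalg.norm(aligned - reference)
      < np.linalg.norm(subject2 - reference))
```

Before alignment, the two subjects’ matrices look almost unrelated; after the Procrustes rotation, they agree up to the simulated noise, which is why classifiers trained in the shared space transfer across people.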
For the field of neuroimaging to move forward, we need to make sure new tools are easy for everyone to use. This way we can bridge the gap between the computationally savvy and those who have less programming experience but good ideas for experiments. Fortunately, many researchers have recognized this need and are working to develop resources to make new neuroimaging tools more intuitive and more accessible than ever.
One outstanding recent example of these efforts is Neurodesk, a suite of programs used for neuroimaging data analysis. Neurodesk sits inside a container, a software package that bundles everything it needs to run and does not depend on anything else being installed on your computer. I can attest to its effectiveness, having used it at a neuroimaging workshop I led at Vanderbilt University in September. In previous workshops, I spent considerable time helping individual students debug software installation and computer issues. This time, more than 80 attendees ran the required examples and code within Neurodesk without any problems, making the software and concepts much easier to use and understand. Programs such as Neurodesk are becoming more popular, particularly because many data acquisition and analysis tools depend on software packages developed for different operating systems, which can be challenging to integrate with one another.

Another useful development for newcomers to computer programming is Jupyter Notebooks, open-source, web-based documents that can run Python code within a web browser, making the code more portable and easier to read across independent laboratories. Since their development about a decade ago, they have been quickly adopted by many scientific fields, especially neuroimaging. They also make learning a programming language much easier and more intuitive: you can download someone else’s Notebook from a webpage and run blocks of code in a stepwise fashion. The Jupyter Notebooks written by Luke Chang and other instructors at Dartmouth College, for example, provide an excellent introduction to hyperalignment. Furthermore, editing and running these Notebooks within an environment such as Neurodesk keeps software compatibility and version issues to a minimum.
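The container idea itself is simple to see in miniature. A hypothetical Dockerfile — not Neurodesk’s actual recipe, which bundles far more (FSL, FreeSurfer, and dozens of other packages) — might pin a Python version and the analysis libraries so that every workshop attendee runs the identical environment:

```dockerfile
# Hypothetical minimal container for a Python neuroimaging analysis.
# The base image and pinned versions below are illustrative assumptions.
FROM python:3.11-slim

# Pin the analysis libraries so every user gets identical versions.
RUN pip install --no-cache-dir nibabel==5.2.1 numpy==1.26.4

# Copy the analysis script into the image and run it by default.
COPY analyze.py /app/analyze.py
WORKDIR /app
CMD ["python", "analyze.py"]
```

Because the container carries its own operating-system layer and dependencies, the same image runs identically on a student’s laptop, a lab workstation, or a cluster — which is exactly what eliminated the installation debugging described above.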
Lastly, neuroscientists can tap online tutorials to get a grasp on the latest software advances in the field. Though there is no substitute for a full-length semester course on a given topic or software package, many students simply do not have access to high-quality instruction at their institution in every conceivable branch of neuroimaging and neuroscience. Some universities specialize in machine learning, for example. Others have a high concentration of faculty who use functional connectivity in their research.
To benefit those students and others, more scientists are creating tutorials on YouTube that give an “over-the-shoulder” look at how to perform specific analysis steps. My YouTube channel, for example, features lessons on how to use many different neuroimaging software packages. Jeanette Mumford’s channel provides overviews of statistical concepts and how they apply to neuroimaging. Video-centric journals such as the Journal of Visualized Experiments (JoVE) are another great resource, showing researchers exactly how to implement different techniques. In one of my personal favorite articles, Jeffrey Phillips and his colleagues show how to analyze a diffusion dataset in DSI Studio and visualize the results of high-resolution tractography.
These kinds of educational resources are becoming more widespread across all fields of science, and particularly in neuroimaging, which has a heavy computer-programming component. By broadening both access to and ease of use for new neuroimaging tools, we can collectively help raise the everyday researcher’s general level of technological skill. Doing so will enable more researchers in the neuroimaging community to fulfill their experimental ambitions and help the field flourish.