Artificial intelligence can help neuroscientists track animal behavior and connect it to brain activity. With training, machine-learning tools can scan video recordings to automatically identify and track an animal’s limbs and movement.
A new labeling technique has streamlined the algorithm training process, enabling researchers to build better, more generalizable models for tracking animal movement. The approach includes the use of glow-in-the-dark dyes and code that trains deep neural networks to recognize an animal’s position and motion.
“We can generate a million samples of visually diverse training data in one afternoon,” says lead investigator Eiman Azim, associate professor of molecular neurobiology at the Salk Institute for Biological Studies in La Jolla, California.
In the past decade, neuroscience has seen a surge of movement quantification techniques. Whereas scientists used to follow motion by attaching reflective markers to specific locations on an animal’s body, deep neural networks now power tools such as DeepLabCut and Social LEAP Estimates Animal Poses (SLEAP), which can track an animal’s position without physical markers.
But such approaches falter outside of highly constrained lab settings. “These neural networks become hyperspecialized to particular visual environments,” Azim says. Any slight change to the lighting, the camera angle or the animal, and “the networks fall apart, and we have to start all over,” he says.
Starting over often entails a small army of researchers staring at computer screens for weeks at a time to manually annotate video frames with landmarks (a hand or hind leg, for example) to train their model. But Azim and his team’s new approach, GlowTrack, uses fluorescent dye and a series of computational tricks to create millions of automatically labeled frames in a single session.
The GlowTrack process starts with painting the animal with dye that fluoresces under ultraviolet light. In one test, for example, Azim's team painted a mouse's hand. They placed the mouse in a dome with a spinning platform and a range of lighting and camera-angle options to make their footage as visually diverse as possible. Then they rapidly strobed UV and visible light at hundreds of hertz (too quickly for the animals to perceive) while filming at a similarly high frame rate. The resulting recording offered pairs of nearly identical pictures: one with regular lighting and one with UV light, in which only the fluorescent hand would be visible.
For every pair of frames, an algorithm transfers the location of the glowing label from the UV image to its visible-light counterpart, generating millions of training examples from a single recording session. "It's as if a human sat there, clicked on all the visible images and said, 'That's the spot,'" Azim says. "You're taking the human out of the equation."
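The core idea can be sketched in a few lines of code. The Python snippet below is only an illustration of the label-transfer step, not the published GlowTrack implementation: it assumes the footage alternates between visible-light and UV frames, and the brightness threshold, OpenCV calls and function names are all choices made here for the sake of the example.

```python
# Illustrative sketch (not the GlowTrack codebase): transfer the location of a
# fluorescent landmark found in a UV frame onto its paired visible-light frame.
import cv2
import numpy as np

def centroid_of_fluorescence(uv_frame, threshold=200):
    """Return (x, y) of the fluorescent blob in a UV frame, or None if absent."""
    gray = cv2.cvtColor(uv_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:               # no pixels above threshold
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def auto_label(video_path):
    """Yield (visible_frame, landmark_xy) training pairs from interleaved footage."""
    cap = cv2.VideoCapture(video_path)
    ok, visible = cap.read()        # frame captured under visible light
    while ok:
        ok, uv = cap.read()         # the matching frame captured under UV light
        if not ok:
            break
        xy = centroid_of_fluorescence(uv)
        if xy is not None:
            yield visible, xy       # one automatically labeled training example
        ok, visible = cap.read()
    cap.release()
```

In effect, the UV frame plays the role of the human annotator: wherever the dye glows, that coordinate becomes the label for the ordinary-looking frame recorded a fraction of a millisecond earlier.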
The team published their findings in Nature Communications in September.
GlowTrack can also train neural networks to identify multiple points of interest on the body. To do this, researchers “make a Jackson Pollock painting” on the animal, Azim says, covering part of the body with random speckle patterns to create “visual barcodes,” or distinctive clusters on the animal that computer-vision algorithms can recognize and track across video frames.
As a proof of concept for this approach, he and his colleagues sprinkled tiny dots of UV dye on a human participant’s fingers and successfully detected several landmarks, which could then be tracked as they moved over time.
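Conceptually, tracking such a barcode comes down to detecting many small fluorescent speckles in each UV frame and matching them from one frame to the next. The sketch below is a hypothetical illustration of that idea rather than the team's pipeline; the threshold value and the greedy nearest-neighbor matching are assumptions made for the example.

```python
# Hypothetical sketch of tracking a speckle "barcode" across frames.
import cv2
import numpy as np

def speckle_centroids(uv_frame, threshold=200):
    """Find the centroids of all fluorescent speckles in a UV frame."""
    gray = cv2.cvtColor(uv_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]            # drop the background component

def match_speckles(prev_pts, curr_pts):
    """Greedy nearest-neighbor matching between consecutive frames,
    which only works because the speckles stay fixed relative to the skin."""
    matches = []
    for i, p in enumerate(prev_pts):
        distances = np.linalg.norm(curr_pts - p, axis=1)
        matches.append((i, int(np.argmin(distances))))
    return matches
```

Because each cluster of speckles forms a distinctive local pattern, the matched points can be followed over time to recover the motion of many landmarks at once.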
The visual barcode approach only works, however, if the speckles stay in place while the person — or animal — moves. Although this is possible for skin, it's not the case on fur and other textured surfaces, presenting a major limitation. Shaving the surface of interest is one possible workaround, but the team is searching for better solutions.

Although many lab scientists will likely stick with tried-and-true motion-tracking techniques in the immediate future, GlowTrack is "great for developers who are willing to generate their own large datasets," says Sam Golden, assistant professor of neuroscience at the University of Washington in Seattle, who was not involved in this project. With more development and better fluorescent dyes, he also sees "creative and useful end-user applications" for animal behavior researchers.
Mackenzie Mathis, assistant professor of neuroscience at the Swiss Federal Institute of Technology in Lausanne, Switzerland, who was not involved in the study, finds the work interesting but has questions about its broader use in animal behavior research, especially beyond the lab. Not all animals, she says, are as small and easy to handle as mice: “Imagine doing this for cheetahs.”
But Azim notes that future studies in other animals — possibly even large cats — would only need data from one animal to train a model for other members of that species.
His team envisions applications for GlowTrack, including robotics, ecology and art. “Capturing how things move is a very big challenge that applies across many fields, not just ours,” Azim says. Now that the code behind GlowTrack is available for anyone to download for free, he adds, “We’re very interested to hear from different communities about what’s useful and what’s not working.”