New software uses artificial intelligence to automatically identify and quantify certain types of mouse social behavior from videos of mice interacting in a cage — even animals with cables implanted to monitor their brain activity.
The tool, called the Mouse Action Recognition System (MARS), could accelerate research on how autism-linked genetic mutations or drug treatments affect mouse behavior, says co-lead investigator Ann Kennedy, assistant professor of neuroscience at Northwestern University in Chicago, Illinois. It could also standardize how labs characterize behaviors, because different researchers may classify the same behavior differently, she says.
MARS is part of a recent effort to develop software that can automatically analyze video of mouse behavior. In most labs, researchers annotate behaviors by hand, which can take four to five hours for every hour of video, according to Kennedy. MARS can analyze an hour of video in just two to three hours, running in the background and leaving researchers free to do other work.
The software processes footage from an overhead monochrome video camera with a lens adjusted to capture infrared wavelengths. (The experimental setup is illuminated only with red light, because mice are active at night.) The software tracks seven key points on each rodent’s body to calculate the relative postures of the two mice. From this information, it can determine whether the two animals are investigating, attacking or mounting each other, so long as the mice have different coat colors. The researchers described the system in November in eLife.
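To make the pose-to-behavior step concrete, here is a minimal sketch of the kind of relative-posture features such a tracker might compute from seven (x, y) key points per animal; the key-point names, ordering and feature choices are illustrative assumptions, not MARS’s actual feature set.

```python
# Illustrative sketch only: deriving simple relative-posture features
# from seven (x, y) key points per mouse. The key-point names, ordering
# and feature choices are assumptions, not MARS's actual feature set.
import numpy as np

KEYPOINTS = ["nose", "left_ear", "right_ear", "neck",
             "left_hip", "right_hip", "tail_base"]  # seven points per animal

def posture_features(resident: np.ndarray, intruder: np.ndarray) -> dict:
    """Each argument is a (7, 2) array of pixel coordinates for one frame."""
    res_nose, int_nose = resident[0], intruder[0]
    int_tail = intruder[6]
    res_axis = resident[0] - resident[6]  # body axis: nose relative to tail base

    return {
        # Distance between the animals' noses (face-to-face investigation).
        "nose_to_nose": float(np.linalg.norm(res_nose - int_nose)),
        # Resident's nose to intruder's tail base (anogenital investigation).
        "nose_to_tail": float(np.linalg.norm(res_nose - int_tail)),
        # Resident's heading angle, a simple orientation feature.
        "res_heading": float(np.arctan2(res_axis[1], res_axis[0])),
    }
```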
The researchers trained the software on nearly seven hours of video, including about four hours of mice undergoing a standard resident-intruder assay, in which an unfamiliar ‘intruder’ mouse is introduced into the cage of a lone resident mouse. In some of the footage, at least one of the mice had a fiber-optic cable or an endoscope attached to its skull, a setup often used to control or record the activity of neurons.
The team used the crowdsourcing service Amazon Mechanical Turk to recruit people to manually flag the animals’ body parts in each frame of the videos. The software learned to read the rodents’ poses from these key points. The researchers then ran MARS on an additional seven hours of video that had not been annotated, allowing the software to derive the animals’ postures on its own.
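As an illustration of the crowdsourcing step, one simple way to fuse several workers’ clicks into a single key-point label is a per-coordinate median; the sketch below makes that assumption and is not a description of the team’s actual aggregation procedure.

```python
# Sketch of one way to fuse several crowd workers' clicks into a single
# key-point label per frame: a per-coordinate median, which is robust to
# an occasional careless click. The array layout is an assumption; the
# team's actual aggregation procedure may differ.
import numpy as np

def aggregate_clicks(worker_clicks: np.ndarray) -> np.ndarray:
    """worker_clicks: (n_workers, 7, 2) array of (x, y) clicks for one
    mouse in one frame. Returns a (7, 2) consensus estimate."""
    return np.median(worker_clicks, axis=0)
```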
Finally, the researchers fed MARS the same footage, more than 14 hours in total, this time with individual behaviors coded by one of the researchers on the team. MARS learned to translate the animals’ poses into specific interactions: mounting, attacking or investigating.
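A hedged sketch of what this second training stage could look like in code, with a generic supervised classifier standing in for MARS’s actual model and random placeholder data in place of real pose features and annotations:

```python
# Hedged sketch of the second training stage: a generic supervised
# classifier maps per-frame pose features to behavior labels. Random
# placeholder data stands in for real features and annotations; MARS's
# actual model and feature set are richer than this.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # e.g., the posture features sketched above
y = rng.integers(0, 3, size=1000)     # 0 = investigate, 1 = attack, 2 = mount

clf = GradientBoostingClassifier().fit(X, y)
frame_labels = clf.predict(X)         # one predicted behavior per video frame
```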
Tested on about two hours of video of resident-intruder interactions, MARS flagged the key points on the mice and identified attack and investigation behaviors as accurately as the team members did; it was only about 3 percentage points less accurate at identifying mounting.
The researchers also unleashed the software on 45 hours of video capturing interactions involving mice with mutations in the autism-linked genes CHD8, CUL3 and NLGN3. The software confirmed previous results; for example, CHD8 mice showed more aggression than controls did.
The software also found that BTBR mice, an asocial inbred strain that lacks a corpus callosum, spend less time investigating intruder mice than control mice do, matching previous results. MARS could further identify which region of the intruder mouse the resident mouse was interacting with: BTBR mice spend less time inspecting the intruder’s face and genitals than controls do, perhaps because they miss pheromonal cues, Kennedy says.
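The region-level analysis can be pictured as a nearest-key-point test; in this illustrative sketch, the intruder’s nose and tail-base key points stand in for the face and anogenital regions, an assumption made for brevity.

```python
# Illustrative sketch of a body-region analysis: which part of the
# intruder is the resident's nose closest to in a given frame? Here the
# intruder's nose and tail-base key points stand in for the face and
# anogenital regions, an assumption made for brevity.
import numpy as np

def nearest_region(res_nose: np.ndarray, intruder: np.ndarray) -> str:
    """res_nose: (2,) resident nose position; intruder: (7, 2) key points."""
    d_face = np.linalg.norm(res_nose - intruder[0])      # intruder's nose
    d_genitals = np.linalg.norm(res_nose - intruder[6])  # intruder's tail base
    return "face" if d_face < d_genitals else "genitals"
```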
The software includes a user interface called BENTO that enables researchers to sync MARS-processed videos with other kinds of data captured during the rodents’ interactions, such as neuronal activity and audio. This feature revealed that a subset of 28 neurons in a male mouse’s hypothalamus became active only during the first moments of the mouse mounting a female intruder. The BENTO interface allows researchers to notice such significant moments in mouse behavior that they might otherwise miss, Kennedy says.
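The kind of alignment BENTO performs can be sketched as resampling a neural trace onto video frame times, so that behavior labels and neuronal data share a common clock; the sampling rates and signal below are placeholders, not the study’s recording parameters.

```python
# Sketch of the kind of alignment BENTO performs: resampling a neural
# trace onto video frame times so behavior labels and neuronal data share
# a common clock. Sampling rates and the signal itself are placeholders.
import numpy as np

video_fps = 30.0
frame_times = np.arange(0, 60, 1 / video_fps)   # 60 s of video frames

neural_times = np.arange(0, 60, 1 / 1000)       # 1 kHz neural recording
neural_trace = np.sin(neural_times)             # placeholder activity signal

# Linear interpolation yields one neural sample per video frame.
aligned = np.interp(frame_times, neural_times, neural_trace)
```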
Human annotators varied in how they identified behaviors, the team found, which could create problems when labs compare results, Kennedy says. Using software such as MARS creates a standard so that “you can make a real apples-to-apples comparison between different groups,” she says.
The software is available for download on GitHub. Labs can run the trained software as is, or they can train it on new key points and behaviors. Kennedy and her colleagues next plan to explore software that uses ‘unsupervised learning’ to make its own classifications based on raw data, rather than being trained.
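For labs scripting their own downstream analyses, a typical first step might resemble the sketch below, which tallies total time per behavior from a per-frame label sequence; the file name and column name are assumptions for illustration, not MARS’s documented output format.

```python
# Hypothetical downstream script: tally total time per behavior from a
# per-frame label sequence. The file name and column name are assumptions
# for illustration, not MARS's documented output format.
import csv
from collections import Counter

FPS = 30.0  # frames per second of the source video

with open("mars_output.csv", newline="") as f:
    labels = [row["behavior"] for row in csv.DictReader(f)]

seconds_per_behavior = {b: n / FPS for b, n in Counter(labels).items()}
print(seconds_per_behavior)  # e.g., {'investigate': 512.3, 'attack': 41.7}
```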