Large language models
Recent articles
How artificial agents can help us understand social recognition
Neuroscience is chasing the complexity of social behavior, yet we have not answered the simplest question in the chain: How does a brain know “who is who”? Emerging multi-agent artificial intelligence may help accelerate our understanding of this fundamental computation.
The BabyLM Challenge: In search of more efficient learning algorithms, researchers look to infants
A competition that trains language models on relatively small datasets of words, closer in size to what a child hears up to age 13, seeks solutions to some of the major challenges of today’s large language models.
‘Digital humans’ in a virtual world
By combining large language models with a modular cognitive-control architecture, Robert Yang and his collaborators have built agents capable of grounded reasoning at a linguistic level. Striking collective behaviors have emerged.
Are brains and AI converging?—an excerpt from ‘ChatGPT and the Future of AI: The Deep Language Revolution’
In his new book, to be published next week, computational neuroscience pioneer Terrence Sejnowski tackles debates about AI’s capacity to mirror cognitive processes.
Explore more from The Transmitter
Michael Shadlen explains how theory of mind ushers nonconscious thoughts into consciousness
All of our thoughts, mostly nonconscious, are interrogations of the world, Shadlen says. The opportunity to report our answers to ourselves or others brings a thought into conscious awareness.
‘Peer review is our strength’: Q&A with Walter Koroshetz, former NINDS director
In his first week off the job, the former National Institute of Neurological Disorders and Stroke director urges U.S. scientists to remain optimistic about the future of neuroscience research, even if the executive branch “may not value what we do.”
Viral remnant in chimpanzees silences brain gene humans still use
The retroviral insert appears to inadvertently switch off a gene involved in brain development.