A leading neurolinguist explains linguistic theory and large language models, the top contenders for understanding human language, and evaluates them in the context of the brain.
Contemporary linguistics, founded and inspired by Noam Chomsky, seeks to understand the hallmark of our humanity: language. Linguists develop powerful tools to discover how knowledge of language is acquired and how the brain puts it to use. AI experts, using vastly different methods, create remarkable neural networks, large language models (LLMs) such as ChatGPT, said to learn and use language like us.
Chomsky called LLMs "a false promise." AI leader Geoffrey Hinton has declared that "neural nets are much better at processing language than anything ever produced by the Chomsky School of Linguistics."
Who is right, and how can we tell? Do we learn everything from scratch, or could some knowledge be innate? Is our brain one big network, or is it built out of modules, language being one of them?
In How Deeply Human Is Language?, Yosef Grodzinsky explains both approaches and confronts them with the reality that emerges from the engineering, linguistic, and neurological record. He walks readers through the vastly different methods, tools, and findings of these fields. Aiming to find a common path forward, he describes the conflict but also locates points of potential contact, sketching a joint research program that may unite these communities in a common effort to understand knowledge and learning in the brain.