IC Colloquium: How does the brain create language?
By: Christos Papadimitriou - Columbia University
Video of his talk
Abstract
There is no doubt that all cognitive phenomena are the result of the activity of neurons and synapses, and yet there has been slow progress toward articulating a computational theory of how exactly this happens. I will introduce a simplified mathematical model of the brain, which we call NEMO, involving brain areas, spiking neurons, random synapses and weights, local inhibition, Hebbian plasticity, and long-range interneurons -- importantly, there is no backpropagation in NEMO. Emergent behaviors of the resulting dynamical system -- established both analytically and through simulations -- include stable neural representations which we call assemblies, sequence memorization, one-shot learning, and universal computation. NEMO is also a software-based neuromorphic system that can be simulated efficiently at the scale of tens of millions of neurons, emulating certain high-level cognitive phenomena, such as parsing of natural language. I will describe our recent implementation of a basic language acquisition system: a neural tabula rasa which, on input consisting of a modest number of grounded sentences in any natural language, is capable of learning a lexicon, syntax, semantics, comprehension, and generation in the same language (all terms will be defined). Finally, I will argue that experimenting with such brain-like devices, devoid of backpropagation, besides providing insights into the way the brain works, can reveal novel and complementary avenues to learning, and may end up advancing AI.
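For readers who want a concrete picture of the dynamics named in the abstract, the following is a minimal, illustrative Python simulation of the core step of the Assembly Calculus that underlies NEMO (random synapses, local inhibition modeled as k-winners-take-all, multiplicative Hebbian plasticity, and no backpropagation), as published by Papadimitriou and coauthors (PNAS, 2020). The parameter values and variable names are illustrative placeholders, not those used in the talk.

import numpy as np

rng = np.random.default_rng(0)

n, k = 1000, 32        # neurons in the area, cap size (neurons firing per step)
p, beta = 0.05, 0.10   # synapse probability, Hebbian plasticity rate
steps = 8

# Random connectivity: a fixed external stimulus of k neurons projects into
# the area, and the area connects recurrently to itself.
stim_w = (rng.random((k, n)) < p).astype(float)
rec_w = (rng.random((n, n)) < p).astype(float)

prev = np.array([], dtype=int)   # neurons that fired at the previous step
for t in range(steps):
    # Total synaptic input = stimulus drive + recurrent drive from last step.
    drive = stim_w.sum(axis=0) + rec_w[prev].sum(axis=0)
    # Local inhibition as k-winners-take-all: only the k most-driven neurons fire.
    winners = np.argpartition(drive, -k)[-k:]
    # Hebbian plasticity: strengthen synapses from neurons that just fired
    # into the neurons firing now. No gradients, no backpropagation.
    stim_w[:, winners] *= 1 + beta
    rec_w[np.ix_(prev, winners)] *= 1 + beta
    print(f"step {t}: overlap with previous step = "
          f"{len(np.intersect1d(winners, prev)) / k:.2f}")
    prev = winners

With plasticity switched on, the set of firing neurons stabilizes within a few steps; that stable set is what the abstract calls an assembly, the building block from which the sequence memorization, parsing, and language-acquisition results are constructed.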
Bio
Christos H. Papadimitriou is the Donovan Family Professor of Computer Science at Columbia University. Before joining Columbia in 2017 he taught at Berkeley for 22 years, and before that at Harvard, MIT, Athens Polytechnic, Stanford, and UCSD. He has written four widely used textbooks and hundreds of articles on algorithms and complexity and their applications to optimization, databases, control, AI and robotics, economics and game theory, the Internet, evolution, and, most recently, the brain. He was the founding Senior Scientist of the Simons Institute for the Theory of Computing. He holds a PhD from Princeton and nine honorary doctorates, including from EPFL, ETH, and the Universities of Athens and Paris Orsay; in 2014 the President of the Hellenic Republic named him Commander of the Order of the Phoenix. He is a member of the US National Academy of Sciences, the American Academy of Arts and Sciences, the National Academy of Engineering, and the Academia Europaea, and has received the Knuth Prize, the Gödel Prize, IEEE's Babbage Award, IEEE's von Neumann Medal, IEEE's Computer Pioneer Award, the INFORMS von Neumann Theory Prize, and Technion's Harvey Award. He has also written three novels: “Turing,” “Logicomix,” and, most recently, “Independence.”
More information
Practical information
- General public
- Free
Contact
- Host: Ola Svensson