Santa Fe Institute – Nature of Intelligence – Complexity Wrap Up
Hopefully you have been along for the ride and have listened to all six episodes. It's been a lot to digest and a lot to think about. While the field of neuroscience has made great strides, when it comes to human intelligence there is still so much to learn, which is why I've appreciated this podcast.
And now we have AI entering the picture. Will it augment our intelligence, or surpass us to our detriment? It's a mystery. There's so much upside, yet there's also a dark side in how AI can be used by bad actors operating behind the scenes. If you missed a post, here's the full list:
- Episode 1 – What is Intelligence?
- Episode 2 – The relationship between language and thought
- Episode 3 – What kind of intelligence is an LLM?
- Episode 4 – Babies vs Machines
- Episode 5 – How do we assess intelligence?
- Episode 6 – AI’s changing seasons
Since an exploration of AI is central to this series, I asked Google's NotebookLM to summarize the key points explored across the episodes. Does its synopsis align with your impressions? First, here's the cast of characters:
- Melanie Mitchell (host) – Professor at the Santa Fe Institute working on artificial intelligence and cognitive science. In the final episode, she is interviewed about her background, views on AI, AGI, and the future of the field.
- Abha Eli Phoboo (host) – Abha is a writer and an obsessive rewriter. Interested in the arts and sciences, she explores the weak interaction between the two. A CERN Press Officer, she translates physics into English and helps scientists communicate their research to the world.
- Alison Gopnik – Professor of psychology and philosophy, member of the Berkeley AI Research group, external professor with the Santa Fe Institute, who studies how children learn.
- John Krakauer – Professor of neurology, neuroscience, physical medicine, and rehabilitation at Johns Hopkins University School of Medicine, who researches intelligence and physical movement in animals, machines, and humans.
- Ev Fedorenko – Featured in the second episode discussing the relationship between language and thought. Her work includes using fMRI brain scans to examine the relationship between language and other forms of cognition.
- Steve Piantadosi – Featured in the second episode discussing the relationship between language and thought. He provides examples of how language can make learning more efficient.
- Gary Lupyan – Featured in the second episode discussing the relationship between language and thought. He believes language is one of the major reasons for human intelligence, potentially more of a cause than a result.
- Murray Shanahan – Professor of cognitive robotics at Imperial College London and principal research scientist at Google DeepMind.
- Tomer Ullman – Psychologist at Harvard University studying computation, cognition, and development.
- Linda Smith – Chancellor’s Professor of Psychological and Brain Sciences at Indiana University, a developmental psychologist and pioneer of head-mounted camera research with infants.
- Mike Frank – Professor of psychology at Stanford, who studies how children learn and uses large datasets and new methodologies.
- Erica Cartmill – Professor of cognitive science, animal behavior, anthropology, and psychology at Indiana University, who studies cognition and communication across a wide range of species, including great apes and human children.
- Ellie Pavlick – Discusses how we assess intelligence, particularly in machines, and the challenges of applying human tests to AI. She also talks about the difficulty of understanding how LLMs work internally.
AI Summary via NotebookLM
This podcast series explores a complex question: What is intelligence? It highlights that defining intelligence is difficult and that there is no single, simple definition; it's more like a “suitcase word” packed with various capabilities. The series draws on insights from cognitive scientists, child development specialists, animal researchers, and AI experts.
Human intelligence involves many facets. It includes learning about cause and effect by experimenting and interacting with the world. Humans are good at generalizing knowledge and making analogies, applying what they learn in one situation to new ones without needing vast amounts of retraining. Common sense, which relies on innate understandings of the physical world and flexibility in thinking, is also crucial.
Language is seen as a backbone of human culture and a powerful tool for sharing information and ideas, enabling us to learn without direct experience and understand abstract concepts. There is debate, however, on whether language is a cause or a result of human intelligence, and whether language and thought are fundamentally separate or intertwined. Some evidence suggests they can be separate, at least in adults. Human intelligence also relies heavily on our social nature, drive to collaborate, and the unique role of caregiving in development.
Large Language Models (LLMs) like ChatGPT are a focus of the series. These systems are trained on enormous amounts of human-generated text data from the internet. They work by finding statistical correlations in language and predicting the most likely next word or “token”. While LLMs can produce sophisticated and sometimes creative language, there are significant differences compared to human intelligence.
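To make "predicting the most likely next word" concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the podcast): a tiny bigram model that counts which word tends to follow which in a sample sentence, then picks the most frequent follower. Real LLMs use neural networks trained on billions of tokens, but the underlying idea of predicting the next token from statistical patterns in text is the same.

```python
# Toy next-word predictor: count word-to-word transitions in a tiny corpus,
# then predict the most likely follower of a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat' and 'sofa' once each)
print(predict_next("cat"))  # -> 'sat' (ties broken by the order first encountered)
```

A model like this only "knows" word frequencies in its training text; it has no grasp of what a cat or a sofa is, which is a caricature of the understanding debate discussed next.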
LLMs learn passively from data, unlike humans who learn actively through interaction with the world. They lack an inherent drive to explore or understand the world. There is debate on whether LLMs truly “understand” language in a meaningful sense or simply know how to use words based on patterns. They also cannot engage with the world to update “beliefs” and sometimes make things up, a behavior called “hallucinating”.
Assessing the intelligence of LLMs is challenging. Applying tests designed for humans, like the SAT, might not mean the same thing for a machine. Some researchers suggest LLMs might be learning how to pass the test rather than exhibiting general reasoning ability. Understanding how LLMs actually work internally (“mechanistic understanding”) is seen as crucial but is still a nascent area of research. Some propose thinking of LLMs as sophisticated “role-players” rather than entities with beliefs or consciousness. LLMs might also be better understood as reflecting collective knowledge rather than a single agent’s intelligence.
The concept of Artificial General Intelligence (AGI), often described as human-level intelligence in machines, is discussed, but its definition remains vague and debated. The current path to building powerful AI is seen by some as unsustainable due to the immense data and energy requirements, suggesting that future AI might need to be more “embodied” and learn more like humans or animals.
Beyond theoretical fears, the series highlights real, present risks of AI, including the spread of deepfakes and disinformation, which can erode trust and make it harder to find reliable information online. The unauthorized use of human-generated data for training AI is also raised as an ethical concern.
Top Five Audience Takeaways
- Defining “Intelligence” is Surprisingly Difficult. Instead of being a simple, single thing we can measure, intelligence is like a “suitcase word” packed with many different abilities and ways of being. Researchers across various fields agree that there’s no easy, complete definition of what makes something intelligent, whether it’s a person, an animal, or a machine.
- Human Intelligence is Deeply Tied to Active Experience and Social Interaction. Humans don’t just passively absorb information; we learn by actively exploring the world, doing “little experiments,” and figuring out cause and effect. Our ability to generalize knowledge to new situations with limited examples is crucial. Furthermore, language, our drive to collaborate, and the unique role of caregiving are fundamental to how our intelligence develops and functions.
- Today’s Powerful AI, like ChatGPT (LLMs), Works Very Differently from Human Intelligence. These systems are trained on enormous amounts of text data from the internet, learning by finding statistical patterns and predicting the next word. Unlike humans, they learn passively, lack an inherent drive to explore the world, don’t have beliefs, and can sometimes “hallucinate” or make things up. While they can produce impressive language, there’s a significant debate about whether they truly “understand” in a human sense or are just very sophisticated at using patterns.
- Testing AI Intelligence Using Human Standards is Tricky. Applying tests designed for humans, like the SAT or theory-of-mind tasks, to LLMs might not accurately reflect their capabilities. LLMs might simply be learning how to pass the specific test through pattern matching from their vast training data, rather than exhibiting genuine reasoning or understanding. Understanding how these AI systems arrive at their answers – looking “under the hood” – is a crucial but difficult area of research. We also need to be mindful that our human-centric view can limit how we assess intelligence in other entities, including animals.
- Current AI Approaches Face Significant Challenges and Present Real Risks. The reliance on massive data and energy to build powerful AI systems may not be sustainable or efficient in the long run. Beyond theoretical fears about Artificial General Intelligence (AGI), there are immediate concerns like the spread of deepfakes and misinformation, which can erode trust and make finding reliable information difficult. There are also ethical questions about using vast amounts of human-generated data to train AI without permission or benefit to the creators. Some researchers suggest future AI development might need to take a different path, perhaps learning more like babies or animals, to be more sustainable and genuinely intelligent.