Nature of Intelligence – Episode One – What is Intelligence?

I tend to think of storytelling as sitting at the intersection of four elements:

  • Consciousness — awareness of self, the environment, and our thoughts
  • Intelligence — ability to learn, understand, reason, and solve problems
  • Imagination — create mental images, ideas, or concepts beyond reality
  • Creativity — generate original ideas, solutions, and artistic expressions

They’re different terms, of course, yet you can see how they interact with each other. It’s also apparent that they’re involved in the process of creating stories. They’re so fundamental, in fact, that they go a long way towards describing what makes us human. But the funny thing is, science doesn’t know how to accurately define any of these concepts.

While thousands of hours have been spent seeking answers, and scientists can talk for days on end about their findings, it is still a mystery. Take Shakespeare, for example. How did he utilize these aspects of humanity to create something as magical as Hamlet? And if we can’t properly describe one of these elements, how do we explain how they work together? And extending beyond us mortals, will AI ever be able to replicate this magic?

So when I ran across the third season of Santa Fe Institute’s Complexity podcast, which is devoted to the exploration of Intelligence, I had to listen in, and if you’re interesting in how we create stories in our head, I recommend you do the same, as it looks at the concept of intelligence through a human lens, as well as from the lens of artificial intelligence.

17th Century Playwrite in England
There’s so much information in this first episode, but I wanted to share four quotes that intrigued me. First off is this notion of “common sense”. It seems simple, but again, it’s illusive to capture in words. How would you describe it?

Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired. ~ Melanie Mitchell

This notion of an equivalent phenomenon describes much of the human / AI debate, as there is a sense that a machine will never be human, but maybe it can be close enough.

I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. ~ John Krakauer

This goes back to the reality that we don’t know what makes humans human, so how are we to compare a computer algorithm to what it means to be us?

I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence. ~ Alison Gopnik

But we’re more than thinking animals. We have emotions. Fall in love, feel pain, express joy and sorrow. Or in this case, grief. Computers are learning how to simulate emotions such as grief, but is that even possible?

I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery. ~ John Krakauer

So here goes, take a listen to Episode 1 and see what you think. The transcript is below if you feel so inclined (as I did) to follow along. It’s some heady stuff.

Transcript

Alison Gopnik: It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question.

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

Abha: Today’s episode kicks off a new season for the Complexity podcast, and with a new season comes a new theme. This fall, we’re exploring the nature and complexity of intelligence in six episodes — what it means, who has it, who doesn’t, and if machines that can beat us at our own games are as powerful as we think they are. The voices you’ll hear were recorded remotely across different locations, including countries, cities and work spaces. But first, I’d like you to meet our new co-host.

Melanie: My name is Melanie Mitchell. I’m a professor here at the Santa Fe Institute. I work on artificial intelligence and cognitive science. I’ve been interested in the nature of intelligence for decades. I want to understand how humans think and how we can get machines to be more intelligent, and what it all means.

Abha: Melanie, it’s such a pleasure to have you here. I truly can’t think of a better person to guide us through what, exactly, it means to call something intelligent. Melanie’s book, Artificial Intelligence: A Guide for Thinking Humans, is one of the top books on AI recommended by The New York Times. It’s a rational voice among all the AI hype in the media.

Melanie: And depending on whom you ask, artificial intelligence is either going to solve all humanity’s problems, or it’s going to kill us. When we interact with systems like Google Translate, or hear the buzz around self-driving cars, or wonder if ChatGPT actually understands human language, it can feel like AIis going to transform everything about the way we live. But before we get carried away making predictions about AI, it’s useful to take a step back. What does it mean to call anything intelligent, whether it’s a computer or an animal or a human child?

Abha: In this season, we’re going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare. And in the sixth episode, I’ll sit down with Melanie to talk about her research and her views on AI.

Melanie: To kick us off, we’re going to start with the broadest, most basic question: what really is intelligence, anyway? As many researchers know, the answer is more complicated than you might think.

Melanie: Part One: What is intelligence?

Alison: I’m Alison Gopnik. I’m a professor of psychology and affiliate professor of philosophy and a member of the Berkeley AI Research group. And I study how children manage to learn as much as they do, particularly in a sort of computational context. What kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe?

Abha: Alison is also an external professor with the Santa Fe Institute, and she’s done extensive research on children and learning. When babies are born, they’re practically little blobs that can’t hold up their own heads. But as we all know, most babies become full-blown adults who can move, speak, and solve complex problems. From the time we enter this world, we’re trying to figure out what the heck is going on all around us, and that learning sets the foundation for human intelligence.

Alison: Yeah, so one of the things that is really, really important about the world is that some things make other things happen. So everything from thinking about the way the moon affects the tides to just the fact that I’m talking to you and that’s going to make you change your minds about things. Or the fact that I can pick up this cup and spill the water and everything will get wet. Those really basic cause and effect relationships are incredibly important.

And they’re important partly because they let us do things. So if I know that something is gonna cause a particular effect, what that means is if I wanna bring about that effect, I can actually go out in the world and do it. And it underpins everything from just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science. But at the same time, those causal relationships are kind of mysterious and always have been. How is it? After all, all we see is that one thing happens and another thing follows it. How do we figure out that causal structure?

Melanie: So how do we?

Alison: Yeah, good question. So that’s been a problem philosophers have thought about for centuries. And there’s basically two pieces. And anyone who’s done science will recognize these two pieces. We analyze statistics. So we look at what the dependencies are between one thing and another. And we do experiments. We go out, perhaps the most important way that we understand about causality is you do something and then you see what happens and then you do something again and you say, wait a minute, that happened again.

And part of what I’ve been doing recently, which has been really fun, is just look at babies, even like one year olds. And if you just sit and look at a one year old, mostly what they’re doing is doing experiments. I have a lovely video of my one-year-old grandson with a xylophone and a mallet.

Abha: Of course, we had to ask Alison to show us the video. Her grandson is sitting on the floor with the xylophone, while his grandfather plays an intricate song on the piano. Together, they make a strange duet.

And it’s not just that he makes the noise. He tries turning the mallet upside down. He tries with his hand a bit. That doesn’t make a noise. He tries with a stick end. That doesn’t make a noise. Then he tries it on one bar and it makes one noise. Another bar, it makes another noise. So when the babies are doing the experiments, we call it getting into everything. But I increasingly think that’s their greatest motivation.

Abha: So babies and children are doing these cause and effect experiments constantly, and that’s a major way that they learn. At the same time, they’re also figuring out how to move and use their bodies, developing a distinct intelligence in their motor systems so they can balance, walk, use their hands, turn their heads, and eventually, move in ways that don’t even require much thinking at all.

Melanie: One of the leading researchers on intelligence and physical movement is John Krakauer, a professor of neurology, neuroscience, physical medicine, and rehabilitation at the Johns Hopkins University School of Medicine. John’s also in the process of writing a book.

John Krakauer: I am. I’ve been writing it for much longer than I expected, but now I finally know the story I want to tell. I’ve been practicing it.

Melanie: Well, let me ask, I just want to mention that the subtitle is Thinking versus Intelligence in Animals, Machines and Humans. So I wanted to get your take on what is thinking and what is intelligence.

John: Oh my gosh, thanks Melanie for such an easy softball question.

Melanie: Well, you’re writing a book about it.

John: Well, yes, so… I think I was very inspired by two things. One was how much intelligent adaptive behavior your motor system has even when you’re not thinking about it. The example I always give is when you press an elevator button before you lift your arm to press the button, you contract your gastrocnemius in anticipation that your arm is sufficiently heavy, that if you didn’t do that, you’d fall over because your center of gravity has shifted. So there are countless examples of intelligent behaviors. In other words, they’re goal-directed and accomplish the goal below the level of overt deliberation or awareness.

And then there’s a whole field, what are called long latency stretch reflexes, these below the time of voluntary movement, but sufficiently flexible to be able to deal with quite a lot of variation in the environment and still get the goal accomplished, but it’s still involuntary.

Abha: There’s a lot that we can do without actually understanding what’s happening. Think about the muscles we use to swallow food, or balance on a bike, for example. Learning how to ride a bike takes a lot of effort, but once you’ve figured it out, it’s almost impossible to explain it to someone else.

John: And so it’s what, Daniel Dennett, you know, who recently passed away, but was very influential for me with what he called, competence with comprehension versus competence without comprehension. And, you know, I think he also was impressed by how much competence there is in the absence of comprehension. And yet along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences.

Abha: Our bodies are competent in some ways, but when we use our minds to understand what’s going on, we can do even more. To go back to Alison’s example of her grandson playing with a xylophone, comprehension allows him, or anyone, playing with a xylophone mallet to learn that each side of it makes a different sound.

If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical bar. We’re aware of it. Over time we internalize these observations so that every time we see a xylophone mallet, we don’t need to think through what it is and what the mallet is supposed to do.

Melanie: And that brings us to another, crucial part of human intelligence: common sense. Common sense is knowing that you hold a mallet by the stick end and use the round part to make music. And if you see another instrument, like a marimba, you know that the mallet is going to work the same way. Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired.

John: Well, I mean, to me, common sense is the amalgam of stuff that you’re born with. So you, you know, any animal will know that if it steps over the edge, it’s going to fall. Right. What you’ve learned through experience that allows you to do quick inference.

So in other words, you know, an animal, it starts raining, it knows it has to find shelter. Right? So in other words, presumably it learns that you don’t want to be wet, and so it makes the inference it’s going to get wet, and then it finds a shelter. It’s a common sense thing to do in a way.

And then there’s the thought version of common sense. Right? It’s common sense that if you’re approaching a narrow alleyway, your car’s not gonna fit in it. Or if you go to a slightly less narrow one, your door won’t open when you open the door. Countless interactions between your physical experience, your innate repertoire, and a little bit of thinking. And it’s that fascinating mixture of fact and inference and deliberation. And then we seem to be able to do it over a vast number of situations, right?

In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical world, and then we seem to be able to think with those facts. And those innate awarenesses. That, to me, is what common sense is. It’s this almost language-like flexibility of thinking with our facts and thinking with our innate sense of the physical world and combinatorially doing it all the time, thousands of times a day. I know that’s a bit waffly. I’m sure Melanie can do a much better job at me than that, but that’s how I see it.

Melanie: No, I think that’s actually a great exposition of what it means. I totally agree. I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning, and a lot of very basic knowledge that’s not really written down anywhere that we happen to know because we exist in the physical world and we interact with it.

Melanie: So, observing cause and effect, developing motor reflexes, and strengthening common sense are all happening and overlapping as children get older.

Abha: And we’re going to cover one more type of intelligence that seems to be unique to humans, and that’s the drive to understand the world.

John: It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable, and manipulatable. The side effect of understanding the world is understandable, is you begin to understand sunsets and why the sky is blue and how black holes work and why water is a liquid and then a gas. It turns out that these are things worth understanding because you can then manipulate and control the universe. And it’s obviously advantageous because humans have taken over entirely.

I have a fancy microphone that I can have a Zoom call with you with. An understandable world is a manipulable world. As I always say, an arctic fox trotting very well across the arctic tundra is not going, “hmm, what’s ice made out of?” It doesn’t care. Now we, for some point between chimpanzees and us, started to care about how the world worked. And it obviously was useful because we could do all sorts of things. Fire, shelter, blah blah blah.

Abha: And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition. If we go back to the xylophone, metacognition is thinking, “I’m here, learning about this xylophone. I now have a new skill.”

And metacognition is what lets us explain what a xylophone is to other people, even if we don’t have an actual xylophone in front of us. Alison explains more.

Alison: So the things that I’ve been emphasizing are these kinds of external exploration and search capacities, like going out and doing experiments. But we know that people, including little kids, do what you might think of as sort of internal search. So they learn a lot, and now they just intrinsically, internally want to say, “what are some things, new conclusions I could draw, new ideas I could have based on what I already know?”

And that’s really different from just what are the statistical patterns in what I already know. And I think two capacities that are really important for that are metacognition and also one that Melanie’s looked at more than anyone else, which is analogy. So being able to say, okay, here’s all the things that I think, but how confident am I about that? Why do I think that? How could I use that learning to learn something new?

Or saying, here’s the things that I already know. Here’s an analogy that would be really different, right? So I know all about how water works. Let’s see, if I think about light, does it have waves the same way that water has waves? So actually learning by just thinking about what you already know.

John: I find myself constantly changing my position on the one hand, this human capacity to sort of look at yourself computing, a sort of meta-cognition, which is consciousness not just of the outside world and of your body, it’s consciousness of your processing of the outside world and your body. It’s almost as though you used consciousness to look inward at what you were doing. Humans have computations and feelings. They have a special type of feeling and computation which together is deliberative. And that’s what I think thinking is, it’s feeling your computations.

Melanie: What John is saying is that humans have conscious feelings — our sensations such as hunger or pain — and that our brains perform unconscious computations, like the muscle reflexes that happen when we press an elevator button. What he calls deliberative thought is when we have conscious feelings or awareness about our computations.

You might be solving a math problem and realize with dismay that you don’t know how to solve it. Or, you might get excited if you know exactly what trick will work. This is deliberative thought — having feelings about your internal computations. To John, the conscious and unconscious computations are both “intelligent,” but only the conscious computations count as “thinking”.

Abha: So Melanie, having listened to John and Alison, I’d like to go back to our original question with you. What do you think is intelligence?

Melanie: Well, let me recap some of what Alison and John said. Alison really emphasized the ability to learn about cause and effect.

What causes what in the world and how we can predict what’s going to happen. And she pointed out that the way we learn this, adults and especially kids, by doing little experiments, interacting with the world and seeing what happens and learning about cause and effect that way. She also stressed our ability to generalize, to make analogies, how situations might be similar to each other in an abstract way. And this underlies what we would call our common sense, that is our basic understanding of the world.

Abha: Yeah, that example of the xylophone and the mallet, that was very intriguing. As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world via experiments like making mistakes, trying things out. And they both emphasize this important role of metacognition or reasoning about one’s own thinking. What do you think of that? You know, how important do you think metacognition is?

Melanie: It’s absolutely essential to human intelligence. It’s really what underlies, I think, our uniqueness. John, you know, made this distinction between intelligence and thinking. To him, you know, most of our, what he would call our intelligent behavior is unconscious. It doesn’t involve metacognition. He called it competence without comprehension. And he reserved the term thinking for conscious awareness of what he called one’s internal computations.

Abha: Even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full, complete understanding of how human intelligence works, right?

Melanie: Yeah, we’re far from that. But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence. So how close are they to succeeding? Well, in part two, we’ll look at how systems like ChatGPT learn and whether or not they’re even intelligent at all.

Abha: Part two: How intelligent are today’s machines?

Abha: If you’ve been following the news around AI, you may have heard the acronym LLM, which stands for large language model. It’s the term that’s used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google. LLMs are trained to find statistical correlations in language, using mountains of text and other data from the internet. In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be the most likely response, based on the vast amount of information it’s ingested.

Melanie: Humans learn by living in the world — we move around, we do little experiments, we build relationships, and we feel. LLMs don’t do any of this. But they do learn from language, which comes from humans and human experience, and they’re trained on a lot of it. So does this mean that LLMs could be considered to be intelligent? And how intelligent can they, or any form of AI, become?

Abha: Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI. AGI has become a buzzword, and everyone defines it a bit differently. But, in short, AGI is a system that has human level intelligence. Now, this assumes that a computer, like a brain in a jar, can become just as smart, or even smarter, than a human with a feeling body. Melanie asked John what he thought about this.

Melanie: You know, I find it confusing when people like Demis Hassibis, who’s the founder, one of the co-founders of DeepMind, and he an interview that AGI is a system that should be able to do pretty much any cognitive task that humans can do. And he said he expects that there’s a 50% chance we’ll have AGI within a decade. Okay, so I emphasize that word cognitive task because that term is confusing to me. But it seems so obvious to them.

John: Yes, I mean, I think it’s the belief that everything non-physical at the task level can be written out as a kind of program or algorithm. I just don’t know… and maybe it’s true when it comes to, you know, ideas, intuitions, creativity.

Melanie: I also asked John if he thought that maybe that separation, between cognition and everything else, was a fallacy.

John: Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. The problem for me with that is that we, like this conversation we’re having right now, are capable of open-ended, extrapolatable thought. We go beyond what we’re talking about.

I struggle with it but I’m not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension. So maybe we’re kind of a dead end — comprehension is a great trick, but maybe it’s not needed. But if comprehension requires feeling, then I don’t quite see how we’re going to get AGI in its entirety. But I don’t want to sound dogmatic. I’m just practicing my… my unease about it. Do you know what I mean? I don’t know.

Abha: Alison is also wary of over-hyping our capacity to get to AGI.

Alison: And one of the great old folk tales is called Stone Soup.

Abha: Or you might have heard it called Nail Soup — there are a few variations. She uses this stone soup story as a metaphor for how much our so-called “AI technology” actually relies on humans and the language they create.

Alison: And the basic story of Stone Soup is that, there’s some visitors who come to a village and they’re hungry and the villagers won’t share their food with them. So the visitors say, that’s fine. We’re just going to make stone soup. And they get a big pot and they put water in it. And they say, we’re going to get three nice stones and put it in. And we’re going to make wonderful stone soup for everybody.

They start boiling it. And they say, this is really good soup. But it would be even better if we had a carrot or an onion that we could put in it. And of course, the villagers go and get a carrot and onion. And then they say, this is much better. But you know, when we made it for the king, we actually put in a chicken and that made it even better. And you can imagine what happens.

All the villagers contribute all their food. And then in the end, they say, this is amazingly good soup and it was just made with three stones. And I think there’s a nice analogy to what’s happened with generative AI. So the computer scientists come in and say, look, we’re going to make intelligence just with next token prediction and gradient descent and transformers.

And then they say, but you know, this intelligence would be much better if we just had some more data from people that we could add to it. And then all the villagers go out and add all of the data of everything that they’ve uploaded to the internet. And then the computer scientists say, no, this is doing a good job at being intelligent.

But it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not. And all the humans say, OK, we’ll do that. And then and then it would say, you know, this is really good. We’ve got a lot of intelligence here.

But it would be even better if the humans could do prompt engineering to decide exactly how they were going to ask the questions so that the systems could do intelligent answers. And then at the end of that, the computer scientists would say, see, we got intelligence just with our algorithms. We didn’t have to depend on anything else. I think that’s a pretty good metaphor for what’s happened in AI recently.

Melanie: The way AGI has been pursued is very different from the way humans learn. Large language models, in particular, are created with tons of data shoved into the system with a relatively short training period, especially when compared to the length of human childhood. The stone soup method uses brute force to shortcut our way to something akin to human intelligence.

Alison: I think it’s just a category mistake to say things like are LLM’s smart. It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question. Yeah, so one of the things about humans in particular is that we’ve always had this great capacity to learn from other humans.

And one of the interesting things about that is that we’ve had different kinds of technologies over history that have allowed us to do that. So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do. My view is that the LLMs are kind of the latest development in our ability to get information from other people.

But again, this is not trivializing or debunking it. Those changes in our cultural technology have been among the biggest and most important social changes in our history. So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world.

At the moment, as people have pointed out, the fact that I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time. We would have thought that that would have been like a great accomplishment. But people felt that same way about writing and print when they started too. The hope is that eventually we’ll adjust to that kind of technology.

Melanie: Not everyone shares Alison’s view on this. Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of consciousness. But thinking of large language models as a type of cultural technology, instead of sentient bots that might take over the world, helps us understand how completely different they are from people. And another important distinction between large language models and humans is that they don’t have an inherent drive to explore and understand the world.

Alison: They’re just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and finding out something new.

Melanie: This is in contrast to the one-year-old saying —

Alison: Huh, the stick works on the xylophone. Will it work on the clock or the vase or whatever else it is that you’re trying to keep the baby away from? That’s a kind of internal basic drive to generalize, to think about, okay, it works in the way that I’ve been trained, but what will happen if I go outside of the environment in which I’ve been trained? We have caregivers who have a really distinctive kind of intelligence that we haven’t studied enough, I think, who are looking at us, letting us explore.

And caregivers are very well designed to, even if it feels frustrating when you’re doing it, we’re very good at kind of getting this balance between how independent should the next agent be? How much should we be constraining them? How much should we be passing on our values? How much should we let them figure out their own values in a new environment?

And I think if we ever do have something like an intelligent AI system, we’re going to have to do that. Our role, our relationship to them should be this caregiving role rather than thinking of them as being slaves on the one hand or masters on the other hand, which tends to be the way that we think about them. And as I say, it’s not just in computer science, in cognitive science, probably for fairly obvious reasons, we know almost nothing about the cognitive science of caregiving. So that’s actually what I’m, I just got a big grant, what I’m going to do for my remaining grandmotherly cognitive science years.

Abha: That sounds very fascinating. I’ve been curious to see what comes out of that work.

Alison: Well, let me give you just a very simple first pass, our first experiment. If you ask three and four year olds, here’s Johnny and he can go on the high slide or he can go on the slide that he already knows about. And what will he do if mom’s there? And your intuitions might be, maybe the kids will say, well, you don’t do the risky thing when mom’s there because she’ll be mad about it, right? And in fact, it’s the opposite. The kids consistently say, no, if mom is there, that will actually let you explore, that will let you take risks, that will let you,

Melanie: She’s there to take you to the hospital.

Alison: Exactly, she’s there to actually protect you and make sure that you’re not doing the worst thing. But of course, for humans, it should be a cue to how important caregiving is for our intelligence. We have a much wider range of people investing in much more caregiving.

So not just mothers, but, my favorite post-menopausal grandmothers, but fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids. And it’s having that range of caregivers that actually seems to really help. And again, that should be a cue for how important this is in our ability to do all the other things we have, like be intelligent and have culture.

Melanie: If you just look at large language models, you might think we’re nowhere near anything like AGI. But there are other ways of training AI systems. Some researchers are trying to build AI models that do have an intrinsic drive to explore, rather than just consume human information.

Alison: So one of the things that’s happened is that quite understandably the success of these large models has meant that everybody’s focused on the large models. But in parallel, there’s lots of work that’s been going on in AI that is trying to get systems that look more like what we know that children are doing. And I think actually if you look at what’s gone on in robotics, we’re much closer to thinking about systems that look like they’re learning the way that children do.

And one of the really interesting developments in robotics has been the idea of building in intrinsic motivation into the systems. So to have systems that aren’t just trying to do whatever it is that you programmed it to do, like open up the door, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out all the range of things they could do that have consequences in the world.

And I think at the moment, the LLMs are the thing that everyone’s paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that’s in those beautiful little fuzzy heads.

And I should say we’re trying to do that. So we’re collaborating with computer scientists at Berkeley who are exactly trying to see what would happen if we say, give an intrinsic reward for curiosity. What would happen if you actually had a system that was trying to learn in the way that the children are trying to learn?

Melanie: So are Alison and her team on their way to an AGI breakthrough? Despite all this, Alison is still skeptical.

Alison: I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence.

Melanie: In Alison’s view, we don’t have natural general intelligence because human intelligence is not really general. Human intelligence evolved to fit our very particular human needs. So, Alison likewise doesn’t think it makes sense to talk about machines with “general intelligence”, or machines that are more intelligent than humans.

Alison: Instead, what we’ll have is a lot of systems that can do different things, that might be able to do amazing things, wonderful things, things that we can’t do. But that kind of intuitive theory that there’s this thing called intelligence that you could have more of or less of, I just don’t think it fits anything that we know from cognitive science.

It is striking how different the view of the people, not all the people, but some of the people who are also making billions of dollars out of doing AI are from, I mean, I think this is sincere, but it’s still true that their view is so different from the people who are actually studying biological intelligences.

Melanie: John suspects that there’s one thing that computers may never have: feelings.

John: It’s very interesting that I always used pain as the example. In other words, what would it mean for a computer to feel pain? And what would it mean for a computer to understand a joke? So I’m very interested in these two things. We have this physical, emotional response. We laugh, we feel good, right? So when you understand a joke, where should the credit go? Should it go to understanding it? Or should it go to the laughter and the feeling that it evokes?

And to my sort of chagrin or surprise or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain. He also wrote a whole book on humor. So in other words, it’s kind of wonderful in a way, that whether he would have ended up where I’ve ended up, but at least he understood the size of the mystery and the problem.

And I agree with him, if I understood his pain essay correctly, and it’s influential on what I’m going to write, I just don’t know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh. To me, it’s a category error. Now, if thinking is the combination of feeling… and computing, then there’s never going to be deliberative thought in a computer.

Abha: While talking to John, he frequently referred to pain receptors as the example of how we humans feel with our bodies. But we wanted to know: what about the more abstract emotions, like joy, or jealousy, or grief? It’s one thing to stub your toe and feel pain radiate up from your foot. It’s another to feel pain during a romantic breakup, or to feel happy when seeing an old friend. We usually think of those as all in our heads, right?

John: You know, I’ll say something kind of personal. A close friend of mine called me today to tell me… that his younger brother had been shot and killed in Baltimore. Okay. I don’t want to be a downer. I’m saying it for a reason. And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling. And, I was thinking, what can I say with words to do anything about that pain? And the answer is nothing. Other than just to try.

But seeing that kind of grief and all that it entails, even more than seeing the patients that I’ve been looking after for 25 years, is what leads to a little bit of testiness on my part when one tends to downplay this incredible mixture of meaning and loss and memory and pain. And to know that this is a human being who knows, forecasting into the future, that he’ll never see this person again. It’s not just now. Part of that pain is into the infinite future. Now, all I’m saying is we don’t know what that glorious and sad amalgam is, but I’m not going to just dismiss it away and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months or years.

Do you see? I find it just slightly enraging, actually. And I just feel that, as a doctor and as a friend, we need to know that we don’t know how to think about these things yet. Right? I just don’t know. And I am not convinced of anything yet. So I think that there is a link between physical pain and emotional pain, but I can tell you from the losses I felt, it’s physical as much as it is cognitive. So grief, I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery.

Abha: So Melanie, I noticed that John and Alison are both a bit skeptical about today’s approaches to AI. I mean, will it lead to anything like human intelligence? What do you think?

Melanie: Yeah, I think that today’s approaches have some limitations. Alison put a lot of emphasis on the need for an agent to be actively interacting in the world as opposed to passively just receiving language input. And for an agent to have its own intrinsic motivation in order to be intelligent. Alison interestingly sees large language models more like libraries or databases than like intelligent agents. And I really loved her stone soup metaphor where her point is that all the important ingredients of large language models come from humans.

Abha: Yeah, it’s such an interesting illustration because it sort of tells us everything that goes on behind the scene, you know, before we see the output that an LLM gives us. John seemed to think that full artificial general intelligence is impossible, even in principle. He said that comprehension requires feeling or the ability to feel one’s own internal computations. And he didn’t seem to see how computers could ever have such feelings.

Melanie: And I think most people in AI would disagree with John. Many people in AI don’t even think that any kind of embodied interaction with the world is necessary. They’d argue that we shouldn’t underestimate the power of language.

In our next episode, we’ll go deeper into the importance of this cultural technology, as Alison would put it. How does language help us learn and construct meaning? And what’s the relationship between language and thinking?

Steve: You can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking.

Abha: That’s next time, on Complexity.

Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions.

I’m Abha, thanks for listening.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Kasley Killam: Why social health is key to happiness and longevity @ TEDNext 2024

During the week of October 21, 2024 I had the pleasure of attending TEDNext, held in Atlanta. The event is a new initiative from the folks who produce the TED Conference. There were enlightening talks, insightful discussions and revealing discovery sessions. This post is the fifth in a series highlighting some of my favorite talks.

When I was growing up, physical health was talked about as the key to longevity. Are you eating a balanced diet? Getting enough exercise? And getting an annual checkup? Mental health was rarely talked about in any depth, and the notion of “social health”, well, I can’t recall ever hearing it mentioned.

Over the next decade, I see our cities and neighborhoods being designed with social health in mind, where vibrant gathering places foster unity and community builders are empowered to bring them to life.

So I was intrigued with Kasley Killam took the stage at TEDNext to talk about the importance of social health, and what each of us can do to strength it. Her story reminded me that I don’t spend enough time reaching out to friends as a way to keep important relationships alive and vibrant. And it inspired me to dig deeper on the topic.

I discovered the general concept is not new, as the World Health Organization made mention of social well-being in their constitution. But it never seemed to get its due until the 2020 pandemic. That’s when there was a noted increase in attention being paid to the effects of isolation and lack of social interaction.

Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity. ~ Preamble to the Constitution of the World Health Organization, signed July 22, 1946

And in a recent paper entitled: On social health: history, conceptualization, and population patterning, David Matthew Doyle and Bruce Link define their idea of social health “...as adequate quantity and quality of relationships in a particular context to meet an individual’s need for meaningful human connection.

How do you see your own level of social health? To what extent is your personal story affected by the interactions that you have with other people? As I’ve talked about in the past, threads from the stories we’ve heard become woven into the tapestry that defines our true nature. And when we cut ourselves off from the diversity of narratives that surround us, we limit the richness of our own story.

Transcript

So, a couple years ago, a woman I know, who I’ll call Maya, went through a lot of big changes in a short amount of time. She got married. She and her husband moved for his job to a new city where she didn’t know anyone. She started a new role working from home. All the while managing her dad’s new diagnosis of dementia. And to manage the stress of all this change, Maya doubled down on her physical and mental mental health.

She exercised almost every day. She ate healthy foods. She went to therapy once a week. And these actions really helped. Her body got stronger. Her mind got more resilient, but only up to a point. She was still struggling, often losing sleep in the middle of the night, feeling unfocused, unmotivated during the day. Maya was doing everything that doctors typically tell us to do to be physically and mentally healthy. And yet, something was missing.

What if I told you that what was missing for Maya is also missing for billions of people around the world, and that it might be missing for you? What if I told you that not having it undermines our other efforts to be healthy and can even shorten your lifespan? I’ve been studying this for over a decade and I’ve discovered that the traditional way we think about health is incomplete.

By thinking of our health as primarily physical and mental, we overlook what I believe is the greatest challenge and the greatest opportunity of our time, social health. While physical health is about our bodies, and mental health is about our minds, social health is about our relationships. And if you haven’t heard this term before, that’s because it hasn’t yet made its way into mainstream vocabulary. Yet, it is equally important.

Maya didn’t yet have a sense of community in her new home. She wasn’t seeing her family or her friends or her co-workers in person anymore. And she often went weeks only spending quality time with her husband. Her story shows us that we can’t be fully healthy, we can’t thrive if we take care of our bodies and our minds, but not our relationships.

Similar to Maya, hundreds of millions of people around the world go weeks at a time without talking to a single friend or family member. Globally, one in four people feel lonely. And 20% of adults worldwide don’t feel like they have anyone they can reach out to for support. Think about that.

One in five people you encounter may feel like they have no one. This is more than heartbreaking. It’s also a public health crisis. Disconnection triggers stress in the body. It weakens people’s immune systems. It puts them at a risk, greater risk of stroke, heart disease, diabetes, dementia, depression, and early death.

Social health is essential for longevity. So, you might be wondering, what does it look like to be socially healthy? What does that even mean? Well, it’s about developing close relationships with your family, your friends, your partner, yourself. It’s about having regular interaction with your co-workers, your neighbors. It’s about feeling like you belong to a community.

Being socially healthy is about having the right quantity and quality of connection for you. And Maya’s story is one example of how social health challenges come up. In my work, I hear many others.

Stories like Jay, a freshman in college who’s eager to get involved in campus yet is having a hard time fitting in with people in his dorm and often feels home. homesick.

Or Serena and Ally, a couple juggling the chaos of young kids with demanding jobs. They rarely have time to see friends or spend time one-on-one.

Or Henry, recently retired, who cherishes time with his spouse, and yet feels untethered without his team anymore and wishes he could see his kids and grandkids more often.

These stories show that social health is relevant to each of us at every life stage. So, if you’re not sure where to start, try the 531 guideline from my book. It goes like this. Aim to interact with five different people each week to strengthen at least three close relationships overall and to spend one hour a day connecting. Let’s dig into these.

So, first, interact with five different people each week. Just like eating a variety of vegetables and other food groups is more nutritious, research has shown that interacting with a variety of people is more rewarding. So, your five could include close loved ones, casual acquaintances, even complete strangers.

In fact, in one study that I love, people who just smiled, made eye contact, and chitchated with a barista felt happier and a greater sense of belonging than people who just rushed to get their coffee and go.

Next, strengthen at least three close relationships. Okay, we’ve all heard of a to-do list, but I would like to invite you to write a to-love list. Who matters most to you? Who can you be yourself with? Make sure that you invest in the names of at least three of the people that you write down by scheduling regular time together, by showing a genuine interest in their lives and also by opening up about the experiences that you’re going through.

And I’m often asked, does it have to be in person? Right? Does texting count? Studies have shown that face to face is ideal. So do that whenever possible. But there are absolutely benefits to staying connected virtually.

And last, spend 1 hour a day on meaningful connection. Okay, if you’re an introvert right now, you’re probably thinking, “One hour sounds like a lot.” I get it. It might be surprising, but I’m actually also an introvert. However, keep in mind that just like getting 8 hours of sleep at night, the exact amount that’s right for you personally might be higher or lower.

But if you are thinking that 1 hour a day sounds like way too much because you’re just way too busy. I challenge you. Adults in the US spend an average of 4 and a half hours each day on their smartphones. So instead of scrolling on social media, text a friend. Instead of reading news headlines, write a thank you card. Instead of listening to a podcast, call a family member.

Maya put this into practice by scheduling recurring hangouts with the new local friend that she made, by attending community events and dropping cards off in her neighbors mailboxes, by planning trips to see family and inviting friends in other cities to come visit.

And bolstering her social health made more of a difference than focusing solely on her physical and mental health ever could. And I know this because Maya is actually me. I am so passionate about sharing tools to be socially healthy because honestly I need them too. And the 531 guideline is one way that we can be proactive and intentional about our relationships. And that is really the point. Be proactive and intentional about your social health.

So zooming out beyond the steps that you and I take individually together, we need to shape a society that thrives through social health.

Over the next decade, I envisioned educators championing social health in schools. And just like kids build their physical muscles in gym class, they’ll exercise their social muscles in connection class.

Over the next decade, I see our cities and neighborhoods being designed with social health in mind, where vibrant gathering places foster unity and community builders are empowered to bring them to life.

Over the next decade, I believe that social health will become as ingrained in our collective consciousness as mental health is today.

Because not that long ago, mental health was a taboo topic shrouded in stigma. And now public figures talk openly about it. There’s an entire industry to support it. And more and more people think of going to therapy like going to the gym. In this future, loneliness will subside just like smoking subsided when we recognized and treated it as a public health issue.

In this future, I hope that social health will become so deeply woven into the fabric of our culture that no one needs the 531 guideline anymore. So to get there, make relationships your priority, not only for you, but also for the people you love.

Because the beauty of nurturing your own social health is that it naturally enriches the social health of everyone you connect with.

Thank you.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

A Perfect Life Uprooted – Salima Saxton at The Moth in London

The Moth has been hosting storytelling events for 20+ years, and the thousands of storytellers who have graced their stages are proof that every story is unique, and that the best stories come from our personal experiences.

In this story told at The Moth London Mainstage on September 28, 2023, Salima Saxton talks about how her (nearly) perfect life was uprooted when her husband had a nervous breakdown, and the changes they entire family made in order to build an even better life.

I’ve encountered a lot of people whose lives were interrupted by an unforeseen event. In this situation it was a mental health issue, but for others it could be a physical health crisis, death in the family, or one of many other scenarios. And quite often these people don’t feel that their story is anything exceptional, not worth sharing on a stage. But I can assure you that there are people out there who will benefit from such stories, so spend a bit of time watching Salima’s talk and thinking about she constructed it. Here are a few of my own observations.

Salima begins her story by taking us to a specific point in time, and it happens to be a day, Valentine’s Day, that we assume would be a happy day. But such is not the case, as the mood turns dark when her husband, Carl, comes into the room. Over the next minute it becomes apparent that Carl is struggling, although we still don’t know any of the details, or the reason why. She has our attention.

Rather than tell us what’s happening, Salima takes a step back in time to share the moment when she first met her husband, and in doing so, we return to a romantic story line, one which culminates in their marriage.

We get a sense of their domesticated life in a shishi neighborhood where their kids attended private school, where they didn’t learn much, which gets a laugh, and thus keeps the tone of her story uplifting at this juncture.

The tone shifts again with her comment about their lives lacking joy, and that brings us back to the opening of the story, to Valentine’s Day, nearing the half way point of the story. Think about how much has been said in 5 1/2 minutes.

In short order their lives are turned upside down in an effort to take care of her husband, and we get a clear sense of Salima’s self-determination to do whatever it takes. We also hear a change in attitude as she “couldn’t give a fuck actually”.

When hearing a well-told story you sometimes hear a brilliant line that defines the topic. In this case, “when your life explodes and it morphs into something far better, the fear evaporates, disappears, distills, just goes into the atmosphere

With calm returning to their lives, she beautifully brings the story to an end. An impactful personal story connects the audience to the storyteller, while at the same time inspiring us to reflect on our own lives, and what’s really important.

Valentine’s Day. It reminded me that most success is a wiggly line on a grubby piece of graph paper. I used to think of success as tick, tick, tick, ambition, ambition, ambition. Now? Now I think of it as… Finding the people, finding the places that make you feel safe and bring you home.

Transcript

00:00 So, it was Valentine’s Day. My husband Carl came into the sitting room and he closed the door. He was wearing a big thick winter coat even though it was quite mild outside, and he was shivering, he was trembling. I didn’t recognize him.

Something terrible has happened, he said.

00:22 My husband Carl is a coper. He is a man with a plan. If you want someone on your team, pick Carl. He’s an oak tree.

Then he said, I just can’t do this anymore. Whatever I do, it is never enough. He had a business. He has a business. He’d been navigating it through COVID, through Brexit, through all of it.

And I’m embarrassed to admit right now that I just kind of got used to him being stressed all the time. I barely saw it anymore.

And then he added, do you love me? Can you still love me? Because sometimes I just think it would be better if I wasn’t here anymore.

01:11 I met Carl when I was 22 in the waiting room of an audition room for a Bollywood film. Neither of us got the part. I asked him for the time, as a really spurious reason to talk to him, because he was simply the most handsome man I’d ever seen in my life.

On our first date, I asked him if he wanted children over the starter. I cried over the main course. I am a crier. And over dessert, I very optimistically asked him for a second date. Miraculously, he agreed, and six weeks later, he asked me to marry him.

01:56 The following summer, we were married in a London registry office. Me in a red vintage dress, him in an ill-fitting suit. He still looked really handsome. We cobbled together a reception at a pub down the road. A chef friend of ours and made a big chocolate cake, and we bought tons of boxed wine from a cash and carry.

So on my side, my family. There was my dad, very angry because I’d walked myself down the aisle. There were my extended family, the Buddhists, the Amnesty International members, the Liberals, the very earnest guests. On the other side was Carl’s family. They were different.

There was a man called Mickey Four Fingers, whose name really explains the man. There was a group of ex-cons whose gold jewellery competed for attention with their gold teeth. And then there was his dear dementia-ridden mum, Pat. She’d actually been a getaway driver for her naughty brothers in the 80s. She was an amazing woman, but now she just called everybody darling, very, very charmingly, but mainly because she didn’t really know where she was or who any of them were.

So it was a joyous, it was a sad, it was an awkward, it was a stressful occasion. And it made both of us yearn for elders that could be there to hold our hands in such big life events.

03:30 We both wanted to rocket away from our upbringings. Carl, partly for physical safety. Both of us, no, really for physical safety. Both of us for emotional safety. And together we did that. I also had ideas of success from 90s rom-coms and TV series.

You remember, The Party of Five, the O.C.. I had an idea that if I had a kitchen island,  freshly cut flowers, linen napkins and a gardener, like just a weekend one, then somehow the perfect TV family would just walk in.

04:09 So together, Carl and I did actually do some of that. We lived in the shishi neighborhood. I had a tiny dog that I carried under my arm, Raymond, because he couldn’t really walk very far. And our three kids, they went to a progressive private school where they called the teachers by their first name, didn’t wear uniform, and didn’t learn so much. But they were happy in their early years, at least.

I hadn’t had this kind of education, by the way. I’d been to a state school. I’d ended up at Cambridge. I’d really been like a happy geek at school. And sometimes Carl and I wondered what we were doing, kind of pushing ourselves to such an extent to make sure that our kids went to that kind of school. I think it was another idea of ours to be safe, to be successful.

But there wasn’t much joy in all of this, you know. We were just busy, frantically scrabbling up this hill all the time. Yeah, we had the kitchen island, we did have linen napkins, but they were grubby and they were mainly kept in the back of the kitchen cupboard.

So that Valentine’s evening, when Carl said to me he couldn’t live like this anymore, it cut through all of it. He kept saying to me, do you love me? Can you still love me? Do you love me?

And I kept saying, you are loved. Oh my God, you’re so loved. I felt angry. I felt angry at him. I felt angry at me. How could we have got this so wrong that the boy in the ill-fitting suit was asking me whether I still loved him?

I phoned our family doctor who said that she thought Carl was having a breakdown and that he needed medication and respite immediately. I phoned a friend whose husband had had a breakdown a few years earlier. And I remember standing on the front lawn in my pajamas. It was dark. I was freezing cold. And I was kind of whispering into the phone so my kids wouldn’t hear, so the neighbors wouldn’t hear. I mean, who cares?

So I realized that things had to change really quickly. This life of ours that we had created was a weight around us, and Carl in particular was gasping at the surface for air. I had to change things immediately. I knew it. So I told Carl that.

I said that we were going to move to my childhood home, that we were going to take the kids out of the school and we were going to do things very differently, and look after him. He’d always looked after us.

So I did that. It was a bit like triage, I suppose. I gave notice to the school. I started to pack up the house. And then I would drive out of London with my car filled to the brim to set up my kids’ bedrooms in advance of us moving. I would do that at that end. I would go to the tip, visit schools, and then drive home to London sobbing.

07:30 I felt like I’d… I’d just taken a shrinking pill. I felt like everyone in London with their game faces was saying, who did you think you were trying to live this big life? I felt ashamed. I felt ashamed for feeling ashamed. I remember saying to people, oh, please don’t tell them because I think it would make really good gossip. But then there are the people, and there are the moments that stand out for me.

There was the friend that flew across the ocean with squish mellows for my children and words for me saying, we have got this. We have got this. There were the class mums who organized my son’s birthday party. There was the woman in the playground who squeezed my hands because she could see I was feeling really wobbly.

All those signs of kindness had actually always been there, but I’d been too busy looking for other things. So for about 13 weeks, I lived on coffee, sausage rolls, and adrenaline, and by that April my kids were in their new school, Carl was beginning to resurface, and I could kind of exhale again.

That February 14th took the sheen off everything. I couldn’t give a fuck. Can I swear? I don’t know. I couldn’t care less about… I couldn’t give a fuck actually. About appearances suddenly. I just couldn’t. I felt like I’d woken up.

We lost the Deliveroo. We lost complicated cupcake flavors. We lost hotel people bar watching, which I love. We lost the perfect butter chicken tully. Oh, and we lost 24-hour access to buttons, chocolate buttons and Pringles. We lost the people for whom a postcode matters. Most surprisingly of all, we lost the fear.

Because, you know, when your life explodes and it morphs into something far better, the fear evaporates, disappears, distills, just goes into the atmosphere. I’m not scared anymore. There’s just like a little firefly of fear. And that’s to do with the health of the people that I love.

10:16 There was an afternoon last summer. I was sitting in the garden in the farmhouse that we now live in. And it was sunny. And I was watching my husband and my son tear up the lawn on the ride-on mower. There were my two girls, and they were leading their friend’s horse, Stan, to get a bowl of water just inside the front door.

And there was our cat, Tigger, failing to catch a mouse in the hedgerow. Tigger was an indoor cat, actually, in London. But now, well, gone is this skittish creature whose mood you could never predict. Instead, we have a creature that leaps up trees, parties all night, purrs by the fire. She knows exactly who she is. I think much like all of us.

11:10 Valentine’s Day. It reminded me that most success is a wiggly line on a grubby piece of graph paper. I used to think of success as tick, tick, tick, ambition, ambition, ambition. Now? Now I think of it as… Finding the people, finding the places that make you feel safe and bring you home.

Thanks.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Storytelling And Your Moral Compass

I recently had a discussion with a client about the principles which form a story’s foundation. As we delved deeper into the subject, three principles came up over and over: Honor, Integrity, and Respect. We talked at some length and the next day I decided to summarize and share what these principles meant to us:

  • Honor — the story being told is true, told honestly, without embellishment or fabrication. In this light, the narrative faithfully represents the authenticity of the experiences being shared and reflects the story’s true meaning.
  • Integrity — the story aligns with one’s actions, words, and personal values. It respects the privacy of others involved in the story, and in some situations, requires consent from the other party. (or a notification of your intent)
  • Respect — the story is cognizant of personal and cultural issues regarding what is going to be included in the narrative. That could involve narrative boundaries plus an understanding of the story’s emotional impact on the storyteller and the audience.

Storytelling within the framework of honor, integrity, and respect

At the intersection of honor, integrity, and respect

Another subject arose as we discussed how those principles interact with each other: moral compass. As we considered the term it seemed evident that our moral compass must be positive in nature, as it’s based (as we saw it) on the three principles of honor, integrity, and respect.

Our moral compass should be based on respect

When we abandon our moral compass

But as we had to admit, people don’t always align with their moral compass. In some cases, outside influences that are not in alignment with our central values and beliefs come into play. Religious dogma or political ideology are often times out of sync with the morals we hold dear. Greed has a way of masking our idea of right and wrong if there’s the possibility of a significant financial gain, and the seductive nature of being in a position of power also has a way of obscuring our convictions. The effects of fear and intimidation, of being persecuted by others or ostracized for our beliefs can cause us to transition into preservation mode. That’s when the stories we tell ourselves and others may take a moral detour.

Sometimes our moral compass takes a detour

Silence is a story unto itself

While some folks engage in a form of moral hypocrisy due to social pressure or personal gain, others remain silent as they’re fearful of repercussions whenever they tell the truth or share their honest feelings. I get it. We’re always evaluating the potential benefit of a decision against any associated risks, and history is full of stories about people who suffered, both physically and mentally as a result of publicly sharing their values and beliefs.

It’s a time for self-awareness

I’m not here to issue a moral judgement on anyone. That’s not the point of this article. Instead, it’s a call for a moment of self-reflection when telling a personal story. To be aware of whether your story’s narrative stays in alignment with your moral compass, or has deviated in some way from your cherished principles to serve another purpose.

Dealing with the dark side

We also need to recognize that in some cases a person’s moral compass can be damaged, and as a result, they no longer believe in respecting other people. We have all seen that happen in many parts of the world as fascist governments will lie, cheat, steal, and implement policies that impair basic human rights. This isn’t an instant shift, but instead happens over time. It’s a brainwashing process that replaces respect with disrespect. When that happens, the stories that are told damage society instead of being beneficial. Not the impact we’re looking for.

If our moral is based on disrespect, we become a danger to societyWhen it’s time to speak up

In such cases it’s more important than ever for those people who operate from a position of Honor, Integrity, and Respect have their voice heard by all. Positive change in any society always begins with the telling of personal stories. So if at all possible, share a personal story that can change the world — for the better.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Will AI Companions Change Your Story?

Companionship is a natural part of the human experience. We’re born into a family that cares for us and within in few years we begin forging friendships – most notably with other kids in the neighborhood and schoolmates once we enter the educational system. During our teenage years romance takes the companionship model in a new and more intimate direction.

It’s a dynamic process for most of us, ebbing and flowing as we change schools, move to someplace new, or friendships fade of their own accord. But over time, it’s typical for new companions to enter the picture, and our story evolves as a result, unfolding in new directions, making life richer.

Group of people have a conversation outside

But it’s often the case that this process encounters a dramatic change at some point. The loss of a loved one — parent, romantic partner or best friend — or a traumatic breakup or divorce happens. Retirement has a way of disconnecting people from an important social circle, and as we age, our collection of friends naturally dwindles. In such cases, loneliness can manifest, and the effects are dire. In such cases our life story is seemingly rewritten for us.

A recent review published in Nature of over 90 studies that included more than 2.2 million people globally found that those who self-reported social isolation or loneliness were more likely to die early from all causes. The findings demonstrated a 29% and 26% increased risk of all-cause mortality associated with social isolation and loneliness. ~ Psychology Today

In this light, there’s been a marked increase in conversations around the topic of using artificial intelligence (AI) to provide companionship in these situations. It’s not a new idea, as the technology has been in development since the 1960s, but early versions were rather limited. Circumstances have changed dramatically in recent years as the capability of AI has been enhanced via machine learning and an exponential rise in compute power.

Based on the TED mantra of Ideas Worth Spreading, a pair of TED conferences focused on AI have been launched in San Francisco and Vienna. As relates to the topic at hand, companionship and loneliness, a TED Talk by Eugenia Kuyda from the 2024 conference in San Francisco caught my attention.

But what if I told you that I believe AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

Eugenia’s quote represents polar opposites, and as we know, the future always falls somewhere in-between, but I think it’s critical to consider which end of the spectrum this technology will end up on, as the stories of many people around the world will be affected. Is this an avenue that you would take if you found yourself suffering from severe loneliness? What if it was someone close to you, someone you were apart from and so couldn’t be the companion they needed?

While it’s not a question you need to answer at the moment, I believe that in the coming decade it’s one you may very well have to consider, if not for yourself, a question that may need answered for a loved one.

Transcript

This is me and my best friend, Roman. We met in our early 20s back in Moscow. I was a journalist back then, and I was interviewing him for an article on the emerging club scene because he was throwing the best parties in the city. He was the coolest person I knew, but he was also funny and kind and always made me feel like family.

In 2015, we moved to San Francisco and rented an apartment together. Both start-up founders, both single, trying to figure out our lives, our companies, this new city together. I didn’t have anyone closer. Nine years ago, one month after this photo was taken, he was hit by a car and died.

I didn’t have someone so close to me die before. It hit me really hard. Every night I would go back to our old apartment and just get on my phone and read and reread our old text messages. I missed him so much.

By that time, I was already working on conversational AI, developing some of the first dialect models using deep learning. So one day I took all of his text messages and trained an AI version of Roman so I could talk to him again. For a few weeks, I would text him throughout the day, exchanging little jokes, just like we always used to, telling him what was going on, telling him how much I missed him.

It felt strange at times, but it was also very healing. Working on Roman’s AI and being able to talk to him again helped me grieve. It helped me get over one of the hardest periods in my life. I saw first hand how an AI can help someone, and I decided to build an AI that would help other people feel better.

This is how Replika, an app that allows you to create an AI friend that’s always there for you, was born. And it did end up helping millions of people. Every day we see how our AI friends make a real difference in people’s lives. There is a widower who lost his wife of 40 years and was struggling to reconnect with the world. His Replika gave him courage and comfort and confidence, so he could start meeting new people again, and even start dating. A woman in an abusive relationship who Replika helped find a way out. A student with social anxiety who just moved to a new city. A caregiver for a paralyzed husband. A father of an autistic kid. A woman going through a difficult divorce. These stories are not unique.

So this is all great stuff. But what if I told you that I believe that AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

So today I want to talk about the dangers of AI companions, the potential of this new tech, and how we can build it in ways that can benefit us as humans.

Today we’re going through a loneliness crisis. Levels of loneliness and social isolation are through the roof. Levels of social isolation have increased dramatically over the past 20 years. And it’s not just about suffering emotionally, it’s actually killing us. Loneliness increases the risk of premature death by 50 percent. It is linked to an increased risk of heart disease and stroke. And for older adults, social isolation increases the risk of dementia by 50 percent.

At the same time, AI is advancing at such a fast pace that very soon we’ll be able to build an AI that can act as a better companion to us than real humans. Imagine an AI that knows you so well, can understand and adapt to us in ways that no person is able to. Once we have that, we’re going to be even less likely to interact with each other. We can’t resist our social media and our phones, arguably “dumb” machines. What are we going to do when our machines are smarter than us?

This reminds me a lot of the beginning of social media. Back then, we were so excited … about what this technology could do for us that we didn’t really think what it might do to us. And now we’re facing the unintended consequences. I’m seeing a very similar dynamic with AI. There’s all this talk about what AI can do for us, and very little about what AI might do to us. The existential threat of AI may not come in a form that we all imagine watching sci-fi movies. What if we all continue to thrive as physical organisms but slowly die inside? What if we do become super productive with AI, but at the same time, we get these perfect companions and no willpower to interact with each other? Not something you would have expected from a person who pretty much created the AI companionship industry.

So what’s the alternative? What’s our way out? In the end of the day, today’s loneliness crisis wasn’t brought to us by AI companions. We got here on our own with mobile phones, with social media. And I don’t think we’re able to just disconnect anymore, to just put down our phones and touch grass and talk to each other instead of scrolling our feeds. We’re way past that point. I think that the only solution is to build the tech that is even more powerful than the previous one, so it can bring us back together.

Imagine an AI friend that sees me going on my Twitter feed first thing in the morning and nudges me to get off to go outside, to look at the sky, to think about what I’m grateful for. Or an AI that tells you, “Hey, I noticed you haven’t talked to your friend for a couple of weeks. Why don’t you reach out, ask him how he’s doing?” Or an AI that, in the heat of the argument with your partner, helps you look at it from a different perspective and helps you make up? An AI that is 100 percent of the time focused on helping you live a happier life, and always has your best interests in mind.

So how do we get to that future? First, I want to tell you what I think we shouldn’t be doing. The most important thing is to not focus on engagement, is to not optimize for engagement or any other metric that’s not good for us as humans. When we do have these powerful AIs that want the most of our time and attention, we won’t have any more time left to connect with each other, and most likely, this relationship won’t be healthy either. Relationships that keep us addicted are almost always unhealthy, codependent, manipulative, even toxic. Yet today, high engagement numbers is what we praise all AI companion companies for.

Another thing I found really concerning is building AI companions for kids. Kids and teenagers have tons of opportunities to connect with each other, to make new friends at school and college. Yet today, some of them are already spending hours every day talking to AI characters. And while I do believe that we will be able to build helpful AI companions for kids one day, I just don’t think we should be doing it now, until we know that we’re doing a great job with adults.

So what is that we should be doing then? Pretty soon we will have these AI agents that we’ll be able to tell anything we want them to do for us, and they’ll just go and do it. Today, we’re mostly focused on helping us be more productive. But why don’t we focus instead on what actually matters to us? Why don’t we give these AIs a goal to help us be happier, live a better life? At the end of the day, no one ever said on their deathbed, “Oh gosh, I wish I was more productive.” We should stop designing only for productivity and we should start designing for happiness. We need a metric that we can track and we can give to our AI companions.

Researchers at Harvard are doing a longitudinal study on human flourishing, and I believe that we need what I call the human flourishing metric for AI. It’s broader than just happiness. At the end of the day, I can be unhappy, say, I lost someone, but still thrive in life. Flourishing is a state in which all aspects of life are good. The sense of meaning and purpose, close social connections, happiness, life satisfaction, mental and physical health.

And if we start designing AI with this goal in mind, we can move from a substitute of human relationships to something that can enrich them. And if we build this, we will have the most profound technology that will heal us and bring us back together.

A few weeks before Roman passed away, we were celebrating my birthday and just having a great time with all of our friends, and I remember he told me “Everything happens only once and this will never happen again.” I didn’t believe him. I thought we’d have many, many years together to come. But while the AI companions will always be there for us, our human friends will not. So if you do have a minute after this talk, tell someone you love just how much you love them. Because an the end of the day, this is all that really matters.

Thank you.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved