
Shahram Izadi: The Next Computer? Your Glasses @ TED 2025

Now that the 2025 TED Conference has concluded, I’ll be posting a series of articles highlighting some of the best talks. The conference’s new tagline, “Ideas Change Everything”, inspired me to coin a similar phrase: “Personal Stories Change Everything”. So watch these talks through the lens of improving your own storytelling prowess.

In this talk from Shahram Izadi, titled The Next Computer? Your Glasses, we get a glimpse into a future technology that brings the power of AI into our daily lives by way of eyewear. Adopting the metaphor of seeing the world anew, this demo exceeded my expectations, and it had me wondering how our view of the world might change in the coming years.

Shahram Izadi is a British-Iranian computer scientist known for his contributions in extended reality, augmented reality, computer vision, human-computer interaction and artificial intelligence. ~ Wikipedia

Are you ready for this technology upgrade? And what if everyone were wearing these glasses? If the purpose is simply to provide information, it wouldn’t seem to be a problem, but what if these glasses came with face recognition capability? Maybe there will be security protocols in place to prevent that, but anyone with the technological prowess could (I imagine) bypass such limitations.

It’s interesting to ponder how all our experiences, and thus our personal stories, could be transformed by the wave of AI technology that is soon coming our way. Our smartphones are already becoming AI-aware. They can perform tasks for us, which is cool, but every interaction will be tracked by the applications we’re using. Are you okay with that? I’m on the fence at the moment.

If you enjoyed this article…Buy Me A Coffee

Transcript

Shahram Izadi: For our entire lives, we’ve been living through a computing revolution. Many of you here in this room have contributed to it, with leaps forward in compute, connectivity, mobile technologies, and now AI. For my part, I’ve dedicated my entire career to augmented reality, fusing the real world with computing experiences.

I started this journey about 25 years ago for my PhD. It might sound groundbreaking, but check out these early prototypes. The technology was really primitive back then, but the potential felt limitless when we were experimenting in that university lab.

What I didn’t know at that time was that many of the fundamental innovations for my work would come from AI researchers in a different lab. While I was helping computers see the world, these AI researchers were helping computers reason about the world.

Since then, large language models and multimodal reasoning have enabled richer language and image understanding. These models are now fast enough for live conversations where the AI can act on your behalf and, most importantly, with your permission.

And augmented reality and virtual reality have moved computing from the rectangular screen to the 360-degree immersive display, to now even the world itself becoming the display. We now refer to this broad collection of experiences as extended reality, or XR.

Until now, these innovations have occurred separately and in silos. Well, here comes act two of the computing revolution. AI and XR are converging, unlocking radical new ways to interact with technology on your terms.

Computers will become more lightweight and personal. They will share your vantage point, understand your real-world context and have a natural interface that’s both simple and conversational.

Countless people and companies are innovating in this space, including many on the TED stage this week. We’re excited to contribute to this momentum with Android XR. It’s an operating system we’re building with Samsung that brings XR hardware together with Gemini, our AI assistant, to augment human intelligence.

It can support a broad range of devices, from headsets to glasses to form factors we haven’t even dreamed of yet. OK, let me show you where we’re heading by inviting my colleague Nishtha to the stage. Welcome, Nishtha.

(Applause)

Nishtha Bhatia: Hi.

SI: These ordinary-looking glasses are packed full of technology. A miniaturized camera and microphones give the AI the ability to see and hear the world. Speakers let you listen to the AI and play music or even take calls. And these glasses also have a tiny, high-resolution, full-color in-lens display, like the one I’m holding in my hand. It’s very, very small.

These glasses work with your phone streaming back and forth, allowing the glasses to be very lightweight and access all your phone apps. And if you’re wondering, I’m actually wearing the glasses too. They’re actually helping me see all of you in the audience because they have prescription lenses inside them. But they’re also displaying my speaker notes for me as well.

(Laughter)

For this demo, you’ll see what Nishtha is seeing on the screen behind her. And this is the very first time we’re showing these glasses in public. So this is going to be a live demo of conceptual hardware, early software. What could go wrong? Nishtha, best of luck. No pressure.

NB: Amazing. Alright, let’s just make sure that these glasses are shown on the screen behind us. OK.

[Glasses screen off]

Awesome.

(Applause)

NB: Now I’m going to launch Gemini.

Gemini: Hello there.

NB: Hi, Gemini. Can you start us off with something fun and creative, and write a haiku for what I’m seeing right now?

G: Sure. Faces all aglow. Eager minds await the words. Sparks of thought ignite.

SI: Some anxious but happy faces as well, yeah. As you can see, the AI sees what Nishtha sees, hears what she hears and is responding in real time. But that’s just table stakes. I think everyone in this audience has seen demos like this before. Let’s actually step it up a notch with something we call Memory. Within a rolling contextual window, the AI remembers what you see without having to be told what to keep track of.

NB: So you guys may have noticed I snuck a peek back at the shelf a moment ago. I wasn’t paying attention, but let’s see if Gemini was. Hey, did you happen to catch the title of the white book that was on the shelf behind me?

G: The white book is “Atomic Habits” by James Clear.

NB: That is absolutely right. So let’s try something harder. I keep losing my hotel key card. Do you know where I last left the card?

G: The hotel key card is to the right of the music record.

NB: Great.

SI: For someone as forgetful as me, that’s a killer app. OK, this AI is inherently multimodal. So it’s using visual data and natural language to digest complex queries in real time while remembering your past context. Nishtha, let’s see some more.

NB: Yeah, let’s bring this book back. So I don’t really have time to read it right now. But I wonder if Gemini might be able to help me summarize what’s going on. Like, I don’t know. What does this diagram mean? It looks complex.

G: This diagram is labeled “The Habit Line” and illustrates the process of habit formation. It shows how automaticity increases with repetitions until a habit is formed.

NB: OK.

SI: Now imagine being able to communicate and understand any language. The AI can shift its output to speak different languages with dialects and accents that feel human and natural.

NB: Yeah, let’s go to this sign over here. Can you translate this to English for me?

G: The sign states: “Private property, no trespassing.”

SI: OK, let’s prove this is a live demo. Does anyone in the audience have a suggestion for a different language to translate to?

Audience: Farsi.

SI: Farsi. We tried Farsi, it failed last time, but let’s try it again.

NB: Do you want to try Farsi?

SI: Because I do speak Farsi. It’s my mother tongue, so thank you.

NB: Gemini, let’s just give this a shot. Can you translate this sign to Farsi for us?

G: (Speaking Farsi)

SI: Great, awesome. It speaks Farsi.

NB: That’s amazing. So, as Shahram mentioned, you all may have seen translation demos like this before, but what’s new now is that in addition to just saying things in a different language, I can also speak to Gemini in another language. I know Hindi, so let’s give this a shot.

(Speaks Hindi)

G: (Speaks Hindi)

NB: So Gemini said you all look focused and excited, and it has a better accent than I do.

(Laughter)

SI: Alright, now let’s see how the AI can connect the physical world with your digital content and take action.

NB: Yeah, let’s get some music going in here. OK, Gemini, why don’t you play us a track from this record?

G: Sure, here’s “Bad Dreams” by Teddy Swims.

(Music)

SI: Perfect. In a few seconds, the AI recognized the album, looked up the tracklist and took action using the phone apps. OK, Nishtha, it does look like the demo gods are with us. Maybe with you more than me. Let’s do one last demo I know you’re keen to give.

NB: Yes, this is my first time in Vancouver, and I love going on walks. So why don’t you navigate me to a park nearby with views of the ocean?

G: OK, I am starting navigation to Lighthouse Park, which has magnificent views of the Pacific Ocean. Is there anything else I can assist you with?

NB: Honestly, with these directions and a 3D map, I should be all set, and hopefully I won’t look like a tourist. Thank you all.

(Applause)

SI: Thank you, Nishtha, that was awesome. OK, we’ve seen glasses. Now let’s turn our attention to the other side of the spectrum: headsets. You’ve seen these types of devices before, but when we first brought AI to a headset, it completely caught me by surprise.

For this demo, we’re going to use the Project Moohan headset that Samsung is launching later this year. Compared to glasses, headsets give you an infinite display for getting work done or immersing yourself in a movie, or maybe a TED Talk at some point. Let me bring up my colleague Max to show us even more. Hey, Max.

Max Spear: Hello.

(Applause)

SI: And the same setup applies here. You’ll see exactly what Max is seeing on the screen behind him. Go for it, Max.

MS: Let’s do it. You’ll notice we start grounded in the real world. And I’m going to control the entire system with my eyes, hands and voice. But where things get really interesting is when we invite Gemini in as this conversational companion that can come with us anywhere.

G: Hello.

MS: Hey, Gemini, can you bring up my trip planner for me, please?

G: Of course. Opening up your trip planner.

MS: Cool, but I left these windows really disorganized. Can you help with that?

G: Of course I can help with that.

SI: No clicks, no keyboards. It’s just a conversation. And the AI is taking action. OK, some more audience participation. Someone shout out a name of a place you want to visit.

Audience: Melbourne.
Audience: Cape Town.

SI: Let’s go to Cape Town.

MS: OK, sounds fun. Can you please take me to Cape Town?

G: Certainly. Let me help with organizing the windows.

SI: (Laughs)

MS: Awesome. And can you also take me to Cape Town?

(Laughter)

G: I can certainly do that. Let me take you to Cape Town.

MS: And we’re very organized as we go there, perfect.

SI: As you can see, the AI is taking Max’s requests, figuring out how best to answer them, and opening up the Maps app. And from there, he can actually explore anywhere in the world in this 3D view.

MS: OK, this is pretty incredible. Viewing the world from this angle, I can even zoom in to city levels. But what’s really interesting is having an AI here who can see what I see. Can you tell me more about the significance of this place?

G: I can indeed provide information about Table Mountain. Table Mountain holds a profound significance, deeply rooted in both its natural splendor and its rich cultural history. It has long been a spiritual and cultural symbol for the Indigenous Khoekhoe and San people.

MS: OK, very good, Gemini. I’ll come back to this later. It also does look like a table, that’s handy.

SI: OK, Max, let’s bring this closer to home. How about exploring winter sports, seeing that we’re in Vancouver.

MS: OK, let’s get some inspiration for Shahram. Can you please show me 360 videos of this?

[Powder snowboarding]

G: Sure, here are some 360-degree powder snowboarding videos.

MS: OK, I’m kind of new to snowboarding, so hopefully this video will help me learn. Although this looks already way more intense than I think I’m ready for. What was the name of that trick?

G: That’s a backside 540 with a grab.

SI: The AI knows exactly what Max is looking at, even in rich video content, and can answer any question in real time. Max, are you sure this is Whistler?

MS: You know, Shahram, I actually have no idea. I’ve never been here before, but let’s find out. What run do you think this is?

G: It’s probably one of the ungroomed runs in Summit Bowl or the top section of West Ridge on Mount Bachelor, as they are close to the summit and face north towards Three Sisters and Broken Top, the prominent mountains visible in the background.

MS: OK, Mount Bachelor, Oregon, got it.

SI: OK, let’s have a little bit of fun now. Let’s have AI narrate this video. Somebody in the audience, shout out a fun narration style like, nature documentary or something like that.

Audience: Morgan Freeman.

SI: We can’t do actors, but we’ll get to that at some point. Any other suggestions?

Audience: Movie trailer.
Audience: Horror movie.

SI: Horror movie. That’s going to be — it might be gory, but let’s go for it.

MS: Slightly dark, but let’s do it. Can you please describe what you’re seeing as if you’re an overly enthusiastic character in a horror movie?

G: Oh, darling, this is simply marvelous. A desolate mountain scape. How delightfully ominous. Prepare for a chilling dance with the unknown. Where every gust of wind whispers tales of icy terror and lurking shadows.

MS: OK, Gemini, that’s great. Return to more positive things another day.

SI: OK, Max, I know there’s one last demo, and you’ve been apparently practicing your farming skills?

MS: Yes, Shahram, I’ve been excited to get back to my rural roots and also embrace one of my favorite things, which is playing games. But it becomes a whole new, next-level experience when you have an AI by your side, both to keep you company as you play and to help you learn how to play these kinds of games in the first place. So, for example, what do you think I should do next here?

G: It looks like this is early in the game in “Stardew Valley.” You need to earn money, so try planting some seeds. First, use your hoe to till the soil, and then plant your seeds. It looks like you have some parsnips. Since it isn’t raining, you’ll also need to water them with your watering can.

MS: Seems like a lot, and maybe robots will do this for us one day, but for now, I’m going to have to return back to my farm later. For one last thing, can you please give a playful two-sentence summary of all the ground we covered today?

G: We soared from organizing a digital realm to traversing snowy mountains, and even found time to farm some virtual parsnips. Quite the adventure today.

MS: Quite the adventure, indeed. Thanks, Gemini.

(Applause)

SI: Fantastic, thank you Max. We’re entering an exciting new phase of the computing revolution. Headsets and glasses are just the beginning. All this points to a single vision of the future, a world where helpful AI will converge with lightweight XR. XR devices will become increasingly more wearable, giving us instant access to information.

AI, meanwhile, is going to become more contextually aware, more conversational, more personalized, working with us on our terms and in our language. We’re no longer augmenting our reality, but rather augmenting our intelligence.

Thank you so much.


My Concluding Thoughts

In my experience, most of the personal stories I’ve encountered live in the analog world. Yes, the internet plays a role from time to time — social media and dating apps, etc. — but it’s more of a sideline.

AI, however, is a different sort of digital beast. One that is increasingly infiltrating every corner of our daily endeavors. And I’m wondering how that will change our perception. If this TED Talk is an indicator of future trends, the shift could be big.

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Carole Cadwalladr: This Is What a Digital Coup Looks Like @ TED 2025

Having spent my career in the computer industry, I’ve long been fascinated by digital technology. For most of those years, my experience was positive. Every invention, every new plateau of features and performance only increased my wonderment and appreciation. The internet’s potential was so mind-boggling.

And that positive viewpoint continued into the early days of social media, but began to sour as those platforms devolved into the hot mess they are today. (that topic is worthy of a very long article, but I’ll spare you for now)

When artificial intelligence came to my attention around 5 years ago, it seemed interesting, but to be honest, I didn’t see how it would benefit me. After all, my passion is helping people tell impactful personal stories — stories based on an experience or an idea — so I didn’t see AI as a meaningful tool. But 2 years ago its potential became more apparent. The way these systems could assimilate information made search engines look like Ford Model Ts.

But my enthusiasm became a bit tarnished as I explored the technology further. Not only were all those LLMs (Large Language Models) being trained on massive amounts of intellectual property that was never paid for, they were also sucking up increasing amounts of personal data. And as many other software programs and apps wove AI into their code, that trend continued to accelerate.

Is the greatest technological invention becoming the greatest threat to humanity?

That’s what I started to wonder, as it seemed to me from recent reports that any sense of morality in the AI space was being pushed aside in an effort to “win” the race for dominance. The icing on the cake came when the Silicon Valley oligarchy bowed down to the oligarchy residing in Washington D.C. 

So I was keenly interested in hearing Carole Cadwalladr’s take on this situation when she spoke at the 2025 TED Conference. Carole was an important figure in exposing the Facebook-Cambridge Analytica scandal, so she was no stranger to the potential for tech to subvert democracy and cause great harm to society.

The Russian and American presidents are now speaking the same words. They are telling the same lies. We are watching the collapse of the international order in real time, and this is just the start. Coups are like concrete. When they stop moving, they set. It is already later than we think.

Her talk was a chilling reminder of other times in history when the power elite ignored the rights and needs of the many to satisfy their own maniacal need for wealth and power. When there are no checks and balances — which is the case in the United States, as the current administration will make no effort to protect citizens’ rights — things always go from bad to worse.

Follow the data. It’s always the data. It’s the crack cocaine of Silicon Valley. You know, the first thing that Elon Musk did was to send his cyber troops into the US Treasury to get access to the data. That is not a coincidence, it’s a hack.

Time will tell how this story will play out, but my big concern is that the personal stories of millions (if not billions) will be adversely affected by some of the ways in which AI is being utilized. With honor and integrity in short supply, this is one topic worth paying close attention to. The next 2-3 years will be most critical.

I’ve worked with a lot of speakers who have important stories to tell on a range of subjects, from the science of climate change to efforts to bring peace to the world or ways to improve the education system. As you listen to her story, think about how your expertise could enlighten an audience and inspire them to think differently about an important topic.


Transcript

I’ve been feeling a lot of panic and fear about this talk, and not just for the normal reasons of public speaking, although that’s there too. But it’s also because I want to say something meaningful, and I’ve been overwhelmed by the enormity of what is happening right now.

And there’s a particular set of circumstances which have also been feeding into my confusion and denial. And that is because the last time that I stood on this stage, it led to a three-year legal battle that culminated in London’s High Court, in which it felt like I was on trial for my life, because I was. My career, my reputation, my finances, even my home was on the line.

All because I came here to warn you that I didn’t think democracy was going to survive the technology that you’re building, however incredible it is. In fact, I was the person who almost didn’t survive. And pretty much everything that I was warning about is now coming true.

I can’t sugarcoat it. It’s a bit of a headfuck.

I have a lot of emotions about coming here, and TED also, I suspect, is feeling them too. But what actually I finally realized yesterday is that the denial and the confusion that I’ve been feeling is maybe what you’re feeling too.

I felt powerless for a really long time. So if that’s what you’re feeling, I get it. But we have to act now. My alarm system is ringing again.

There are things that we can do. In my case, I survived, and you will too. But it’s by learning how to fight back. This is my guide, and it has to start with naming it.

It’s a coup. I know you probably don’t want to hear that, and especially here, but we can’t fight it if we can’t see it, and we can’t see it if we don’t name it.
(Applause)

The Russian and American presidents are now speaking the same words. They are telling the same lies. We are watching the collapse of the international order in real time, and this is just the start. Coups are like concrete. When they stop moving, they set. It is already later than we think.

This image – some of you in this room might know these people. I call it “Tech Bros in Hostage Situations.” It’s a message to you. This is Putin’s playbook. He allows a business elite to make untold riches in exchange for absolute loyalty. Some people are calling this oligarchy, but it’s actually bigger than that. These are global platforms.

It’s broligarchy.

(Laughter and Applause)

There is an alignment of interests that runs through Silicon Valley to what is now a coming autocracy. It’s a type of power that the world has never seen before.

Follow the data. It’s always the data. It’s the crack cocaine of Silicon Valley. You know, the first thing that Elon Musk did was to send his cyber troops into the US Treasury to get access to the data. That is not a coincidence, it’s a hack. That data is now feeding AIs that are choosing who to sack and who to replace – sorry, eliminate fraud and waste.

(Laughter)

When we broke the Cambridge Analytica story about the harvesting of 87 million people’s Facebook data, people freaked out, rightly. This is chicken feed compared to that, but it is the blueprint. It’s always the data.

Protect your private life. Which is why it’s so important you start thinking about your private life. The broligarchy doesn’t want you to have one. This is the old headquarters of the East German secret police. They kept detailed files on almost one in three of their citizens. That is nothing compared to what Google has on every single one of us, and hundreds of other companies. The entire business model of Silicon Valley is surveillance. It harvests our data in order to sell us stuff. We are already living inside the architecture of totalitarianism.

(Applause)

It may not have been deliberate, but we now have to start acting as if we live in East Germany, and Instagram is the Stasi.

Politics is downstream from culture. So I actually learned this from somebody who I think of as one of the great philosophers of our age: Steve Bannon.

(Laughter)

He actually stole it from somebody else. But it’s not politicians who have the power. He knows that. It’s why he’s a podcast bro these days. But culture now is just what’s next on your phone. And that’s AI. Culture is AI now. And forget the killer robots. If you want to know what the first great AI apocalypse is, we’re already living it. It’s total information collapse.

And if you take one thing only away from this talk, it’s:

Politics is technology now. And that’s why everybody in this room, you can’t look away. It’s why your CEOs have been taken captive and are paraded on TV like hostages. But you, you have a choice.

Individuals are stronger than institutions. So Trump, he calls the press the enemies of the people, and he probably doesn’t even know that he’s quoting Stalin. So, what happened to me is a playbook, and it’s now coming for all sorts of other people.

It was actually a friend of this guy, Nigel Farage, a Brexit funder, who came after me. I’m not going to go super into the details. But 19 press freedom organizations called the lawsuit against me a SLAPP. That means it’s a strategic litigation against public participation. A really long-winded way of saying it’s using law as a weapon to shut people up, not just journalists, but other public people too, and it works.

I just want to tell you about one aspect of the litigation which I found terrifying, and that was the data harvesting. There’s this quote, you may know it, Cardinal Richelieu: “If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.” In my case, the first forensic searches of my phone and laptop yielded 40,000 pieces of data.

This is my messages, my emails, my voice memos, my personal life. And the attack that came for me was really personal, because this litigation is only one part of the playbook. It was also this sort of massive online campaign of abuse, just day after day after day, because my most unforgivable crime was reporting while female. It was a digital witch burning.

And I believe that this man came after me personally, not the Guardian and not TED, because I looked like the weakest link. But he was wrong.

(Applause)

30,000 people rose up to support me. They contributed almost a million pounds to a legal defense fund because they saw a bully trying to crush me, and they would not let it stand. And it always makes me emotional when I think about that. I just heard that the camera person, I don’t know where they are, contributed.

This whole talk is actually my gratitude towards everybody who did that. But it’s also why I know about what we have to do next. You know, Trump is suing news organizations and every day they’re settling. These are big corporates with corporate interests. Not everybody can stand up to power, but there are people who are doing it, and we can support them. We have to have each other’s backs right now because we are the cavalry now.

There are facts and we can know them. You know, this is really important to me, but I spoke to a UK libel lawyer before this talk. I want to say that there are an awful lot of facts set down in a High Court judgment. And we’re actually taking the case now to the European Court of Human Rights. We’re testing the UK on its laws around freedom of expression.

So look after facts; you’ll miss them when they’ve gone. This is the Wayback Machine. Give them money. They’re trying to preserve the internet as it’s being deleted day by day.

(Applause)

History is our best chance of getting out of this. You know, you probably know this phrase, “Do not obey in advance.” That’s Tim Snyder, who’s a historian of authoritarianism. We now are in techno-authoritarianism. We have to learn how to digitally disobey. That can be as simple as the drop-down box: Don’t accept the cookies, don’t give your real name, download Signal, the encrypted messaging app. Don’t bomb Yemen. Don’t add the editor of The Atlantic to your group chats.

(Laughter)

Don’t experiment on children. Ah, don’t experiment on children. You know, social mores change. We don’t send children down coal mines anymore. And in years to come, allowing your child to be data harvested from birth will be considered child abuse. You didn’t know, but now you do. Privacy is power.

And we have more of it than we think. I had this little epiphany yesterday in which I realized actually the moments when I felt most powerless were the moments that I felt I was actually most powerful. It was because my journalism had impact.

We have more power than we think. They want us to feel powerless, that’s the plan. There is so much, though, that we can learn from people who’ve been through this before. Alexei Navalny, the leader of the Russian opposition, always talked about a beautiful Russia of the future. He painted a vision. There is a beautiful internet of the future, free from corporate capture and data tracking. We can build it. It is going to take a movement, but we can learn from the movements that have come before us.

This is my colleagues and I on strike in December because my news organization, The Guardian, decided to sell our corner of it, The Observer, the Sunday title. And it was a battle we really didn’t need at this time, and we didn’t actually win. But, you know, you can’t win every battle, but you definitely won’t win if you don’t fight.

So I want to leave you with this. This is ChatGPT writing a TED Talk in the style of Carole Cadwalladr. And it is creepily plausible. But what it doesn’t know, because AI is actually as dumb as a rock, is that I am going to turn to Sam Altman, who is coming here, a TED speaker, and say that this does not belong to you. ChatGPT has been trained on my IP, my labor, my personal data.

(Applause)

And I did not consent. You know, The Guardian has effectively got rid of more than 100 journalists. We actually leave the building next week. And shortly afterwards, it signed a syndication deal with OpenAI. Or as I think of it, it married its rapist. But I do not consent. And while we still have copyright laws in my country – the UK government is trying to tear them up at the moment in order to suck up to Silicon Valley and Trump – use them. Because what is happening to my industry is happening to yours too. And it’s more than theft, it’s a violation. Data rights are human rights.

(Applause)

In 2019, I came here and I called out the gods of Silicon Valley. I was wrong. Sam Altman, Mark Zuckerberg, Elon Musk, you are not gods. You are men, and you are careless.

(Standing Ovation)

You think that by allying yourself with an autocrat, you will be protected. That’s not how history works. It’s not even how oligarchy works. This is Mikhail Khodorkovsky. He was an oligarch until he was sent to Siberia to prison for 10 years after Putin tired of him. You are sucking up to a tyrant who is trying to destroy the laws that made your businesses possible. You are collaborators. You are complicit in a regime of fear and cruelty.

But the rest of us, all of us here, we have a choice. I chose to come back to TED because I’m reclaiming my story, my words.

(Applause)

We are not powerless. The 30,000 people who supported me proved that. We are not powerless because we know who we are, and we know what we stand for. And my question to Silicon Valley is: Do you?

Thank you.

(Standing Ovation)


Understanding Narrative Transportation Using Manus AI

In my last post I talked about Embracing Narrative Transportation as a way to enhance the impact of your personal stories. And while I enjoy writing articles about storytelling, I’ve also been exploring AI (Artificial Intelligence) as a way to broaden my research capabilities.

I have to say, it’s amazing to watch these chatbots in action. They are fast and comprehensive in their reach. They don’t replace humans, since they lack the lived experience we have, but they can locate and analyze information faster than any person.

So I decided to see what some of the latest chatbots had to say about the topic of Narrative Transportation with regards to crafting impactful personal stories. In part one, I asked Manus AI what it thought. Below is the prompt that I used, its initial response, and the output it generated. Warning: it’s a long article, but should you have an interest in AI, it’s an interesting read.

Prompt:

I’m trying to understand the topic of Narrative Transportation in depth, as it relates to storytelling in general, but more specifically, how we can use Narrative Transportation to create impactful personal narratives.

Response:

I’ll help you understand Narrative Transportation in depth, both in general storytelling and specifically for creating impactful personal narratives. I’ll research this topic thoroughly and provide you with comprehensive information. Let me get started right away.

Output:

Here’s what Manus AI had to say:

Understanding Narrative Transportation: Creating Impactful Personal Narratives

Introduction

Narrative transportation is a powerful psychological phenomenon that occurs when individuals become completely immersed in a story. This comprehensive guide explores the concept of narrative transportation, its application in storytelling, and specifically how it can be leveraged to create impactful personal narratives. By understanding the mechanisms of narrative transportation and implementing specific techniques, you can craft personal narratives that deeply engage audiences, foster connections, and potentially transform beliefs and attitudes.

Part I: Understanding Narrative Transportation

The Concept of Narrative Transportation

Narrative transportation refers to the psychological experience of being completely immersed in a story. When a person is transported into a narrative, they become absorbed cognitively, emotionally, and in the imagery of the story, regardless of the medium (print, film, podcasts, video games, etc.).

The concept was first formally defined by Richard Gerrig in his 1993 book “Experiencing Narrative Worlds,” where he used travel as a metaphor for reading. He described it as a state in which the reader—referred to as the traveler—becomes detached from their original reality due to deep engagement with the story.

Melanie Green and Timothy Brock later expanded on this idea, describing narrative transportation as “a distinct mental process, an integrative melding of attention, imagery, and feelings.” They conceptualized it as the experience of being carried away by a narrative.

Key Features of Narrative Transportation

  1. Focused attention – Receivers become so focused on the story they do not think about distractions in their environment
  2. Emotional Engagement – There is a strong emotional response to the story with empathy for the characters
  3. Mental Imagery – Receivers have a mental image of the story in their heads or try to predict what might happen in the future of the story
  4. Cognitive Detachment from Reality – Receivers feel like they are really in the story and not in their real-world environment
  5. Belief and Attitude Shift – Narrative transportation can influence receivers’ beliefs, attitudes, and behaviors, aligning them with the story’s content
  6. Reduced Counterarguing – When transported, the receiver is less likely to challenge the story’s content
  7. Long-term Efficacy – The receiver is impacted even after the story ends, affecting long-term behavior and attitude change

Psychological Mechanisms of Narrative Transportation

Reduced Counterarguing

Transporting stories create a more open, accepting mindset for individuals. When readers are transported in an entertaining narrative, they may not want to interrupt their enjoyment to argue with the story. This reduced counterarguing can occur even when the persuasive intent is obvious.

Connections with Characters

Audience members may:

  • See themselves in story characters
  • Come to see characters as friends
  • Admire the characters

Through these connections, audiences may change their attitudes and beliefs to align with those of narrative characters. Identification with and liking of story characters has been shown to increase the adoption of beliefs advocated by the character.

Mental Imagery

The transportation-imagery model highlights the role of visual imagery in transportation-based belief change. The experience of being transported into a story links vivid images with beliefs implied by the story. This connection between images and beliefs may be one basis for the power of narrative persuasion.

Emotional Engagement

Stories are particularly powerful when they evoke strong emotions. The pattern of emotional experience that occurs as readers progress through a story (emotional flow) can help maintain interest, particularly for longer narratives.

Reminding Readers of Personal Experiences

Stories can prompt self-examination and change by reminding readers of experiences in their own lives that relate to those in the narrative. Individuals who are more immersed in a story show greater generalization of the beliefs implied by the story, but additionally, whether the story brings to mind events from the readers’ real lives seems to be important in determining narrative impact.

Factors Affecting Narrative Transportation

Audience Factors:

  • World view (e.g., pro-environmental)
  • Prior knowledge and experience
  • Attitudes and beliefs
  • Empathy
  • Political ideology
  • Religiosity
  • Transportability (individual tendency to become immersed)
  • Need for affect (enjoyment of emotional experiences)
  • Need for cognition (enjoyment of thinking)

Story Factors:

  • Fiction vs nonfiction
  • First-person vs third-person
  • Well-developed characters
  • Intent of character
  • Personally relevant stories
  • Stories that stay engaging
  • Coherence and structure
  • Emotional intensity
  • Suspense and dramatic tension
  • Psychological realism
  • Cultural appropriateness

Part II: Narrative Transportation in Personal Narratives

Unique Aspects of Personal Narratives

Personal narratives differ from fictional narratives in several key ways when it comes to narrative transportation:

Authenticity and Vulnerability

  1. Authenticity as a Bridge: In personal narratives, authenticity serves as a bridge between the storyteller’s world and the audience’s. When authors share their genuine experiences, emotions, and vulnerabilities, they create a unique connection that can be more immediate than in fictional narratives.
  2. Vulnerability Creates Intimacy: Personal narratives that expose vulnerability create a level of intimacy that fictional narratives may struggle to achieve. Research by Hagmann et al. (2024) found that “self-revealing personal narratives create vulnerability by exposing the speaker to others’ negative judgment.” This vulnerability signals authenticity and honesty to listeners, increasing trust and connection.
  3. Real-life Consequences: Unlike fictional narratives, personal narratives carry real-life consequences for the storyteller, which can heighten the audience’s emotional investment. The willingness to incur potential interpersonal costs through self-disclosure signals that the speaker values honesty.

Identification and Social Identity

  1. Shared Experience Identification: In personal narratives, identification often occurs through shared experiences rather than character traits. Research on patient identification in cancer narratives found that shared experiences can create powerful connections regardless of demographic differences.
  2. Social Identity Influence: Personal narratives can activate social identities that influence transportation. According to social identity theory, perceived risk to a patient’s identity increases identification with salient social groups who possess a similar identity.
  3. Transportability Across Divides: Personal narratives have shown the ability to build trust across ideological divides. Research has found that people judge ideological opponents as more trustworthy when opposing opinions are expressed through a self-revealing personal narrative than through either data or stories about third parties.

Mechanisms of Narrative Transportation in Personal Narratives

Personalization and Intimacy

  1. Personalization Effect: Research suggests that personal narratives create a sense of “personalness” that drives effectiveness. Personalness is defined as “the level of intimacy delivered when narrating an experience, or the capability of eliciting a sense of intimacy with the ‘other.'”
  2. Mental Representation: This sense of intimacy motivates engagement with the content and enables the mental representation of the narrative (narrative transportation), consequently increasing the likelihood of accepting the recommendation or message.

Self-Related Control Beliefs

  1. Empowerment Through Transportation: Research has found that “narratives in both written text and video form with protagonists displaying high versus low self-efficacy can, at least temporarily, affect recipients’ own self-related control beliefs when they experience strong transportation into the stories.”
  2. Mediation Through Transportation and Identification: The effect of narrative manipulation on self-related control beliefs was mediated by experienced transportation and identification, suggesting that personal narratives that facilitate transportation can influence how readers view their own capabilities.

Reduced Counterarguing in Personal Contexts

  1. Disarming Effect: Personal narratives may reduce counterarguing more effectively than fictional narratives because questioning someone’s lived experience can seem inappropriate or insensitive.
  2. Trust Building: Research has found that “trust does not suffer when explanations grounded in self-revealing personal narratives are augmented with data, suggesting that our results are not driven by quantitative aversion.”

Impact of Personal Narratives

Building Trust and Changing Beliefs

  1. Trust Across Differences: Personal narratives have shown the ability to build trust across ideological divides. People are more willing to collaborate with ideological opponents who support their views by embedding data in a self-revealing personal narrative, rather than relying on data-only explanations.
  2. Changing Health Behaviors: Research has found that “identification with the main character in the vignettes was a significant predictor of intentions to participate in cancer research, but only when the mediating role of narrative transportation was considered.”
  3. Self-Concept Changes: Personal narratives can influence how readers view themselves. Research has demonstrated that stories featuring strong protagonists who display high self-efficacy can increase recipients’ own self-related control beliefs when they are highly transported into the stories.

Profound Connections with Audiences

  1. Solace and Belonging: Stories that expose the raw, unfiltered emotions of their creators have the power to resonate with readers in a way that is impossible to replicate. In their struggles and triumphs, we recognize a reflection of ourselves, and it’s in those moments that we find solace, encouragement, and a sense of belonging.
  2. Shared Human Experience: Personal narratives remind us that we are not alone in our experiences. They create a sense of shared humanity that can be particularly powerful for individuals facing challenges or difficult situations.

Part III: Techniques for Creating Impactful Personal Narratives

Structural Techniques

1. Clear Narrative Structure

A well-structured personal narrative guides readers through your emotional journey with ease. The basic structure includes:

  • Beginning: Establish your core idea and hook your readers
  • Middle: Develop your central idea using descriptive language to define your story, setting, characters, and plot
  • End: Summarize lessons learned, relate your core theme back to readers’ lives, and include a call to action

The ABCDE storytelling framework (from Anne Lamott’s book “Bird by Bird”) provides an excellent structure:

  • Action: Drop the reader right into the action. No need to “set the scene.”
  • Background: Once you have the reader’s attention, tell them why it matters.
  • Development: Build the story up. Where is the story going? What’s at risk?
  • Climax: This is the tipping point. The “aha!” The “oh no!” The “SURPRISE!”
  • Ending: Explain what it was all for. What was the lesson?

2. Start with a Strong Hook

Begin your personal narrative with something that immediately captures attention. This could be:
  • A surprising statement
  • A vivid scene
  • A provocative question
  • A moment of tension
The hook should relate to the core message of your narrative and give readers a reason to keep reading.

3. Focus on Transformation

Personal narratives are most impactful when they demonstrate transformation. Structure your narrative around:
  • Who you were before
  • The challenge or catalyst for change
  • The process of transformation
  • Who you became as a result
  • What this means for the reader

Emotional Connection Techniques

1. Vulnerability and Authenticity

The most powerful element in a personal narrative is the emotional connection you establish with readers. To create this:

  • Be honest and open about how experiences have impacted you
  • Include all relevant details, even embarrassing or painful ones
  • Share your feelings as well as your pain
  • Never be afraid to show vulnerability—this gives readers permission to accept their own
  • Write from a foundation of authenticity rather than trying to persuade or sell ideas

2. Descriptive Language and Sensory Details

Use descriptive language to paint a vivid picture in your reader’s mind:

  • Help readers visually experience colors and shapes
  • Create sensations of temperature, texture, and physical feelings
  • Incorporate scents, tastes, and sounds
  • Use all five senses to connect with readers’ emotions
  • Don’t limit your narrative to a sequence of causes and effects

3. Emotional Flow

The pattern of emotional experience throughout your narrative helps maintain interest:

  • Create and then resolve suspense about outcomes
  • Include changes in emotional tone across the course of the story
  • Balance difficult emotions with moments of insight or relief
  • Build emotional intensity toward key realizations
  • End with emotional resolution that feels satisfying

Connection with Readers

1. Make the Reader the Main Character

Your story isn’t really about you—the reader is the main character. To achieve this:

  • Have clarity around why your personal story is interesting to the READER
  • Connect your perspective with universal truths
  • Ask yourself: What’s the problem you want the reader to solve?
  • Consider: What’s the realization you want the reader to have?
  • Focus on: What’s the obstacle you want to help the reader overcome?

2. Identify the Takeaway

The takeaway explains why your story is important and creates a deeper emotional tie with the reader:

  • Connect your experience to something bigger
  • Identify the universal truth at play
  • Clarify what your experience says about you, life, or the world
  • Determine if your story is about frustration, injustice, loyalty, hope, etc.
  • Make the connection between your story and the message explicit

3. Relevance to Reader’s Life

Make your personal narrative relevant to readers’ lives:

  • Give readers a “heads up” so they can avoid pain you experienced
  • Provide relief if they’re currently going through a similar experience
  • Show how your lessons apply to their situations
  • Create points of similarity between your experience and theirs
  • Focus on shared human experiences that transcend specific circumstances

Finding Story-Worthy Material

1. Daily Reflection Practice

Use the “Homework for Life” framework (from Matthew Dicks’ book “Storyworthy”):

  • Take five minutes daily to reflect on your day
  • Ask: If you had to tell a story from today, what would it be?
  • Write one sentence about it
  • Look for difficult times, moments of growth, and proud or awkward experiences
  • Create two columns: date and story

2. Identify Emotional Significance

Pay attention to events that leave a strong impression:

  • What moments do you replay in your mind?
  • What makes you uncomfortable?
  • What reveals your vulnerability?
  • What forces you to confess a character flaw?
  • What challenges your core beliefs about yourself?

3. Choose Meaningful Experiences

Select experiences that have been transformational:

  • Events that caused you to rethink or reconsider your point of view
  • Circumstances that resulted in personal or professional growth
  • Painful situations that forced you to change your beliefs
  • Unique life experiences that taught you important lessons
  • Moments where you overcame adversity

Refinement Techniques

1. Editing and Revision

After completing your first draft:
  • Edit for grammar, spelling, and punctuation errors
  • Add any extra details you may have forgotten
  • Remove irrelevant information that doesn’t serve the core message
  • Ensure transitions between sections are smooth
  • Check that your narrative has a clear beginning, middle, and end

2. Seek Feedback

Get perspective from others:

  • Ask friends or family members to read your narrative
  • Be clear about what kind of feedback you’re looking for
  • Listen to their responses without taking criticism personally
  • Consider whether readers connect with your story as intended
  • Revise based on feedback to improve clarity and impact

3. Maintain an Enjoyable Process

While writing a personal narrative can be emotionally challenging:

  • Have fun with the process
  • Relax and tell your story naturally
  • Remember that the more relaxed you are, the more authentic your narrative will feel
  • View the process as potentially therapeutic and liberating
  • Recognize that sharing your story can benefit both you and your readers

Part IV: Practical Applications of Personal Narratives

Therapeutic and Personal Growth

Personal narratives can be used for:

  • Processing difficult experiences
  • Gaining clarity about life events
  • Discovering patterns in your behavior or thinking
  • Building self-awareness
  • Creating meaning from challenging circumstances

Professional and Leadership Contexts

Personal narratives are valuable in:

  • Building trust with teams or clients
  • Demonstrating values in action
  • Illustrating problem-solving approaches
  • Creating connection in professional relationships
  • Establishing credibility through lived experience

Educational and Inspirational Purposes

Share personal narratives to:

  • Teach important life lessons
  • Inspire others facing similar challenges
  • Provide practical guidance based on experience
  • Create empathy for different perspectives
  • Motivate positive change in others

Part V: A Framework for Applying Narrative Transportation in Personal Narratives

Step 1: Understand Your Purpose

Before crafting your personal narrative, clarify:
  • What you want to achieve with your story
  • Who your audience is
  • What change you hope to inspire
  • What beliefs or attitudes you want to influence
  • What emotional response you want to evoke

Step 2: Select the Right Experience

Choose an experience that:
  • Has transformed you in some way
  • Contains universal themes or lessons
  • Evokes strong emotions
  • Demonstrates vulnerability and authenticity
  • Has clear stakes and consequences

Step 3: Structure for Maximum Impact

Organize your narrative to:
  • Begin with a compelling hook
  • Establish context and background
  • Build tension through challenges and obstacles
  • Reach a climactic moment of realization or change
  • Resolve with reflection and meaning-making

Step 4: Enhance Transportation Elements

Deliberately incorporate elements that facilitate transportation:

  • Rich sensory details to create mental imagery
  • Emotional authenticity to create connection
  • Vulnerability to build trust
  • Universal themes to create relevance
  • Transformation to inspire hope

Step 5: Connect to the Reader

Make your narrative relevant by:

  • Explicitly stating the takeaway or lesson
  • Connecting your experience to broader human experiences
  • Addressing how your story might help the reader
  • Creating points of similarity between your journey and theirs
  • Inviting the reader to reflect on their own experiences

Step 6: Refine and Test

Improve your narrative by:

  • Getting feedback from trusted readers
  • Assessing whether it creates the intended emotional response
  • Checking if the core message is clear
  • Ensuring the narrative flows smoothly
  • Confirming that it feels authentic and honest

Conclusion

Narrative transportation offers a powerful framework for understanding how stories captivate us and influence our beliefs, attitudes, and behaviors. By applying the principles of narrative transportation to personal narratives, you can create stories that not only engage readers but potentially transform them.

Personal narratives have unique strengths in creating transportation through authenticity, vulnerability, and shared human experience. When crafted with intention and skill, they can build trust, foster connection, and inspire change in ways that other forms of communication cannot.

The techniques outlined in this guide provide practical approaches to enhancing narrative transportation in your personal stories. By focusing on structure, emotional connection, reader relevance, and meaningful content, you can create personal narratives that transport readers into your experiences and leave lasting impressions.

Whether you’re writing for personal growth, professional development, or to inspire others, understanding narrative transportation will help you craft more impactful personal narratives that resonate deeply with your audience.

References

  • Gerrig, R. J. (1993). Experiencing narrative worlds: On the psychological activities of reading.
  • Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives.
  • Hagmann, D., Ho, E. H., & Loewenstein, G. (2024). Personal narratives build trust.
  • Isberner, M. B., Richter, T., Schreiner, C., Eisenbach, Y., Sommer, C., & Appel, M. (2020). Empowering stories: Transportation into narratives with strong protagonists increases self-related control beliefs.
  • Lamott, A. (1994). Bird by Bird: Some Instructions on Writing and Life.
  • Neil, J. M., Gough, A., Kee, F., George, T. J., Pufahl, J., & Krieger, J. L. (2020). The influence of patient identification and narrative transportation on intentions to participate in cancer research.
  • Strange, J. J., & Leung, C. C. (1999). How anecdotal accounts in news and in fiction can influence judgments of a social problem’s urgency, causes, and cures.
  • Valenzuela, A., & Galli, L. (2024). The personalization effect: How personal narratives drive electronic word-of-mouth effectiveness.
  • Van Laer, T., De Ruyter, K., Visconti, L. M., & Wetzels, M. (2014). The extended transportation-imagery model: A meta-analysis of the antecedents and consequences of consumers’ narrative transportation.


Nature of Intelligence – Episode Four – Babies vs Machines

So let’s recap where we’re at with regards to the Complexity podcast from the Santa Fe Institute. This season covers the Nature of Intelligence, going beyond what it means for humans to be intelligent and taking a look at the state of AI (artificial intelligence) from that same perspective. So far we’ve addressed:

And now it’s time to talk about Babies vs Machines.

So far in this season, we’ve looked at intelligence from a few different angles, and it’s clear that AI systems and humans learn in very different ways. And there’s an argument to be made that if we just train AI to learn the way humans do, they’ll get closer to human-like intelligence. ~ Abha Eli Phoboo

This is an intriguing issue for me – the fact that LLMs are trained on data, not experiences. Even though much of the data they’re trained on came out of human experiences, data does not equal doing. And this is especially true with babies. No matter how much information you provide to an LLM that’s related to being a baby, that information is based on observation. And the last time I checked, two-year-olds were not writing scientific papers.

Unlike humans, large language models don’t have this intrinsic drive to participate in social interactions. ~ Melanie Mitchell

Most likely you’ve been in a room with a group of kids. A living room, backyard, playground, or in a school classroom. Think about the level of social interaction that occurs. They’re playing with each other, as well as telling and hearing stories. Maybe they’re laughing, or if a child had their toy taken away, crying.

This paradigm plays out over and over again in childhood, and without diving too deep into the complex topic of cognitive development, it’s safe to say that these interactions carry great meaning. But LLMs never had a childhood. So it raises the question: can LLM intelligence ever equate to human intelligence?

Transcript

Abha Eli Phoboo: The voices you’ll hear were recorded remotely across different countries, cities and work spaces.

Linda Smith: The data for training children has been curated by evolution. This is in stark contrast to all the large data models. They just scrape everything. Would you educate your kid by scraping off the web?

Abha: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

Abha: So far in this season, we’ve looked at intelligence from a few different angles, and it’s clear that AI systems and humans learn in very different ways. And there’s an argument to be made that if we just train AI to learn the way humans do, they’ll get closer to human-like intelligence.

Melanie: But the interesting thing is, our own development is still a mystery that researchers are untangling. For an AI system like a large language model, the engineers that create them know, at least in principle, the structure of their learning algorithms and the data that’s being fed to them. With babies though, we’re still learning about how the raw ingredients come together in the first place.

Abha: Today, we’re going to look at the world through an infant’s eyes. We know that the information babies are absorbing is very different from an LLM’s early development. But how different is it? What are babies experiencing at different stages of their development? How do they learn from their experiences? And how much does the difference between babies and machines matter?

Abha: Part One: The world through a baby’s eyes

Abha: Developmental psychology, the study of how cognition unfolds from birth to adulthood, has been around since the late 19th century. For the first 100 years of its history, this field consisted of psychologists observing babies and children and coming up with theories. After all, babies can’t tell us directly what they’re experiencing.

Melanie: But what if scientists could view the world through a baby’s own eyes? This has only become possible in the last 20 years or so. Psychologists are now able to put cameras on babies’ heads and record everything that they see and hear. And the data collected from these cameras is beginning to change how scientists think about the experiences most important to babies’ early learning.

Linda: I’m Linda Smith, and I’m a professor at Indiana University. I’m a developmental psychologist, and what I am interested in and have been for a kind of long career, is how infants break into language.

And some people think that means that you just study language, but in fact, what babies can do with their bodies, how well they can control their bodies, determines how well they can control their attention and what the input is, what they do, how they handle objects, whether they emit vocalizations, all those things play a direct role in learning language. And so I take a kind of complex or multimodal system approach to trying to understand the cascades and how all these pieces come together.

Melanie: Linda Smith is the Chancellor’s Professor of Psychological and Brain Sciences at Indiana University. She’s one of the pioneers of head-mounted camera research with infants.

Linda: I began putting head cameras on babies because people have throughout my career, major theorists, have at various points made the point that all kinds of things were not learnable. Language wasn’t learnable.

Chomsky said that basically. All this is not learnable. The only way you could possibly know it was for it to be a form of pre-wired knowledge. It seemed to me even back in the 70s, that my thoughts were, we are way smarter than that.

And I should surely hope that if I was put on some mysterious world in some matrix space or whatever, where the physics work differently, that I could figure it out. But we had no idea what the data are.

Most people assume that at the scale of daily life, massive experience, the statistics are kind of the same for everybody. But by putting head cameras on babies, we have found out that they are absolutely, and I’m not alone in this, there’s a lot of people doing this, we have found out that it is absolutely not the same.

Melanie: Linda’s talking about the statistics of the visual world that humans experience. We perceive correlations — certain objects tend to appear together, for example chairs are next to tables, trees are next to shrubs, shoes are worn on feet.

Or at an even more basic, unconscious level, we perceive statistical correlations among edges of objects, colors, certain properties of light, and so on. We perceive correlations in space as well as in time.

Abha: Linda and others discovered that the visual statistics that the youngest babies are exposed to, what they’re learning from in their earliest months, are very different from what we adults tend to see.

Linda: There they are in the world, they’re in their little seats, you know, looking, or on somebody’s shoulder looking. And the images in front of their face, the input available to the eye changes extraordinarily slowly, and slow is good for extracting information.

In the first three months, babies make remarkable progress, both in the tuning of the foundational periods of vision, foundational aspects of vision, edges, contrast sensitivity, chromatic sensitivity. But it’s not like they wait till they get all the basic vision worked out before they can do anything else.

The first three months define the period of faces, they recognize parents’ faces, they become biased in faces. If they live in one ethnic group, they can recognize those faces better and discriminate them better than if they live in another. And all this happens by three months. And some measures suggest that the first three to four months, this is Daphne Mauer’s amazing work of babies with cataracts, that if you don’t have a cataract removed before four months of age for infantile cataracts, that human face perception is disrupted for life.

And that’s likely in the lower level neural circuits, although maybe it’s in the face ones as well. And babies who are three months old can discriminate dogs from cats. I mean, it’s not like they’re not learning anything. They are building a very impressive visual system.

Many of our other mammalian friends get born and immediately get up and run around. We don’t. We sit there for three months. You’ve got to believe that’s important, right?

Melanie: Linda and her collaborators analyzed the data from head-mounted cameras on infants. And they found that over their first several months of life, these infants are having visual experiences that are driven by their developing motor abilities and their interactions with parents and other caregivers.

And the process unfolds in a way that enables them to efficiently learn about the world. The order in which they experience different aspects of their visual environment actually facilitates learning.

Linda: It’s a principle of learning, not a principle of the human brain. It’s a principle of the structure of data. I think what Mother Nature is doing is, it’s taking the developing baby who’s got to learn everything in language and vision and holding objects and sounds and everything, okay, and social relations and controlling self-regulation.

It is taking them on a little walk through the solution space. The data for training children has been curated by evolution. This is in sort of a marked contrast to all the large data models, right? They just scrape everything. Would you educate your kid by scraping off the web? I mean, would you train your child on this? So anyway, I think the data is important.

Abha: Another developmental psychologist who’s focused on babies and the data they experience is Mike Frank.

Mike Frank: I’m Mike Frank. I’m a professor of psychology at Stanford, and I’m generally interested in how children learn. So how they go from being speechless, wordless babies to, just a few years later, kids that can navigate the world. And so the patterns of growth and change that support that is what fascinates me, and I tend to use larger data sets and new methodologies to investigate those questions.

When I was back in grad school, people started working with this new method, they started putting cameras on kids’ heads. And so Pawan Sinha did it with his newborn and gave us this amazing rich look at what it looked like to be a newborn perceiving the visual world.

And then pioneers like Linda Smith and Chen Yu and Karen Adolph and Dick Aslin and others started experimenting with the method and gathering these really exciting data sets that were maybe upending our view of what children’s input looked like. And that’s really critical because if you’re a learning scientist, if you’re trying to figure out how learning works, you need to know what the inputs are as well as what the processes of learning are.

So I got really excited about this. And when I started my lab at Stanford, I started learning a little bit of crafting and trying to build little devices. We’d order cameras off the internet and then try to staple them onto camping headlamps or glue them on a little aftermarket fisheye lens.

We tried all these different little crafty solutions to get something that kids would enjoy wearing. At that time we were in advance of computer vision technologies by probably about five or seven years, so we thought naively that we could process this flood of video that we were getting from kids, put it through computer vision, and have an answer as to what the kids were seeing. It turned out the vision algorithms failed completely on these data.

They couldn’t process it at all, in part because the cameras were bad. And so they would have just a piece of what the child was seeing, and in part because the vision algorithms were bad, and they were trained on Facebook photos, not on children’s real input. And so they couldn’t process these very different angles and very different orientations and occlusions, cutting off faces and so forth.

So, that was how I got into it, I was thinking I could use computer vision to measure children’s input. And then it turned out I had to wait maybe five or seven years until the algorithms got good enough that that was true.

Melanie: So what are the most interesting things people have learned from this kind of data?

Mike: Well, as somebody interested in communication and social cognition and little babies, I thought the discovery, which I think belongs to Linda Smith and to her collaborators, the discovery that really floored me was that we’d been talking about gaze following and looking at people’s faces for years, that human gaze and human faces were this incredibly rich source of information.

And then when we looked at the head mounted camera videos, babies actually didn’t see faces that often because they’re lying there on the floor. They’re crawling. They’re really living in this world of knees. And so it turned out that when people were excited to spend time with the baby, or to manipulate their attention, they would put their hands right in front of the baby’s face and put some object right in the baby’s face.

And that’s how they would be getting the child’s attention or directing the child’s attention or interacting with them. It’s not that the baby would be looking way up there in the air to where the parent was and figuring out what the parent was looking at.

So this idea of sharing attention through hands and through manipulating the baby’s position and what’s in front of the baby’s face, that was really exciting and surprising as a discovery. And I think we’ve seen that borne out in the videos that we take in kids homes.

Abha: And doing psychological research on babies doesn’t come without its challenges.

Mike: You know, if you want to deal with the baby, you have to recruit that family, make contact with them, get their consent for research. And then the baby has to be in a good mood to be involved in a study or the child has to be willing to participate. And so we work with families online and in person.

We also go to local children’s museums and local nursery schools. And so, often for each of the data points that you see, at least in a traditional empirical study, that’s hours of work by a skilled research assistant or a graduate student doing the recruitment, actually delivering the experience to the child.

Melanie: Over the last several years, Mike and his collaborators have created two enormous datasets of videos taken by head-mounted cameras on children from six months to five years old. These datasets are not only being used by psychologists to better understand human cognitive development, but also by AI researchers to try to train machines to learn about the world more like the way babies do.

We’ll talk more about this research in Part 2.

Melanie: Part 2: Should AI systems learn the same way babies do?

Melanie: As we discussed in our previous episode, while large language models are able to do a lot of really impressive things, their abilities are still pretty limited when compared to humans. Many people in the AI world believe that if we just keep training large language models on more and more data, they’ll get better and better, and soon they’ll match or surpass human intelligence.

Abha: But other AI researchers think there’s something fundamental missing in the way these systems work, and in how they are currently trained. But what’s the missing piece? Can new insights about human cognitive development create a path for AI systems to understand the world in a more robust way?

Linda: I think the big missed factor in understanding human intelligence is understanding the structure, the statistics, of the input. And I’d like to make a case that the data used for training is the biggest fail point of current AI.

Abha: Today’s neural networks are typically trained on language and images scraped from the web. Linda and other developmental psychologists have tried something different — they’ve trained AI neural networks on image frames from the videos collected from head-mounted cameras. The question is whether this kind of data will make a difference in the neural networks’ abilities.

Linda: If you pre-train them with babies’ visual inputs, 400 million images, and you order them from birth to 12 months of age, what we call the developmental order, versus ordering them backwards from oldest to youngest, or randomizing them, the developmental order leads to a trained network that is better able to learn names for actions, and names for objects, in later training.

Not everybody is interested in this. They bought into the view that if you get enough data, any data, everything ever known or said in the world, okay, that you will be smart. You’ll be intelligent. It just does not seem to me that that’s necessarily true. There’s a lot of stuff out there that’s not accurate, dead wrong, and odd. Just scraping massive amounts of current knowledge that exists of everything ever written or every picture ever taken, it’s just, it’s not ideal.
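Linda’s three training regimes can be sketched in a few lines of code. This is an illustrative sketch only: the function name and data layout are hypothetical, not from any real experiment’s codebase, and the real studies operate on hundreds of millions of head-camera frames.

```python
import random

# Hypothetical sketch of the curriculum comparison Linda describes:
# take the same set of frames and present them in developmental order
# (youngest to oldest), reversed, or shuffled, before downstream training.
def make_curriculum(frames_by_age, order="developmental", seed=0):
    """frames_by_age: dict mapping age in months -> list of image frames."""
    ages = sorted(frames_by_age)
    if order == "reversed":
        ages = list(reversed(ages))
    # Flatten the per-age frame lists in the chosen age order.
    frames = [f for age in ages for f in frames_by_age[age]]
    if order == "shuffled":
        random.Random(seed).shuffle(frames)
    return frames
```

The finding, as Linda states it, is that networks pre-trained on the `"developmental"` ordering go on to learn object and action names better than those given the same data reversed or shuffled.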

Melanie: Is it a matter of getting better data, or getting better sort of ordering of how you teach these systems, or is there something more fundamental missing?

Linda: I don’t think it’s more fundamental actually, okay. I think it’s better data. I think it’s multimodal data. I think it’s data that is deeply in the real world, not in human interpretations of that real world, but deeply in the real world, data coming through the sensory systems. It’s the raw data.

It is not data that has gone through your biased, cultish views on who should or should not get funded in the mortgage, not biased by the worst elements on the web’s view of what a woman should look like, not biased in all these ways. It’s not been filtered through that information. It is raw, okay? It is raw.

Abha: Linda believes that the structure of the data, including its order over time, is the most important factor for learning in both babies and in AI systems. I asked her about the point Alison Gopnik made in our first episode: how important is it that the learning agent, whether it’s a child or a machine, is actively interacting in the real world, rather than passively learning from data it’s given?

Linda acknowledges that this kind of doing, rather than just observing (being able, through one’s movements or attention, to actually generate the data that one’s learning from), is also key.

Linda: I think you get a lot by observing, but the doing is clearly important. So this is the multimodal enactive kind of view, which I think, doesn’t just get you data from the world at the raw level, although I think that would be a big boon, okay? From the real world, not photographs, okay? And in time. What I do in the next moment, what I say to you, depends on my state of knowledge.

Which means that the data that comes in at the next moment is related to what I need to learn or where I am in my learning. Because it is what I know right now is making me do stuff. That means a learning system and the data for learning, because the learning system generates it, are intertwined. It’s like the very same brain that’s doing the learning is the brain that’s generating the data.

Abha: Perhaps if AI researchers focused more on the structure of their training data rather than on sheer quantity, and if they enabled their machines to interact directly with the world rather than passively learning from data that’s been filtered through human interpretation, AI would end up having a better understanding of the world. Mike notes that, for example, the amount of language current LLMs are trained on is orders of magnitude larger than what kids are exposed to.

Mike: So modern AI systems are trained on huge data sets, and that’s part of their success. So you get the first glimmerings of this amazing flexible intelligence that we start to see when we see GPT-3 with 500 billion words of training data. It’s a trade secret of the companies how much training data they use, but the most recent systems are at least in the 10 trillion plus range of data.

A five-year-old has maybe heard 60 million words. That’d be a reasonable estimate. That’s kind of a high estimate for what a five-year-old has heard. So that’s, you know, six orders of magnitude different in some ways, five to six orders of magnitude different. So the biggest thing that I think about a lot is how huge that difference is between what the child hears and what the language model needs to be trained on.

Kids are amazing learners. And I think by drawing attention to the relative differences in the amount of data that kids and LLMs get, that really highlights just how sophisticated their learning is.
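Mike’s back-of-the-envelope comparison holds up; a quick calculation using the round numbers he quotes:

```python
import math

# Rough arithmetic behind Mike's estimate: ~10 trillion training words
# for recent LLMs versus ~60 million words heard by a five-year-old.
llm_words = 10_000_000_000_000   # 10 trillion (his stated lower bound)
child_words = 60_000_000         # 60 million (a high estimate, per Mike)
gap = math.log10(llm_words / child_words)
print(round(gap, 1))  # about 5.2, i.e. five to six orders of magnitude
```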

Melanie: But of course they’re getting other sensory modalities like vision and touching things and being able to manipulate objects. Is that gonna make a big difference with the amount of training they’re gonna need?

Mike: This is right where the scientific question is for me, which is what part of the child as a system, as a learning system or in their broader data ecosystem makes the difference. And you could think, well, maybe it’s the fact that they’ve got this rich visual input alongside the language. Maybe that’s the really important thing.

And then you’d have to grapple with the fact that adding, just adding pictures to language models doesn’t make them particularly that much smarter. At least in the most recent commercial systems, adding pictures makes them cool and they can do things with pictures now, but they still make the same mistakes about reasoning about the physical world that they did before.

Abha: Mike also points out that even if you train LLMs on the data generated by head-mounted cameras on babies, that doesn’t necessarily solve the physical reasoning problems.

Melanie: In fact, sometimes you get the opposite effect, where instead of becoming smarter, this data makes these models perform less well. As Linda pointed out earlier, there’s something special about having generated the data oneself, with one’s own body and with respect to what one actually wants to — or needs to — learn.

Mike: There are also some other studies that I think are a bit more of a cautionary tale, which is that if you train models on a lot of human data, they still don’t get that good. Actually, the data that babies have appears to be more, not less challenging, for language models and for computer vision models. These are pretty new results from my lab, but we find that performance doesn’t scale that well when you train on baby data.

You go to videos from a child’s home, you train models on that. And the video is all of the kid playing with the same truck, or there’s only one dog in the house. And then you try to get that model to recognize all the dogs in the world. And it’s like, no, it’s not the dog. So that’s a very different thing, right? So the data that kids get is both deeper and richer in some ways and also much less diverse in other ways.

And yet their visual system is still remarkably good at recognizing a dog, even when they’ve only seen one or two. So that kind of really quick learning and rapid generalization to the appropriate class, that’s something that we’re still struggling with in computer vision. And I think the same thing is true in language learning.

So doing these kinds of simulations with real data from kids, I think, could be very revealing of the strengths and weaknesses of our models.

Abha: What does Mike think is missing from our current models? Why do they need so many more examples of a dog before they can do the simple generalizations that kids are doing?

Mike: Maybe though it’s having a body, maybe it’s being able to move through space and intervene on the world, to change things in the world. Maybe that’s what makes the difference. Or maybe it’s being a social creature interacting with other people who are structuring the world for you and teaching you about the world. That could be important.

Or maybe it’s the system itself. Maybe it’s the baby and the baby has built in some concepts of objects and events and the agents, the people around them as social actors. And it’s really those factors that make the difference.

Abha: In our first episode, we heard a clip of Alison Gopnik’s one-year old grandson experimenting with a xylophone — it’s a really interactive kind of learning, where the child is controlling and creating the data, and then they’re able to generalize to other instruments and experiences. And when it comes to the stuff that babies care about most, they might only need to experience something once for it to stay with them.

Melanie: But also remember that Alison’s grandson was playing music with his grandfather — even though he couldn’t talk, he had a strong desire to play with, to communicate with his grandfather. Unlike humans, large language models don’t have this intrinsic drive to participate in social interactions.

Mike: A six month old can communicate. They can communicate very well about their basic needs. They can transfer information to other people. There’s even some experimental evidence that they can understand a little bit about the intentions of the other people and understand some rudiments of what it means to have a signal to get somebody’s attention or to get them to do something.

So they actually can be quite good at communication. So communication and language being two different things. Communication enables language and is at the heart of language, but you don’t have to know a language in order to be able to communicate.

Melanie: In contrast to babies, LLMs aren’t driven to communicate. But they can exhibit what Mike calls “communicative behavior”, or what, in the previous episode, Murray Shanahan would have called “role-playing” communication.

Mike: LLMs do not start with communicative ability. LLMs are in the most basic, you know, standard architectures, prediction engines. They are trying to optimize their prediction of the next word. And then of course we layer on lots of other fine-tuning and reinforcement learning with human feedback, these techniques for changing their behavior to match other goals, but they really start basically as predictors.

And it is one of the most astonishing parts about the LLM revolution that you get some communicative behaviors out of very large versions of these models. So that’s really remarkable and I think it’s true. I think you can see pretty good evidence that they are engaging in things that we would call communicative.

Does that mean they fundamentally understand human beings? I don’t know and I think that’s pretty tough to demonstrate. But they engage in the kinds of reasoning about others’ goals and intentions that we look for in children. But they only do that when they’ve got 500 billion words or a trillion words of input.

So they don’t start with communication and then move to language the way we think babies do. They start with predicting whatever it is that they are given as input, which in the case of LLMs is language. And then astonishingly, they appear to extract some higher level generalizations that help them manifest communicative behaviors.

Abha: In spite of the many differences between LLMs and babies, Mike’s still very excited about what LLMs can contribute to our understanding of human cognition.

Mike: I think it’s an amazing time to be a scientist interested in the mind and in language. For 50 years, we’ve been thinking that the really hard part of learning human language is making grammatical sentences. And from that perspective, I think it is intellectually dishonest not to think that we’ve learned something big recently, which is that when you train models, relatively unstructured models, on lots of data about language, they can recover the ability to produce grammatical language. And that’s just amazing.

There were many formal arguments and theoretical arguments that that was impossible, and those arguments were fundamentally wrong, I think. And we have to come to grips with that as a field because it’s really a big change.

On the other hand, the weaknesses of the LLMs also are really revealing, right? That there are aspects of meaning, often those aspects that are grounded in the physical world that are trickier to reason about and take longer and need much more input than just getting a grammatical sentence. And that’s fascinating too.

The classic debate in developmental cognitive science has been about nativism versus empiricism, what must be innate to the child for the child to learn. I think my views are changing rapidly on what needs to be built in. And the next step is going to be trying to use those techniques to figure out what actually is built into the kids and to the human learners.

I’m really excited about the fact that these models have not just become interesting artifacts from an engineering or commercial perspective, but that they are also becoming real scientific tools, real scientific models that can be used and explored as part of this broad, open, accessible ecosystem for people to work on the human mind.

So just fascinating to see this new generation of models get linked to the brain, get it linked to human behavior and becoming part of the scientific discussion.

Abha: Mike’s not only interested in how LLMs can provide insight into human psychology. He’s also written some influential articles on how experimental practice in developmental psychology can help improve our understanding of LLMs.

Melanie: You’ve written some articles about how methods from developmental psychology research might be useful in evaluating the capabilities of LLMs. So what do you see as the problems with the way these systems are currently being evaluated? And how can research psychology contribute to this?

Mike: Well, way back in 2023, which is about 15 years ago in AI time, when GPT-4 came out, there was this whole set of really excited responses to it, which is great. It was very exciting technology. It still is. And some of them looked a lot like the following. “I played GPT-4 the transcript of the Moana movie from Disney, and it cried at the end and said it was sad. Oh my god, GPT-4 has human emotions.” Right.

And this kind of response, to me as a psychologist, struck me as a classic research methods error: you’re not doing an experiment, you’re just observing this anecdote about a system and then jumping to the conclusion that you can infer what’s inside the system’s mind. And, you know, if psychology has developed anything, it’s a body of knowledge about the methods and the rules of that game of inferring what’s inside somebody else’s mind.

It’s by no means a perfect field, but some of these things are pretty, you know, well described and especially in developmental psych. So, classic experiments have a control group and an experimental group, and you compare between those two groups in order to tell if some particular active ingredient makes the difference. And so minimally, you would want to have evaluations with two different, sort of types of material, and comparison between them in order to make that kind of inference.

And so that’s the sort of thing that I have gone around saying and have written about a bit is that you just need to take some basic tools from experimental methods, doing controlled experiments, using kind of tightly controlled simple stimuli so that you know why the LLM or why the child gives you a particular response and so forth, so that you don’t get these experimental findings that turn out later to be artifacts because you didn’t take care of a particular confound in your stimulus materials.
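Mike’s minimal requirement, a control condition and an experimental condition, could be sketched as a tiny evaluation harness. Everything here is illustrative: the function and data shapes are hypothetical, not any published benchmark.

```python
# Sketch of a controlled evaluation: score a model on matched pairs of
# stimuli that differ only in the ingredient under test, instead of
# drawing conclusions from a single anecdote.
def paired_evaluation(model, pairs):
    """pairs: list of (control_prompt, experimental_prompt, expected)."""
    control = sum(model(c) == e for c, _, e in pairs)
    experimental = sum(model(x) == e for _, x, e in pairs)
    n = len(pairs)
    return control / n, experimental / n
```

A gap between the two accuracies, rather than performance on either condition alone, is what licenses an inference about the ingredient that was varied.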

Melanie: What kind of response have you gotten from the AI community?

Mike: I think there’s actually been some openness to this kind of work. There has been a lot of push-back on those initial evaluations of language models. Just to give one kind of concrete example here, I was making fun of people with this human emotions bit, but there were actually a lot of folks that made claims about different ChatGPT versions having what’s called theory of mind, that is being able to reason about the beliefs and desires of other people. So the initial evaluations took essentially stories from the developmental psychology literature that are supposed to diagnose theory of mind. These are things like the Sally Anne task.

Abha: You might remember the Sally-Anne Test from our last episode. Sally puts an object — let’s say a ball, or a book, or some other thing, in one place and then leaves. And then while Sally’s away, Anne moves that object to another hiding spot. And then the test asks: Where will Sally look for her object when she returns?

Melanie: And even though you and I know where Anne put the book or the ball, we also know that Sally does not know that, so when she returns she’ll look in the wrong place for it. Theory of mind is understanding that Sally has a false belief about the situation because she has her own separate experience.

Abha: And if you give ChatGPT a description of the Sally-Anne test, it can solve it. But we don’t know if it can do it because it’s actually reasoning, or just because it’s absorbed so many examples during its training period. And so researchers started making small changes that initially tripped up the LLMs, like changing the names of Sally and Anne. But LLMs have caught on to those too.
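The “small changes” Abha mentions, like swapping the names, amount to templating the story. A hypothetical sketch (the wording, names, and objects below are illustrative, not taken from any actual benchmark):

```python
# Hypothetical sketch of generating surface variants of the Sally-Anne
# story, so that a correct answer can't be explained by the model having
# memorized a version with the original names.
TEMPLATE = ("{a} puts a {obj} in the {src} and leaves. While {a} is away, "
            "{b} moves the {obj} to the {dst}. Where will {a} look for "
            "the {obj} when she returns?")

def make_variants(name_pairs, objects, location_pairs):
    return [TEMPLATE.format(a=a, b=b, obj=obj, src=src, dst=dst)
            for a, b in name_pairs
            for obj in objects
            for src, dst in location_pairs]
```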

Mike: LLMs are pretty good at those kinds of superficial alterations. So maybe you need to make new materials. Maybe you need to actually make new puzzles about people’s beliefs that don’t involve changing the location of an item. Right. So people got a lot better at this. And I wouldn’t say that the state of the art is perfect now. But the approach that you see in papers that have come out even just a year later is much more sophisticated.

They have a lot of different puzzles about reasoning about other people. They’re looking at whether the LLM correctly diagnoses why a particular social faux pas was embarrassing or whether a particular way of saying something was awkward. There’s a lot more reasoning that is necessary in these new benchmarks.

So I think this is actually a case where the discussion, which I was just a small part of, really led to an improvement in the research methods. We still have further to go, but it’s only been a year. So I’m quite optimistic that all of this discussion of methods has actually improved our understanding of how to study the models and also actually improved our understanding of the models themselves.

Abha: So, Melanie, from everything Mike just said, it sounds like researchers who study LLMs are still figuring out the best way to understand how they work. And it’s not unlike the long process of trying to understand babies, too. Right?

Melanie: Right. You know, when I first heard about psychologists putting cameras on babies’ heads to record, I thought it was hilarious. But it sounds like the data collected from these cameras is actually revolutionizing developmental psychology! We heard from Linda that the data shows that the structure of the baby’s visual experiences is quite different from what people had previously thought.

Abha: Right. I mean, it’s amazing that, you know, they don’t actually see our faces so much. As Mike mentioned, they’re in a world of knees, right? And Linda seems to think that the structuring of the data by Mother Nature, as she put it, is what allows babies to learn so much in their first few years of life.

Melanie: Right. Linda talked about the so-called developmental order, which is the temporal order in which babies get different kinds of visual or other experiences as they mature. And what they see and hear is driven by what they can do with their own bodies and their social relationships.

And importantly, it’s also driven by what they want to learn, what they’re curious about. It’s completely different from the way large language models learn, which is by humans feeding them huge amounts of text and photos scraped from the web.

Abha: And this developmental order, I mean, it’s also conducive to babies learning the right things at the right time. And remember Mike pointed out that the way babies and children learn allows them to do more with less.

They’re able to generalize much more easily than LLMs can. But there’s still a lot of mystery about all of this. People are still trying to make sense of the development of cognition in humans, right?

Melanie: And interestingly, Mike thinks that large language models are actually going to help psychologists in this, even though they’re so different from us. So for example, LLMs can be used as a proof of principle of what can actually be learned versus what has to be built in and of what kinds of behaviors can emerge, like the communication behavior he talked about.

I’m also personally very excited about the other direction, using principles from child development in improving AI systems and also using principles from experimental methodology in figuring out what LLMs are and aren’t capable of.

Abha: Yeah. Often it seems like trying to compare the intelligence of humans and computers is like trying to compare apples to oranges. They seem so different. And trying to use tests that are typically used in humans, like the theory of mind test that Mike referred to and Tomer talked about in our last episode, they don’t seem to always give us the insights we’re looking for.

So what kinds of approaches should be used to evaluate cognitive abilities and LLMs? I mean, is there something to be learned from the methods used to study intelligence in non-human animals?

Melanie: Well, in our next episode, we’ll look more closely at how to assess intelligence, and if we’re even asking the right questions.

Ellie Pavlick: I think, what it means when a person passes the MCAT, or scores well on the SAT, is not the same thing as what it might mean when a neural network does that. We don’t really know what it means when a neural network does that. And that’s part of the problem.

Melanie: That’s next time, on Complexity. Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure, and our theme song is by Mitch Mignano. Additional music from Blue Dot Sessions. I’m Melanie. Thanks for listening.


Hans and Ola Rosling: How not to be ignorant about the world @ TEDSalon Berlin

I had the pleasure of attending a special TED event in 2014. TEDSalon Berlin was just a one-day affair, yet it featured a number of compelling talks that served as examples of impactful stories on global issues. This post is an analysis of a talk given by Hans Rosling and Ola Rosling on how knowledge, or a lack of knowledge, shapes our view of the world. For a better future, we need to understand today.

Watch Hans and Ola Rosling’s TED Talk. The numbers being presented represent serious topics, yet the focus is not on digging into the trends, but on highlighting how our perceptions of these trends are so often wrong. It’s a fun talk to watch, which doesn’t often happen with statistics, yet it inspires us to use caution before jumping to conclusions.

Transcript

(my notes in red)

Hans Rosling: I’m going to ask you three multiple choice questions. Use this device. Use this device to answer. The first question is, how did the number of deaths per year from natural disaster, how did that change during the last century? Did it more than double, did it remain about the same in the world as a whole, or did it decrease to less than half? Please answer A, B or C. I see lots of answers. This is much faster than I do it at universities. They are so slow. They keep thinking, thinking, thinking. Oh, very, very good.

Quite different from the reserved style of most TED speakers, Hans brings the energy level up immediately with the tone, volume, and passion in his voice. The digital interaction with the audience also differentiates this talk from a simple narration and makes the audience a character within the narration.

And we go to the next question. So how long did women 30 years old in the world go to school: seven years, five years or three years? A, B or C? Please answer.

And we go to the next question. In the last 20 years, how did the percentage of people in the world who live in extreme poverty change? Extreme poverty — not having enough food for the day. Did it almost double, did it remain more or less the same, or did it halve? A, B or C?

Now, answers. You see, deaths from natural disasters in the world, you can see it from this graph here, from 1900 to 2000. In 1900, there was about half a million people who died every year from natural disasters: floods, earthquakes, volcanic eruption, whatever, droughts. And then, how did that change?

Gapminder asked the public in Sweden. This is how they answered. The Swedish public answered like this: Fifty percent thought it had doubled, 38 percent said it’s more or less the same, 12 said it had halved.

This is the best data from the disaster researchers, and it goes up and down, and it goes to the Second World War, and after that it starts to fall and it keeps falling and it’s down to much less than half. The world has been much, much more capable as the decades go by to protect people from this, you know. So only 12 percent of the Swedes know this.

Hans uses a chart to map the answers for the first question based on a research study, then displays the actual answer as a line that proceeds across the chart along the time axis. It’s a powerful way to overlay the statistical answer on the audience’s predictions.

So I went to the zoo and I asked the chimps. The chimps don’t watch the evening news, so the chimps, they choose by random, so the Swedes answer worse than random. Now how did you do? That’s you. You were beaten by the chimps. But it was close. You were three times better than the Swedes, but that’s not enough. You shouldn’t compare yourself to Swedes. You must have higher ambitions in the world.

Adding humor to a statistical story block isn’t the easiest thing to do, but Hans is a natural comedian alongside his talent at following the science. Can you insert a lighthearted comedic moment while sharing data? Some topics don’t lend themselves to humor, so be mindful.

Let’s look at the next answer here: women in school. Here, you can see men went eight years. How long did women go to school? Well, we asked the Swedes like this, and that gives you a hint, doesn’t it? The right answer is probably the one the fewest Swedes picked, isn’t it? Let’s see, let’s see. Here we come. Yes, yes, yes, women have almost caught up. This is the U.S. public. And this is you. Here you come. Ooh. Well, congratulations, you’re twice as good as the Swedes, but you don’t need me —

So how come? I think it’s like this, that everyone is aware that there are countries and there are areas where girls have great difficulties. They are stopped when they go to school, and it’s disgusting. But in the majority of the world, where most people in the world live, most countries, girls today go to school as long as boys, more or less. That doesn’t mean that gender equity is achieved, not at all. They still are confined to terrible, terrible limitations, but schooling is there in the world today. Now, we miss the majority. When you answer, you answer according to the worst places, and there you are right, but you miss the majority.

Beyond the numbers themselves, and how different groups fared when predicting, Hans offers an insight as to why so many people got the answer wrong. We tend to be more aware of where problems exist, not successes. He doesn’t mention it, but in my experience that’s because the news focuses on problems over successes. I’d like to see that topic analyzed in parallel, but that would make for a much longer talk.

What about poverty? Well, it’s very clear that poverty here was almost halved, and in U.S., when we asked the public, only five percent got it right. And you? Ah, you almost made it to the chimps. That little, just a few of you! There must be preconceived ideas, you know. And many in the rich countries, they think that oh, we can never end extreme poverty. Of course they think so, because they don’t even know what has happened. The first thing to think about the future is to know about the present.

This last line is a fundamental perspective that Hans is bringing into this talk – that we need to know about the present and understand what is really going on now in order to consider what the future might look like.

These questions were a few of the first ones in the pilot phase of the Ignorance Project in Gapminder Foundation that we run, and it was started, this project, last year by my boss, and also my son, Ola Rosling. He’s cofounder and director, and he wanted, Ola told me we have to be more systematic when we fight devastating ignorance. So already the pilots reveal this, that so many in the public score worse than random, so we have to think about preconceived ideas, and one of the main preconceived ideas is about world income distribution.

Look here. This is how it was in 1975. It’s the number of people on each income, from one dollar a day. See, there was one hump here, around one dollar a day, and then there was one hump here somewhere between 10 and 100 dollars. The world was two groups. It was a camel world, like a camel with two humps, the poor ones and the rich ones, and there were fewer in between.

Continuing with his fun approach to numbers, Hans uses a prop to not only point at the graph behind him, but to elicit a laugh from the audience. Props are an old trick, but you don’t see them so often anymore, so it’s a refreshing change.

But look how this has changed: As I go forward, what has changed, the world population has grown, and the humps start to merge. The lower humps merged with the upper hump, and the camel dies and we have a dromedary world with one hump only. The percent in poverty has decreased. Still it’s appalling that so many remain in extreme poverty. We still have this group, almost a billion, over there, but that can be ended now.

The challenge we have now is to get away from that, understand where the majority is, and that is very clearly shown in this question. We asked, what is the percentage of the world’s one-year-old children who have got those basic vaccines against measles and other things that we have had for many years: 20, 50 or 80 percent?

Now, this is what the U.S. public and the Swedish answered. Look at the Swedish result: you know what the right answer is. Who the heck is a professor of global health in that country? Well, it’s me. It’s me. It’s very difficult, this. It’s very difficult.

However, Ola’s approach to really measure what we know made headlines, and CNN published these results on their web and they had the questions there, millions answered, and I think there were about 2,000 comments, and this was one of the comments. “I bet no member of the media passed the test,” he said.

So Ola told me, “Take these devices. You are invited to media conferences. Give it to them and measure what the media know.” And ladies and gentlemen, for the first time, the informal results from a conference with U.S. media. And then, lately, from the European Union media. You see, the problem is not that people don’t read and listen to the media. The problem is that the media doesn’t know themselves. What shall we do about this, Ola? Do we have any ideas?

Ola Rosling: Yes, I have an idea, but first, I’m so sorry that you were beaten by the chimps. Fortunately, I will be able to comfort you by showing why it was not your fault, actually. Then, I will equip you with some tricks for beating the chimps in the future. That’s basically what I will do.

But first, let’s look at why are we so ignorant, and it all starts in this place. It’s Hudiksvall. It’s a city in northern Sweden. It’s a neighborhood where I grew up, and it’s a neighborhood with a large problem. Actually, it has exactly the same problem which existed in all the neighborhoods where you grew up as well. It was not representative. Okay? It gave me a very biased view of how life is on this planet. So this is the first piece of the ignorance puzzle. We have a personal bias.

The talk pivots in two respects at this point. Hans gives the floor to his son, Ola, and it shifts from demonstrating that the public and the media have a lack of awareness when it comes to important statistics, to explaining why that is and what can be done about it.

It’s pretty much at the halfway mark, which is common in a problem / solution style talk. It’s important that your audience have a solid understanding of your topic before you present your idea for creating better outcomes in the future.

We have all different experiences from communities and people we meet, and on top of this, we start school, and we add the next problem. Well, I like schools, but teachers tend to teach outdated worldviews, because they learned something when they went to school, and now they describe this world to the students without any bad intentions, and those books, of course, that are printed are outdated in a world that changes. And there is really no practice to keep the teaching material up to date. So that’s what we are focusing on. So we have these outdated facts added on top of our personal bias.

What happens next is news, okay? An excellent journalist knows how to pick the story that will make headlines, and people will read it because it’s sensational. Unusual events are more interesting, no? And they are exaggerated, and especially things we’re afraid of. A shark attack on a Swedish person will get headlines for weeks in Sweden. So these three skewed sources of information were really hard to get away from.

Having presented the reasons for our general lack of knowledge, Ola uses a slide to help focus the audience’s mind on those three topics – Personal bias, Outdated facts, and News bias. The subject is far more complex than this, but for a talk under 20 minutes, it’s important to direct your narrative to the most important ideas. See if you can do that in three or fewer.

They kind of bombard us and equip our mind with a lot of strange ideas, and on top of it we put the very thing that makes us humans, our human intuition. It was good in evolution. It helped us generalize and jump to conclusions very, very fast. It helped us exaggerate what we were afraid of, and we seek causality where there is none, and we then get an illusion of confidence where we believe that we are the best car drivers, above the average. Everybody answered that question, “Yeah, I drive cars better.”

Okay, this was good evolutionarily, but now when it comes to the worldview, it is the exact reason why it’s upside down. The trends that are increasing are instead falling, and the other way around, and in this case, the chimps use our intuition against us, and it becomes our weakness instead of our strength. It was supposed to be our strength, wasn’t it?

So how do we solve such problems? First, we need to measure it, and then we need to cure it. So by measuring it we can understand what is the pattern of ignorance. We started the pilot last year, and now we’re pretty sure that we will encounter a lot of ignorance across the whole world, and the idea is really to scale it up to all domains or dimensions of global development, such as climate, endangered species, human rights, gender equality, energy, finance.

All different sectors have facts, and there are organizations trying to spread awareness about these facts. So I’ve started actually contacting some of them, like WWF and Amnesty International and UNICEF, and asking them, what are your favorite facts which you think the public doesn’t know?

Okay, I gather those facts. Imagine a long list with, say, 250 facts. And then we poll the public and see where they score worst. So we get a shorter list with the terrible results, like some few examples from Hans, and we have no problem finding these kinds of terrible results. Okay, this little shortlist, what are we going to do with it?

Well, we turn it into a knowledge certificate, a global knowledge certificate, which you can use, if you’re a large organization, a school, a university, or maybe a news agency, to certify yourself as globally knowledgeable. Basically meaning, we don’t hire people who score like chimpanzees. Of course you shouldn’t. So maybe 10 years from now, if this project succeeds, you will be sitting in an interview having to fill out this crazy global knowledge.

Part one of the solution is to create a knowledge certificate…

So now we come to the practical tricks. How are you going to succeed? There is, of course, one way, which is to sit down late nights and learn all the facts by heart by reading all these reports. That will never happen, actually. Not even Hans thinks that’s going to happen. People don’t have that time. People like shortcuts, and here are the shortcuts. We need to turn our intuition into strength again. We need to be able to generalize. So now I’m going to show you some tricks where the misconceptions are turned around into rules of thumb.

Part two of the solution is how to achieve that knowledge…

Let’s start with the first misconception. This is very widespread. Everything is getting worse. You heard it. You thought it yourself. The other way to think is, most things improve. So you’re sitting with a question in front of you and you’re unsure. You should guess “improve.” Okay? Don’t go for the worse. That will help you score better on our tests. That was the first one.

There are rich and poor and the gap is increasing. It’s a terrible inequality. Yeah, it’s an unequal world, but when you look at the data, it’s one hump. Okay? If you feel unsure, go for “the most people are in the middle.” That’s going to help you get the answer right.

Now, the next preconceived idea is first countries and people need to be very, very rich to get the social development like girls in school and be ready for natural disasters. No, no, no. That’s wrong. Look: that huge hump in the middle already have girls in school. So if you are unsure, go for “the majority already have this,” like electricity and girls in school, these kinds of things. They’re only rules of thumb, so of course they don’t apply to everything, but this is how you can generalize.

Let’s look at the last one. If something, yes, this is a good one, sharks are dangerous. No — well, yes, but they are not so important in the global statistics, that is what I’m saying. I actually, I’m very afraid of sharks. So as soon as I see a question about things I’m afraid of, which might be earthquakes, other religions, maybe I’m afraid of terrorists or sharks, anything that makes me feel, assume you’re going to exaggerate the problem. That’s a rule of thumb. Of course there are dangerous things that are also great. Sharks kill very, very few. That’s how you should think.

With these four rules of thumb, you could probably answer better than the chimps, because the chimps cannot do this. They cannot generalize these kinds of rules. And hopefully we can turn your world around and we’re going to beat the chimps. Okay? That’s a systematic approach.

Ola provides four methods of improving your odds when it comes to guessing trend lines, but are you convinced they will work? I’m not speculating either way. I’m simply asking the question because if you’re creating a problem / solution, idea-driven narrative, what will matter most is whether the audience buys into your idea.

Now the question, is this important? Yeah, it’s important to understand poverty, extreme poverty and how to fight it, and how to bring girls in school. When we realize that actually it’s succeeding, we can understand it. But is it important for everyone else who cares about the rich end of this scale? I would say yes, extremely important, for the same reason. If you have a fact-based worldview of today, you might have a chance to understand what’s coming next in the future.

We’re going back to these two humps in 1975. That’s when I was born, and I selected the West. That’s the current EU countries and North America. Let’s now see how the rest and the West compares in terms of how rich you are. These are the people who can afford to fly abroad with an airplane for a vacation. In 1975, only 30 percent of them lived outside EU and North America. But this has changed, okay?

So first, let’s look at the change up till today, 2014. Today it’s 50/50. The Western domination is over, as of today. That’s nice. So what’s going to happen next? Do you see the big hump? Did you see how it moved? I did a little experiment. I went to the IMF, International Monetary Fund, website. They have a forecast for the next five years of GDP per capita. So I can use that to go five years into the future, assuming the income inequality of each country is the same.

I did that, but I went even further. I used those five years for the next 20 years with the same speed, just as an experiment what might actually happen. Let’s move into the future. In 2020, it’s 57 percent in the rest. In 2025, 63 percent. 2030, 68.

And in 2035, the West is outnumbered in the rich consumer market. These are just projections of GDP per capita into the future. Seventy-three percent of the rich consumers are going to live outside North America and Europe. So yes, I think it’s a good idea for a company to use this certificate to make sure to make fact-based decisions in the future.

It gets a bit heavy with the rapid-fire numbers towards the end, and while I come away with the impression that, once again, my assumptions were wrong, I’m not sure that I come away with the feeling that the certificate is a good idea. That’s largely because the certificate itself was not fully explained.

One of the challenges that you’ll deal with in presenting an idea with impact is getting the audience to understand both the problem and solution in a short period of time. In this case, my view is that accomplishing that task would need twice the amount of time.

This is where rehearsing in front of other people becomes extremely valuable. Without telling your audience what your talk is about, just present it, then ask them what they thought the talk was about and ask for their opinion as to whether your talk shifted their perception. If people are unclear at the end, another editing cycle is called for.

Thank you very much.

[Note: all comments inserted into this transcript are my opinions, not those of the speaker, the TED organization, nor anyone else on the planet. In my view, each story is unique, as is every interpretation of that story. The sole purpose of these analytical posts is to inspire a storyteller to become a storylistener, and in doing so, make their stories more impactful.]
