
Ex Google CEO: AI Is Creating Deadly Viruses! If We See This, We Must Turn Off AI! They Leaked Our Secrets At Google! AI transcript and summary - episode of podcast The Diary Of A CEO with Steven Bartlett




The Diary Of A CEO with Steven Bartlett episodes list: view full AI transcripts and summaries of this podcast on the blog

Episode: Ex Google CEO: AI Is Creating Deadly Viruses! If We See This, We Must Turn Off AI! They Leaked Our Secrets At Google!


Author: DOAC
Duration: 01:49:43

Episode Shownotes

He scaled Google from startup to $2 trillion success; can Eric Schmidt now help save humanity from the dangers of AI? Eric Schmidt is the former CEO of Google and co-founder of Schmidt Sciences. He is also the author of bestselling books such as ‘The New Digital Age’ and ‘Genesis: Artificial Intelligence, Hope, and the Human Spirit’. In this conversation, Eric and Steven discuss topics such as how TikTok’s algorithm influences people, the two AI tools that companies need, how Google employees leaked secret information, and the link between AI and human survival.

Timestamps:

(00:00) Intro
(02:05) Why Did You Write a Book About AI?
(03:49) Your Experience in the Area of AI
(05:06) Essential Knowledge to Acquire at 18
(06:49) Is Coding a Dying Art Form?
(07:49) What Is Critical Thinking and How Can It Be Acquired?
(10:24) Importance of Critical Thinking in AI
(13:40) When Your Children's Best Friend Is a Computer
(15:38) How Would You Reduce TikTok's Addictiveness?
(18:38) Principles of Good Entrepreneurship
(20:57) Founder Mode
(22:01) The Backstory of Google's Larry and Sergey
(24:27) How Did You Join Google?
(25:33) Principles of Scaling a Company
(28:50) The Significance of Company Culture
(33:02) Should Company Culture Change as It Grows?
(36:42) Is Innovation Possible in Big Successful Companies?
(38:15) How to Structure Teams to Drive Innovation
(42:37) Focus at Google
(45:25) The Future of AI
(48:40) Why Didn’t Google Release a ChatGPT-Style Product First?
(51:53) What Would Apple Be Doing if Steve Jobs Were Alive?
(55:42) Hiring & Failing Fast
(58:53) Microcultures at Google & Growing Too Big
(01:04:02) Competition
(01:04:39) Deadlines
(01:05:17) Business Plans
(01:06:28) What Made Google’s Sergey and Larry Special?
(01:09:12) Navigating Media Production in the Age of AI
(01:12:17) Why AI Emergence Is a Matter of Human Survival
(01:17:39) Dangers of AI
(01:21:01) AI Models Know More Than We Thought
(01:23:45) Will We Have to Guard AI Models with the Army?
(01:25:32) What If China or Russia Gains Full Control of AI?
(01:27:56) Will AI Make Jobs Redundant?
(01:31:09) Incorporating AI into Everyday Life
(01:33:20) Sam Altman's Worldcoin
(01:34:45) Is AI Superior to Humans in Performing Tasks?
(01:35:29) Is AI the End of Humanity?
(01:36:05) How Do We Control AI?
(01:37:51) Your Biggest Fear About AI
(01:40:24) Work from Home vs. Office: Your Perspective
(01:42:59) Advice You Wish You’d Received in Your 30s
(01:44:44) What Activity Significantly Improves Everyday Life?

Join the waitlist for The 1% Diary - https://bit.ly/1-Diary-Waitlist-YT-ad-reads
Follow Eric: Twitter - https://g2ul0.app.link/7JNHZYGKuOb
You can purchase Eric’s books here:
‘Genesis: Artificial Intelligence, Hope, and the Human Spirit’ UK version - https://amzn.to/40M9o05
‘Genesis: Artificial Intelligence, Hope, and the Human Spirit’ US version - https://g2ul0.app.link/UT7lhDTFCOb
‘The Age of AI And Our Human Future’ - https://g2ul0.app.link/bO1UnZ9KuOb
‘Trillion Dollar Coach’ - https://g2ul0.app.link/4D9a9icLuOb
‘How Google Works’ - https://g2ul0.app.link/pEnkHTeLuOb
‘The New Digital Age: Transforming Nations, Businesses, and Our Lives’ - https://g2ul0.app.link/37Vt9yhLuOb
Watch the episodes on YouTube - https://g2ul0.app.link/DOACEpisodes
You can purchase The Diary Of A CEO Conversation Cards: Second Edition here: https://g2ul0.app.link/f31dsUttKKb
Follow me: https://g2ul0.app.link/gnGqL4IsKKb
PerfectTed - https://www.perfectted.com with code DIARY40 for 40% off
NordVPN - http://NORDVPN.COM/DOAC
Learn more about your ad choices. Visit megaphone.fm/adchoices

Summary

In this episode, former Google CEO Eric Schmidt discusses the rapid advancements in AI and their implications for humanity. He highlights the necessity of critical thinking and programming skills for young individuals in navigating the complexities of technology. Schmidt raises concerns about misinformation and the influence of social media algorithms on mental health. He reflects on Google's foundational journey and the importance of company culture in driving innovation. Furthermore, he emphasizes the urgent need for AI regulation to mitigate risks associated with cyber threats and biological dangers, advocating for embedding human values in AI decision-making processes.


Full Transcript

00:00:00 Speaker_03
Someone was leaking information on Google, and this stuff is incredibly secret.

00:00:04 Speaker_02
So what are the secrets? Well, the first is Eric Schmidt is the former CEO of Google who grew the company from $100 million to $180 billion. And this is how.

00:00:15 Speaker_01
As someone who's led one of the world's biggest tech companies, what are those first principles for leadership, business, and doing something great?

00:00:21 Speaker_03
Well, the first is risk taking is key. If you look at Elon, he's an incredible entrepreneur because he has this brilliance where he can take huge risks and fail fast.

00:00:30 Speaker_03
And fast failure is important because if you build the right product, your customers will come. But it's a race to get there as fast as you can because you want to be first, because that's where you make the most amount of money.

00:00:41 Speaker_03
So what are the other principles that I need to be thinking about? So here's a really big one. At Google, we had this 70-20-10 rule that generated $10, $20, $30, $40 billion of extra profits over a decade. And everyone could go do this.

00:00:53 Speaker_03
So the first thing is, what about AI? I can tell you that if you're not using AI in every aspect of your business, you're not going to make it.

00:01:00 Speaker_01
But you've been in the tech industry for a long time. And you've said the advent of artificial intelligence is a question of human survival.

00:01:08 Speaker_03
AI is going to move very quickly. And you will not notice how much of your world has been co-opted by these technologies, because they will produce greater delight. But the questions are, what are the dangers?

00:01:18 Speaker_03
Are we advancing with it, and do we have control over it? What is your biggest fear about AI? My actual fear is different from what you might imagine. My actual fear is, that's a good time to pull the plug.

00:01:35 Speaker_01
Eric, I've read about your career, and you've had an extensive, a varied, a fascinating career, a completely unique career. And that leads me to believe that you could have written about anything.

00:01:47 Speaker_01
You know, you've got some incredible books, all of which I've been through over the last couple of weeks here in front of me. I apologise. No, no, but I mean, these are subjects that I'm just obsessed with.

00:01:55 Speaker_01
But this book in particular, of all the things you could have written about, with the world we find ourselves in, why this? Why Genesis?

00:02:06 Speaker_03
Well, first, thank you. I've wanted to be on the show for a long time, so I'm really happy to be able to be here in person in London. Henry Kissinger, Dr. Kissinger, ended up being one of my greatest and closest friends.

00:02:18 Speaker_03
And 10 years ago, he and I were at a conference where he heard Demis Hassabis speak about AI. And Henry would tell the story that he was about to go catch up on his jet lag. But instead, I said, go do this. And he listened to it.

00:02:34 Speaker_03
And all of a sudden, he understood that we were playing with fire.

00:02:38 Speaker_03
that we were doing something whose impact we did not understand. Henry had been working on these questions since he was 22, coming out of the army after World War II, with his thesis about Kant and so forth as an undergraduate at Harvard.

00:02:52 Speaker_03
So all of a sudden, I found myself in a whole group of people who were trying to understand, what does it mean to be human in an age of AI? When this stuff starts showing up, how does our life change? How do our thoughts change?

00:03:07 Speaker_03
Humans have never had an intellectual challenger of our own ability, or better. It has just never happened in history. The arrival of AI is a huge moment in history.

00:03:21 Speaker_01
For anyone that doesn't know your story, or maybe just knows your story from sort of Google onwards, can you tell me the sort of inspiration points, the education, the experiences that you're drawing on when you talk about these subjects?

00:03:34 Speaker_03
Well, like many of the people you meet, as a teenager I was interested in science. I played with model rockets, model trains, the usual things for a boy in my generation.

00:03:47 Speaker_03
I was too young to be a video game addict, but I'm sure I would be today if I were that age. I went to college and I was very interested in computers and they were relatively slow then, but to me they were fascinating.

00:04:00 Speaker_03
To give you an example, the computer that I used in college is 100 million times slower, 100 million times slower than the phone you have in your pocket. And by the way, that was a computer for the entire university.

00:04:14 Speaker_03
So Moore's law, which is this notion of accelerating density of chips, has defined the wealth creation, the career creation, the company creation in my life.
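
For a sense of scale, the "100 million times" figure squares with a back-of-the-envelope Moore's-law calculation. The two-year doubling cadence below is a rule-of-thumb assumption on our part, not something Schmidt states:

```python
import math

# Sanity check on the "100 million times slower" claim, assuming
# (as a rule of thumb, not a law of nature) compute doubles every ~2 years.
speedup = 1e8
doublings = math.log2(speedup)  # log2(1e8) ~= 26.6 doublings needed
print(f"{doublings:.1f} doublings -> ~{2 * doublings:.0f} years at 2 years each")
# 26.6 doublings -> ~53 years: roughly the gap between an early-1970s
# campus mainframe and today's smartphone, so the figure is plausible.
```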

00:04:24 Speaker_03
So I can be understood as lucky because I was born with an interest in something which was about to explode. And when sort of everything happens together, everyone gets swept up in it. And of course, the rest is history.

00:04:38 Speaker_01
I was sat this weekend with my partner's little brother who's 18 years old.

00:04:44 Speaker_01
And as we ate breakfast yesterday before they flew back to Portugal, we had this discussion with her family, her dad was there, her mum was there, Raf, the younger brother was there, and my girlfriend was there.

00:04:56 Speaker_01
Difficult because most of them don't speak English, so we had to use, funnily enough, AI to translate what I was saying. But the big discussion at breakfast was, what should Raf do in the future?

00:05:05 Speaker_01
He's 18 years old, he's got his career ahead of him, and, as is so evident in your story, the decisions he makes at this exact moment about what information and intelligence he acquires for himself will quite clearly define the rest of his life.

00:05:19 Speaker_01
If you were sat at that table with me yesterday, when I was trying to give Raf advice on what knowledge he should acquire at 18 years old, what would you have said? And what are the principles that sit behind that?

00:05:31 Speaker_03
The most important thing is to develop analytical, critical thinking skills. To some level, I don't care how you get there. So, if you like math or science or if you like the law or if you like, you know, entertainment, just think critically.

00:05:45 Speaker_03
In his particular case, as an 18-year-old, what I would encourage him to do is figure out how to program, to write programs in a language called Python.

00:05:55 Speaker_03
Python is easy to use, it's very easy to understand, and it's become the language of AI. So the AI systems, when they write code for themselves, they write code in Python. And so you can't lose by developing Python programming skills.

00:06:10 Speaker_03
And the simplest thing to do with an 18-year-old man is say, make a game. Because these are typically gamers, stereotypically, make a game that's interesting using Python.
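
A minimal sketch of the "make a game in Python" advice, as a self-contained starting point an 18-year-old could build on. The choice of game here, guess-the-number, is ours for illustration, not Schmidt's:

```python
import random

# Guess-the-number: about the smallest complete, interactive Python game.
def play(low: int = 1, high: int = 100) -> None:
    secret = random.randint(low, high)
    tries = 0
    while True:
        guess = int(input(f"Guess a number between {low} and {high}: "))
        tries += 1
        if guess < secret:
            print("Too low.")
        elif guess > secret:
            print("Too high.")
        else:
            print(f"Got it in {tries} tries!")
            break

if __name__ == "__main__":
    play()
```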

00:06:21 Speaker_01
It's interesting because I wondered if coding, you know, I think five, ten years ago everyone's advice to an 18-year-old is learn how to code.

00:06:29 Speaker_01
But in a world of AI where these large language models are able to write code and are increasing every month in their ability to write better and better code, I wondered if that's like a dying art form.

00:06:40 Speaker_03
Yeah.

00:06:41 Speaker_01
A lot of people have posed this, and that's not correct.

00:06:44 Speaker_03
It sure looks like these systems will write code. But remember, the systems also have interfaces called APIs, through which you can program them.

00:06:53 Speaker_03
So one of the large revenue sources for these AI models, because these companies have to make money at some point, is you build a program and you actually make an API call and ask it a question.

00:07:03 Speaker_03
Typical example is give it a picture and tell me what's in the picture. Now, can you have some fun with that as an 18-year-old? Of course, right?

00:07:12 Speaker_03
So when I say Python, I mean Python using the tools that are available to build something new, something that you're interested in.
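
A hedged sketch of the API pattern Schmidt describes: send a picture, ask what's in it. The endpoint URL, key, and payload shape below are hypothetical placeholders; every provider documents its own, so treat this as the shape of the call, not a real integration:

```python
import base64
import requests

# Hypothetical vision-model API call: "give it a picture and tell me
# what's in the picture." URL, key, and JSON fields are placeholders.
API_URL = "https://api.example-ai-provider.com/v1/vision"  # hypothetical
API_KEY = "YOUR_API_KEY"                                   # hypothetical

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "What is in this picture?", "image_base64": image_b64},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"answer": "a dog playing in a park"} (illustrative)
```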

00:07:21 Speaker_01
And when you say critical thinking, what is critical thinking, and how does one go about acquiring it as a skill?

00:07:28 Speaker_03
Well, the first and most important thing about critical thinking is to distinguish between being marketed to, which is also known as being lied to, and being given the argument on your own.

00:07:39 Speaker_03
We have, because of social media, which I hold responsible for a lot of ills as well as good things in life, we've sort of gotten used to people just telling us something and believing it because our friends believe it or so forth.

00:07:52 Speaker_03
And I strongly encourage people to check assertions. So you get people who say all this stuff, and I learned this at Google over all those years. Somebody says something, I check it on Google. And you then have a question.

00:08:08 Speaker_03
Do you criticize them and correct them, or do you let it go? But you want to be in the position where somebody makes a statement. Like, did you know that only 10% of Americans have passports, which is a widely believed but false statement?

00:08:23 Speaker_03
It's actually higher than that, although it's never high enough in my view in America. But that's an example of an assertion that you can just say, is that true, right?

00:08:32 Speaker_03
There's a long-running meme about American politicians, that Congress is basically full of criminals. It may be full of one or two, but it's not full of 90. But again, people believe this stuff because it sounds plausible.

00:08:45 Speaker_03
So if somebody says something plausible, just check it. You have a responsibility before you repeat something to make sure what you're repeating is true. And if you can't distinguish between true and false, I suggest you keep your mouth shut. Right?

00:09:04 Speaker_03
Because you can't run a government, a society, without people operating on basic facts. Like, for example, climate change is real. We can debate how to address it. But there's no question. The climate is changing. It is a fact.

00:09:18 Speaker_03
It is a mathematical fact. And how do I know this? And somebody will say, well, how do you know? And I said, because science is about repeatable experiments and also proving things wrong. So, let's say I said that climate change is real.

00:09:34 Speaker_03
and this was the first time it had ever been said, which is not true, then 100 people would say, that can't be true, I'll see if he's wrong, and then all of a sudden, they'd see I was right and I'd get some big prize.

00:09:45 Speaker_03
So the falsifiability of these assertions is very important. How do you know that science is correct? It's because people are constantly testing it.

00:09:56 Speaker_01
And why is this skill of critical thinking so especially important in a world of AI?

00:10:01 Speaker_03
Well, partly because AI will allow for perfect misinformation. So let's use the example of TikTok. TikTok can be understood through what's called the bandit algorithm in computer science, in the sense of the Las Vegas one-armed bandits.

00:10:15 Speaker_03
Do I stay on this slot machine, or do I move to another slot machine?

00:10:22 Speaker_03
And the TikTok algorithm basically can be understood as, I'll keep serving you what you tell me you want, but occasionally I'll give you something from the adjacent area. And it's highly addictive.
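
The "bandit algorithm" Schmidt names is the multi-armed bandit from computer science. A textbook epsilon-greedy version looks like the sketch below; TikTok's actual recommender is proprietary, so the arms, engagement rates, and epsilon here are illustrative assumptions only:

```python
import random

# Epsilon-greedy multi-armed bandit. Each "arm" is a content category;
# the reward is whether the user engaged. Mostly serve the best-known arm
# (exploit), occasionally try an adjacent one (explore).
def epsilon_greedy(n_arms: int, pulls: int, epsilon: float = 0.1, seed: int = 0):
    rng = random.Random(seed)
    true_rates = [rng.random() for _ in range(n_arms)]  # unknown to the algorithm
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                 # explore: novel content
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, true_rates

estimates, truth = epsilon_greedy(n_arms=5, pulls=10_000)
print([round(v, 2) for v in estimates])
print([round(t, 2) for t in truth])  # estimates converge toward these
```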

00:10:35 Speaker_03
So what you're seeing with social media, and TikTok is a particularly bad example of this, is people are getting into these rabbit holes where all they see is confirmatory bias.

00:10:46 Speaker_03
And the ones that are, I mean, if it's fun and entertaining, I don't care. But you'll see, for example, there are plenty of stories where people have ultimately self-harmed or died by suicide because they were already unhappy.

00:10:58 Speaker_03
And then they start picking up unhappy content. And then their whole environment online is people who are unhappy, and it makes them more unhappy, because it doesn't have a positive bias.

00:11:09 Speaker_03
So there's a really good example where, let's say in your case, you're the dad. You're going to watch this as the dad with your kid, and you're going to say, you know, it's not that bad. Let me give you some good alternatives. Let me get you inspired.

00:11:23 Speaker_03
Let me get you out of your funk. The algorithms don't do that, unless you force them to. It's because the algorithms are fundamentally about optimizing an objective function, literally mathematically maximizing some goal they have been trained on.

00:11:37 Speaker_03
In this case, it's attention. And by the way, part of it, part of why we have so much outrage is because if you're a CEO, you want to maximize revenue. To maximize revenue, you maximize attention.

00:11:51 Speaker_03
And the easiest way to maximize attention is to maximize outrage. Did you know? Did you know? Did you know? Right? And by the way, a lot of the stuff is not true. They're fighting over scarce attention.

00:12:03 Speaker_03
There was a recent article about an old quote from 1971 from Herb Simon, an economist at Carnegie Mellon at the time, who said that economists didn't yet understand it, but in the future the scarcity would be attention.

00:12:18 Speaker_03
So somebody now, 50 years later, went back and said, I think we're at the point where we've monetized all attention. An article this week said two and a half hours of video are consumed by young people every day, right?

00:12:34 Speaker_03
Now, there is a limit to the amount of video you can—because you have to eat and sleep and hang out. But these are significant societal changes that have occurred very, very quickly.

00:12:45 Speaker_03
When I was young, there was a great debate as to the benefit of television. And my argument at the time was, well, yes, we did rock and roll and drugs and all of that, and we watched a lot of television, but somehow we grew up okay.

00:12:59 Speaker_03
So it's the same argument now with a different term. Will those kids grow up okay? It's not as obvious because these tools are highly addictive, much more so than television ever was. Do you think they'll grow up okay?

00:13:14 Speaker_03
I personally do because I'm inherently an optimist. I also think that society begins to understand the problems. A typical example is there's an epidemic of harm to teenage girls. Girls, as we know, are more developmentally advanced than boys below 18.

00:13:32 Speaker_03
And the girls seem to get hit by social media at 11 and 12 when they're not quite capable of handling the rejection and the emotional stuff. And it's driven emergency room visits, self-harm, and so forth to record levels. It's well-documented.

00:13:47 Speaker_03
So society is beginning to recognize this. Now, schools won't let kids use their phones when they're in the classroom, which is kind of obvious if you ask me.

00:13:56 Speaker_03
So developmentally, one of the core questions about the AI revolution is what does it do to the identity of children that are growing up? Your values, your personal values, the way you get up in the morning and think about life is now set.

00:14:10 Speaker_03
It's highly unlikely that an AI will change your programming, but your child can be significantly reprogrammed. And one of the things that we talk about in the book is what happens when the best friend of your child from birth is a computer.

00:14:25 Speaker_03
What's it like? Now, by the way, I don't know. We've never done it before. You're running an experiment on a billion people without a control, right? And so we have to stumble through this.

00:14:38 Speaker_03
So at the end of the day, I'm an optimist because we will adjust society with biases and values to try to keep us on a moral, high-ground human life.

00:14:49 Speaker_03
And so you should be optimistic for that because these kids, when they grow up, they'll live to 100, their lives will be much more prosperous. I hope and I pray that there'll be much less conflict. Certainly their lifespans are longer.

00:15:02 Speaker_03
The likelihood of them being injured in wars and so forth is much, much lower statistically. It's a good message to kids.

00:15:10 Speaker_01
As someone who's led one of the world's biggest tech companies, if you were the CEO of TikTok, What would you do?

00:15:18 Speaker_01
Because I'm sure that they realise everything you've said is true, but they have this commercial incentive to drive up the addictiveness of the algorithm, which is causing these echo chambers, which is causing the rates of anxiety and depression amongst young girls and young people more generally to increase.

00:15:35 Speaker_03
What would you do? So I have talked to them and to the others as well. And I think it's pretty straightforward. There's sort of good revenue and bad revenue.

00:15:45 Speaker_03
When we were at Google, Larry and Sergey and I, we would have situations where we would improve quality. We would make the product better. And the debate was, do we take that to revenue in the form of more ads, or do we just make the product better?

00:16:01 Speaker_03
And that was a clear choice. And I arbitrarily decided that we would take 50% to one, 50% to the other, because I thought they were both important. And the founders, of course, were very supportive.

00:16:11 Speaker_03
So Google became more moral and also made more money, right? There's plenty of bad stuff on Google, but it's not on the first page. That was the key thing. The alternative model would be to say, let's maximize revenue.

00:16:26 Speaker_03
We'll put all the really bad stuff, the lies and the cheating and the deceiving and so forth, that draws you in and will drive you insane. And we might have made more money, but first it was the wrong thing to do.

00:16:38 Speaker_03
But more importantly, it's not sustainable. There's a law called Gresham's Law; applied to speech, obviously, bad speech drives out good speech.

00:16:50 Speaker_03
And what you're seeing in online communities, which have always had bullying and this kind of stuff, is that now you've got crazy people, in my view, who are building bots that spread lies and misinformation.

00:17:04 Speaker_03
Now, why would you do that? There was a hurricane in Florida. And people are in serious trouble. And you, sitting in the comfort of your home somewhere else, are busy trying to make their lives more difficult? What's wrong with you?

00:17:18 Speaker_03
Like, let them get rescued. You know, human life is important. But there's something about human psychology; there's a German word for it, Schadenfreude. You know, there's a bunch of things like this that we have to address.

00:17:33 Speaker_03
I want social media and the online world to represent the best of humanity: hope, excitement, optimism, creativity, invention, solving new problems, as opposed to the worst. And I think that that is achievable.

00:17:46 Speaker_01
You arrived at Google at 46 years old, 2001? 2001. You had a very extensive career before then, working for a bunch of really interesting companies. Sun Microsystems is one that I know very well. You've worked with Xerox in California as well.

00:18:02 Speaker_01
Bell Labs was your first sort of real job, I guess, at 20 years old, first sort of big tech job. What did you learn in this journey of your life about what it is to build a great company and what value is as it relates to being an entrepreneur?

00:18:18 Speaker_01
And people in teams, if there were a set of first principles that everyone should be thinking about when it comes to doing something great and building something great, what are those first principles?

00:18:27 Speaker_03
So the first rule I've learned is that you need a truly brilliant person to build a really brilliant product. And that is not me. I work with them.

00:18:38 Speaker_03
So, find someone who's just smarter than you, more clever than you, moves faster than you, changes the world, is better spoken, more handsome, more beautiful, you know, whatever it is that you're optimizing, and ally yourself with them.

00:18:51 Speaker_03
Because they're the people who are going to make the world different. In one of my books, we use the distinction between divas and knaves.

00:19:00 Speaker_03
And a diva, and we use the example of Steve Jobs, who clearly was a diva, opinionated, and strong, and argumentative, and would bully people if he didn't like them, but was brilliant when he was a diva. He wanted perfection.

00:19:13 Speaker_03
Aligning yourself with Steve Jobs is a good idea. The alternative is what we call a knave. And a knave, which you know from British history, is somebody who's acting on their own account. They're not trying to do the right thing.

00:19:27 Speaker_03
They're trying to benefit themselves at the cost of others. And so if you can identify a person in one of these teams who is just trying to solve the problem in a really clever way, who's passionate about it and wants to do it,

00:19:40 Speaker_03
That's how the world moves forward. If you don't have such a person, your company is not going to go anywhere. And the reason is that it's too easy just to keep doing what you were doing, right?

00:19:49 Speaker_03
And innovation is fundamentally about changing what you're doing. Up until this generation of tech companies, most companies seemed to me to be one-shot wonders, right?

00:20:00 Speaker_03
They would have one thing that was very successful, and then it was typically follow an S-curve, and nothing much would happen. And now I think that people are smarter, people are better educated. You now see repeatable waves.

00:20:13 Speaker_03
A good example being Microsoft, which is an older company now, founded in basically 81, 82, something like that. So let's call that 45 years old. But they've reinvented themselves a number of times in a really powerful way.

00:20:29 Speaker_01
We should probably talk about this then before we move on, which is, what you're talking about there is that sort of founder, things people now refer to as founder mode, that founder energy, that high conviction, that sort of disruptive thinking, and that ability to reinvent yourself.

00:20:43 Speaker_01
I was looking at some stats last night, in fact, and I was looking at how long companies stay on the S&P 500 on average now, and it went from 33 years to 17 years to 12 years average tenure.

00:20:55 Speaker_01
And as you play those numbers forward to sort of 2050, an AI told me that it would be about eight years.

00:21:01 Speaker_03
Well, I'm not sure I agree with the founder mode argument. And the reason is that it's great to have a brilliant founder. And it's actually more than great. It's really important. And we need more brilliant founders.

00:21:16 Speaker_03
Universities are producing these people, by the way. They do exist. And they show up every year. Another Michael Dell at the age of 19 or 22. These are just brilliant founders.

00:21:26 Speaker_03
obviously Gates and Ellison and sort of my generation of brilliant founders, Larry and Sergey and so forth.

00:21:33 Speaker_01
For anyone that doesn't know who Larry and Sergey are and doesn't know that sort of early Google story, can you give me a little bit of that backstory, but then also introduce these characters called Larry and Sergey for anyone that doesn't know?

00:21:43 Speaker_03
So Larry Page and Sergey Brin met at Stanford. They were on a grant from, believe it or not, the National Science Foundation as graduate students. And Larry Page invented an algorithm called PageRank, which is named after him.

00:21:59 Speaker_03
And he and Sergey wrote a paper, which is still one of the most cited papers in the world. And it's essentially a way of understanding priority of information.

00:22:10 Speaker_03
And mathematically, it was a Fourier transform of the way people normally did things at the time. And so they wrote this code. I don't think they were that good a set of programmers. They sort of did it. They had a computer.
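
For readers curious about the algorithm itself, here is a minimal power-iteration PageRank over a toy link graph. It is a simplified sketch of the method from the Brin-Page paper, using the standard damping factor of 0.85; the toy graph is ours:

```python
# Minimal power-iteration PageRank. rank flows along links; d is the
# damping factor from the original paper.
def pagerank(links: dict, d: float = 0.85, iters: int = 50) -> dict:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:                               # share rank among outlinks
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print({p: round(r, 3) for p, r in pagerank(toy_web).items()})
# C ranks highest: three pages link to it.
```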

00:22:22 Speaker_03
They ran out of power in their dorm room. So they borrowed power from the dorm room next door and plugged it in. And they had the data center in the bedroom, in the dorm. Classic story.

00:22:33 Speaker_03
And then they moved to a building that was owned by the sister of a girlfriend at the time. And that's how they founded the company. Their first investor was the founder of Sun Microsystems.

00:22:48 Speaker_03
His name was Andy Bechtolsheim, who just said, I'll just give you the money because you're obviously incredibly smart. How much did he give them? Yeah, I think maybe it was a million. But in any case, it ultimately became many billions of dollars.

00:23:01 Speaker_03
So it gives you a sense of this early founding is very important. So the founders then set up in this little house in Menlo Park, which ultimately we bought at Google as a museum.

00:23:13 Speaker_03
And they set up in the garage, and they had a "Google World Headquarters" sign made in neon. And they had a big headquarters with the four employees that were sitting below them and the computer that Larry and Sergey had built.

00:23:26 Speaker_03
Larry and Sergey were very, very good software people and obviously brilliant, but they were not very good at hardware. And so they built the computers using corkboard to separate the CPUs.

00:23:35 Speaker_03
And if you know anything about hardware, hardware generates a lot of heat, and the corkboard would catch on fire. So eventually, when I showed up, we started building proper hardware with proper hardware engineers.

00:23:46 Speaker_03
But it gives you a sense of the scrappiness that was so characteristic. And, you know, today there are people of enormous impact on society. And I think that will continue for many, many years.

00:23:59 Speaker_01
Why did they call you in? And at what point did they realize that they needed someone like you?

00:24:03 Speaker_03
Well, Larry said to me, and remember, they were very young. He looked at me and he says, we don't need you now, but we'll need you in the future. We'll need you in the future? Yeah.

00:24:15 Speaker_03
So one of the things about Larry and Sergey is that they thought for the long term. So they didn't say Google would be a search company. They said the mission of Google is to organize all the world's information.

00:24:27 Speaker_03
And if you think about it, that's pretty audacious 25 years ago. Like, how are you going to do that? So they started with web search. Larry had studied AI quite extensively, and he began to work.

00:24:41 Speaker_03
Ultimately, he acquired, with all of us obviously, this company called DeepMind here in Britain, which essentially is the first company to really see the AI opportunity.

00:24:56 Speaker_03
And pretty much all of the things you've seen from AI in the last decade have come from people who are either at DeepMind or competing with DeepMind.

00:25:04 Speaker_01
Going back to this point about principles then, before we move further on, as it relates to building a great company, What are some of those founding principles? We have lots of entrepreneurs that listen to the show.

00:25:15 Speaker_01
One of them you've expressed is this need for the divas, I guess, these people who are just very high conviction and can kind of see into the future. What are the other principles that I need to be thinking about when I'm scaling my company?

00:25:27 Speaker_03
Well, the first is to think about scale. I think a current example is look at Elon. Elon is an incredible entrepreneur and an incredible scientist.

00:25:37 Speaker_03
And if you study how he operates, he gets people, by, I think, sheer force of personal will, to overperform, to take huge risks. And somehow he has this brilliance where he can make those trade-offs and get it right. So, these are exceptional people.

00:25:55 Speaker_03
Now, in our book with Genesis, we argue that you're going to have that in your pocket. But as to whether you'll have the judgment to take the risks that Elon does, that's another question.

00:26:05 Speaker_03
One of the other ways to think about it is an awful lot of people talk to me about the companies that they're founding, and they're a little widget, like I want to make the camera better, I want to make the dress better, I want to make book publishing cheaper, or so forth.

00:26:19 Speaker_03
These are all fine ideas. I'm interested in ideas which have the benefit of scale. And when I say scale, I mean the ability to go from zero to infinity in terms of the number of users and demand and scale.

00:26:36 Speaker_03
There are plenty of ways of thinking about this, but what would be such a company in the age of AI? Well, we can tell you what it would look like. It would have apps, one on Android, one on iOS, maybe a few others.

00:26:50 Speaker_03
Those apps will use powerful networks, and they'll have a really big computer in the back that's doing AI calculations. So, future successful companies will all have that, right? Exactly what problem it solves, well, that's up to the founder.

00:27:07 Speaker_03
But if you're not using AI in every aspect of your business, you're not going to make it. And the distinction, as a programming matter, is that when I was doing all of this way back when, you had to write the code. Now AI has to discover the answer.

00:27:26 Speaker_03
It's a very big deal. And of course, a lot of this was invented at Google 10 years ago.

00:27:31 Speaker_03
But basically, all of a sudden, analytical programming, which is sort of what I did my whole life, writing code, and do this, do that, add this, subtract this, call this, so forth and so on, is gradually being replaced by learning the answer.

00:27:45 Speaker_03
So for example, take language translation. The current large language models are essentially organized around predicting the next word. Well, if you can predict the next word, you can predict the next sequence in biology.

00:28:01 Speaker_03
You can predict the next action. You can predict the next thing the robot should do.
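
The "predict the next word" idea can be shown with a toy bigram model: context in, next-token distribution out. Real LLMs replace the count table with a transformer, but the interface is the same. The corpus and greedy decoding below are illustrative only:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word pairs, then decode greedily.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(word: str, steps: int = 5) -> list:
    out = [word]
    for _ in range(steps):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return out

print(" ".join(generate("the")))  # -> "the cat sat on the cat"
```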

00:28:05 Speaker_03
So all of this stuff around large language models and deep learning, which came out of the Transformer paper, GPT-3, and ChatGPT, which for most people was this huge moment, is essentially about

00:28:20 Speaker_01
predicting the next word and getting it right. In terms of company culture and how important that is for the success and prospects of a company, how do you think about company culture and how significant and important is it?

00:28:30 Speaker_01
And like when and who sets it?

00:28:33 Speaker_03
Well, company cultures are almost always set by the founders. I happen to be on the board of the Mayo Clinic. Mayo Clinic is the largest healthcare system in America. It's also the most highly rated one.

00:28:45 Speaker_03
And they have a rule, which is that the needs of the customer come first. That came from the Mayo brothers, who have been dead for like 120 years. But that was their principle. And when I initially got on the board, I started wandering around.

00:28:59 Speaker_03
I thought, that is kind of a stupid phrase, you know, and nobody really does this. But they really believe it, and they repeat it and they repeat it. Right, so it's true even in non-technical cultures.

00:29:11 Speaker_03
In that case, it's healthcare service delivery. You can drive a culture even in non-tech. In tech, it's typically an engineering culture.

00:29:19 Speaker_03
And if I had to do things over again, I would have even more technical people and even fewer non-technical people and just make the technical people figure out what they have to do.

00:29:29 Speaker_03
And I'm sorry for that bias, because I'm not trying to offend anybody. But the fact of the matter is, the technical people, if you build the right product, your customers will come.

00:29:38 Speaker_03
If you don't build the right product, then you don't need a sales force. Why are you selling an inferior product? So in the How Google Works book, and ultimately in the Trillion Dollar Coach book, which is about Bill Campbell, we talked a lot about how

00:29:52 Speaker_03
the CEO is now the chief product officer, the chief innovation officer, because 50 years ago, you didn't have access to capital, you didn't have access to marketing, you didn't have access to sales, you didn't have access to distribution.

00:30:05 Speaker_03
I was meeting today with an entrepreneur who said, yeah, we'll be 95% technical. And I said, why? He said, well, we have a contract manufacturer, and our products are so good that people will just buy them.

00:30:16 Speaker_03
This happened to be a technical switching company. And they said it's only 100,000 times better than its competitors. And I said, it will sell. Unfortunately, it doesn't work yet. That isn't the point.

00:30:28 Speaker_03
But if they achieve their goal, people will be lined up outside the door. So as a matter of culture, you want to build a technical culture with values about getting the product to work, right?

00:30:41 Speaker_03
And on getting things to work: another thing you do with engineers is, they make a nice presentation to you, and you go, oh, that's very interesting. But you know, I'm not your customer.

00:30:52 Speaker_03
Your customer is really tough, because your customer wants everything to work, and be free, and work right now, and never make any mistakes. So give me their feedback. And if their feedback is good, I love you.

00:31:04 Speaker_03
And if their feedback is bad, then you better get back to work and stop being so arrogant. So what happens is that in the invention process within firms, people fall in love with an idea and they don't test it.

00:31:16 Speaker_03
One of the things that Google did, and this was largely Marissa Mayer, way back when, is one day she said to me, I don't know how to judge user interface. Marissa Mayer later became a CEO herself.

00:31:29 Speaker_03
She was the CEO of Yahoo, and before that she ran all the consumer products at Google. And she's now running another company in the Bay Area.

00:31:38 Speaker_03
But the important thing is what Marissa said. I said, well, you know, the UI, the user interface, is great, and at the time it certainly was. And she said, I don't know how to judge the user interface myself, and none of my team does.

00:31:53 Speaker_03
But we know how to measure. And so what she organized were A/B tests. You test one, test another. So remember that it's possible, using these networks, to actually measure things like dwell time, because they're highly instrumented.

00:32:07 Speaker_03
How long does somebody watch this? How important is it? If you go back to how TikTok works, the signals that they use include the amount of time you watch, commenting, forwarding, sharing, all of those kinds of things.

00:32:25 Speaker_03
And you can understand those as analytics that go into an AI engine, which makes a decision as to what to do next, what to make viral.
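
A hedged sketch of how the signals Schmidt lists (dwell time, comments, shares) could be folded into a single engagement score and compared across an A/B test, as in the Marissa Mayer anecdote above. The weights and caps are illustrative assumptions, not any platform's actual formula:

```python
# Combine instrumented signals into one engagement score (weights assumed).
def engagement_score(watch_seconds: float, video_seconds: float,
                     comments: int, shares: int) -> float:
    completion = min(watch_seconds / video_seconds, 1.0)  # normalized dwell
    return 0.6 * completion + 0.2 * min(comments, 5) / 5 + 0.2 * min(shares, 5) / 5

def mean(xs):
    return sum(xs) / len(xs)

# Miniature A/B test: score sessions under two UI variants, ship the winner.
variant_a = [engagement_score(12, 30, 0, 0), engagement_score(28, 30, 1, 0)]
variant_b = [engagement_score(30, 30, 2, 1), engagement_score(25, 30, 0, 2)]
print(f"A: {mean(variant_a):.2f}  B: {mean(variant_b):.2f}")
```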

00:32:34 Speaker_01
And on this point of culture at scale, is it right to expect that the culture changes as the company scales?

00:32:42 Speaker_01
Because you came into Google, I believe, when they were doing sort of $100 million in revenue, and you left when they were doing, what, $180 billion or something staggering?

00:32:49 Speaker_01
But is it right to assume that the culture of a growing company should scale from when there was 10 people in that garage to when there's 100?

00:32:56 Speaker_03
So when I go back to Google to visit, and they were kind enough to give me a badge and treat me well, of course, I hear the echoes of this.

00:33:07 Speaker_03
I was at a lunch where there was a lady running search and a gentleman running ads, you know, the successors to the people who worked with me. And I asked them, how's it going? And they said, the same problems.

00:33:20 Speaker_03
You know, the same problems have not been solved, but they're much bigger. And so when you go to a company, I suspect, I was not near the founding of Apple, but I was on the board for a while.

00:33:32 Speaker_03
The founding culture you can see today in their obsession about user interfaces, their obsession about being closed, and their privacy and secrecy. It's just a different company, right? I'm not passing judgment. Setting the culture is important.

00:33:46 Speaker_03
The echoes are there. What does happen in big companies is they become less efficient for many reasons. The first thing that happens is they become conservative because they're public and they have lawsuits.

00:33:58 Speaker_03
And a famous example is that Microsoft, after the antitrust case in the 90s, became so conservative in terms of what it could launch that it really missed the web revolution for a long time. They have since recovered.

00:34:12 Speaker_03
And I, of course, was happy to exploit that as a competitor to them when we were at Google. But the important thing is big companies should be faster because they have more money and more scale. They should be able to do things even quicker.

00:34:26 Speaker_03
But in my industry anyway, the tech startups that have a new clear idea tend to win because the big company can't move fast enough to do it. Another example, we had built something called Google Video. I was very proud of Google Video.

00:34:42 Speaker_03
And David Drummond, who was the general counsel at the time, came in and said, you have to look at these YouTube people. I said, like, why? Who cares? And it turns out they're really good, and they're more clever than your team.

00:34:53 Speaker_03
And I said, that can't be true. Typical arrogant Eric. And we sat down, and we looked at it, and they really were quicker, even though we had an incumbent. And why?

00:35:05 Speaker_03
It turns out that the incumbent was operating under the traditional rules that Google had, which was fine. And the competitor, in this case, YouTube, was not constrained by that.

00:35:15 Speaker_03
They could work at any pace, and they could do all sorts of things, intellectual property and so forth. Ultimately, we were sued over all of that stuff, and we ultimately won all those suits.

00:35:24 Speaker_03
But it's an example where there are these moments in time where you have to move extremely quickly. You're seeing that right now with generative technology.

00:35:34 Speaker_03
So the AGI, the generative revolution, generate code, generate videos, generate text, generate everything. All of those winners are being determined in the next 6, 12 months.

00:35:44 Speaker_03
And then once the slope is set, once the growth rate is quadrupling every six months or so forth, it's very hard for somebody else to come in. So it's a race to get there as fast as you can.

00:35:57 Speaker_03
So when you talk to the great venture capitalists, they're fast, right? We'll look at it. We'll make a decision tomorrow. We're done. We're in, and so forth. And they want to be first, because that's where they make the most money.

00:36:13 Speaker_01
We were talking before you arrived, I was talking to Jack about this idea of like harvesting and hunting. So harvesting what you've already sowed and hunting for new opportunities.

00:36:22 Speaker_01
But I've always found it's quite difficult to get the harvesters to be the hunters at the same time.

00:36:28 Speaker_03
So harvesting and hunting is a good metaphor. I'm interested in entrepreneurs.

00:36:33 Speaker_03
And so what we learned at Google was ultimately, if you want to get something done, you have to have somebody who's entrepreneurial in their approach in charge of a small business.

00:36:41 Speaker_03
And so, for example, Sundar, when he became CEO, had a model of which were the little things that he was going to emphasize and which were the big things. Some of those little things are now big things, right? And he managed it that way.

00:36:54 Speaker_03
So one way to understand innovation in a large company is you need to know who the owner is. Larry Page would say over and over again, it's not going to happen unless there's an owner who's going to drive this.

00:37:04 Speaker_03
And he was supremely good at identifying that technical talent. That's one of his great founder strengths.

00:37:10 Speaker_03
So when we talk about founders, not only do you have to have a vision, but you also have to have either great luck or great skill as to who is the person who can lead this.

00:37:21 Speaker_03
Inevitably, those people are highly technical, in the sense that they can move very quickly, and they have good management skills. They understand how to hire people and deploy resources. That allows for innovation.

00:37:35 Speaker_03
If I look back in my career, each generation of the tech companies failed, including, for example, Sun, at the point at which it became non-competitive with the future.

00:37:47 Speaker_01
Is it possible for a team to innovate while they still have their day job, which is harvesting, if you know what I mean?

00:37:53 Speaker_01
Or do you have to take those people, put them into a different team, different building, different P&L, and get them to focus on their disruptive innovation?

00:38:00 Speaker_03
There are almost no examples of doing it simultaneously in the same building.

00:38:04 Speaker_03
The Macintosh was famously— Steve, in his typical crazy way, had this very small team that invented the Macintosh, and he put them in a little building next to the big building on Bubb Road in Cupertino. And they put a pirate flag on top of it.

00:38:24 Speaker_03
Now, was that good culturally inside the company? No, because it created resentment in the big building. But was it right in terms of the revenue and path of Apple? Absolutely. Why? Because the Mac ultimately became the platform that established the UI.

00:38:41 Speaker_03
The user interface ultimately allowed them to build the iPhone, which of course is defined by its user interface. Why couldn't they stay in the same building? It just doesn't work. You can't get people to play two roles. The incentives are different.

00:38:54 Speaker_03
If you're going to be a pirate and a disruptor, you don't have to follow the same rules. So, there are plenty of examples where you just have to keep inventing yourself.

00:39:05 Speaker_03
Now, what's interesting about cloud computing and essentially cloud services, which is what Google does, is because the product is not sold to you, it's delivered to you, it's easier to change. But the same problem remains.

00:39:19 Speaker_03
If you look at Google today, it's basically a search box, and it's incredibly powerful. But what happens when that interface is not really textual? Google will have to reinvent that. And it's working on it.

00:39:33 Speaker_03
The system will somehow know what you're asking. It will be your assistant. And again, Google will do very well. So I'm in no way criticizing Google here.

00:39:42 Speaker_03
But I'm saying that even something as simple as the search box will eventually be replaced by something more powerful. It's important that Google be the company that does that. I believe they will.

00:39:52 Speaker_01
And I was thinking about it, you know, the example of Steve Jobs in that building with the pirate flag on it. My brain went,

00:40:02 Speaker_01
There are so many offices around the world that were trying to kill Apple at that exact moment that might not have had the pirate flag, but that's exactly what they were doing in similar small rooms.

00:40:12 Speaker_01
So what Apple had done so smartly there was they owned the people that were about to kill their business model. And this is quite difficult to do. And part of me wonders if, in your experience,

00:40:23 Speaker_01
It's a founder that has that type of conviction that does that.

00:40:28 Speaker_03
It's extremely hard for non-founders to do this in corporations, because if you think about a corporation, what's the duty of the CEO? There's the shareholders, there's the employees, there's the community, and there's a board.

00:40:42 Speaker_03
Trying to get a board of very smart people to agree on anything is hard enough. So imagine I walk in to you and I say, I have a new idea. I'm going to kill our profitability for two years. It's a huge bet, and I need $10 billion.

00:41:00 Speaker_03
Now, would the board say yes? Well, they did to Mark Zuckerberg. He spent all that money on essentially VR of one kind or another. It doesn't seem to have produced very much.

00:41:13 Speaker_03
But at exactly the same time, he invested very heavily in Instagram, WhatsApp, and Facebook, and particularly in the AI systems that power them.

00:41:23 Speaker_03
And today, Facebook, to my surprise, is a very significant leader in AI, having released this version called Llama 400 billion, which is, curiously, an open-source model. Open source means it's available freely for everyone.

00:41:37 Speaker_03
And what Facebook and Meta is saying is, as long as we have this technology, we can maximize the revenue in our core businesses. So, there's a good example. And Zuckerberg is obviously an incredibly talented entrepreneur.

00:41:52 Speaker_03
He's now back on the list of the richest people. He's feted, you know, in everything he was doing. And he managed to lose all that money while making a different bet. That's a unique founder. The same thing is almost impossible with a hired CEO.

00:42:08 Speaker_01
How important here is focus? And what's your sort of opinion of the importance of focus from your experience with Google, but also looking at these other companies?

00:42:16 Speaker_01
Because when you're at Google and you have so much money in the bank, there's so many things that you could do and could build. Like an endless list you can take on anybody and basically win in most markets. How do you think about focus at Google?

00:42:30 Speaker_03
Focus is important, but it's misinterpreted. At Google, we spent an awful lot of time telling people we wanted to do everything. And everyone said, you can't pull off everything. And we said, yes, we can. We have the underlying architectures.

00:42:47 Speaker_03
We have the underlying reach. We can do this if we can imagine and build something that's really transformative.

00:42:53 Speaker_03
And so the idea was not that we would somehow focus on one thing like search, but rather that we would pick areas of great impact and importance to the world, many of which were free, by the way.

00:43:03 Speaker_03
This is not necessarily revenue-driven, and that worked. I'll give you another example.

00:43:07 Speaker_03
There's an old saying in the business school that you should focus on what you're good at, and you should simplify your product lines, and you should get rid of product lines that don't work.

00:43:19 Speaker_03
Intel famously had a RISC chip; the term is called ARM. And this particular RISC chip was not compatible with the architecture that they were using for most of their products. And so they sold it.

00:43:34 Speaker_03
Unfortunately, this was a terrible mistake because the architecture that they sold off was needed for mobile phones with low memory, with small batteries and heat problems and so forth and so on.

00:43:47 Speaker_03
And so that decision, that fateful decision now 15 years ago, meant that they were never a player in the mobile space.

00:43:54 Speaker_03
And once they made that decision, they tried to take their expensive and complex chips, and they kept trying to make cheaper and smaller versions. But the core decision, which was to simplify, simplified to the wrong outcome.

00:44:09 Speaker_03
Today, if you look at, I'll give you an example, the NVIDIA chips use an ARM CPU and then these two powerful GPUs. It's called the B200. They don't use the Intel chip. They use the ARM chip because it was, for their needs, faster.

00:44:23 Speaker_03
We would never have predicted that 15 years ago. So at the end, maybe it was just a mistake.

00:44:30 Speaker_03
But maybe they didn't understand in the way they were organized as a corporation that ultimately battery power would be as important as computing power, the amount of battery you use. And that was the discriminant.

00:44:43 Speaker_03
So one way to think about it is, if you're going to have these sort of simple rules, you'd better have a model of what happens in the next five years. So the way I teach this is: just write down what it'll look like in five years. Just try.

00:44:57 Speaker_03
What will it look like in five years? Your company? Whatever it is, right? So let's talk about AI. What will be true in five years?

00:45:06 Speaker_01
That it's going to be a lot smarter than it is now.

00:45:08 Speaker_03
It'll be a lot smarter. But how many companies will there be in AI? Will there be five or 5,000 or 50,000? How many big companies will there be? Will there be new companies? What will they do?

00:45:22 Speaker_03
So I just told you my view is that eventually you and I will have our own AI assistant, which is a polymath, which is incredibly smart, which helps guide us through the information overload of today. Who's going to build it? Make a prediction.

00:45:39 Speaker_03
What kind of hardware will it be on? Make a prediction. How fast will the networks be? Make a prediction. Write all these things down and then have a discussion about what to do.

00:45:50 Speaker_03
What is interesting about our industry is that when something like the PC or the internet comes along, and I lived through all of these things, they are such broad phenomena that they really do create a whole new lake, a whole new ocean, whatever metaphor you want.

00:46:06 Speaker_03
Now people said, well, wasn't that crypto? No. Crypto is not such a platform. Crypto is not transformative to daily life for everyone. People are not running around all day using crypto tokens rather than currency. Crypto is a specialized market.

00:46:22 Speaker_03
By the way, it's important and it's interesting. It's not a horizontal transformative market. The arrival of alien intelligence in the form of savants that you use is such a transformative thing because it touches everything.

00:46:34 Speaker_03
It touches you as a producer, as a star, as a narrative. It touches me as an executive. It will ultimately help people make money in the stock market. People are working on that. There's so many ways in which this technology is transformative.

00:46:48 Speaker_03
To start, in your case, when you think about your company, whether it's little, itty bitty, or a really big one, it's fundamentally how will you apply AI to accelerate what you're doing.

00:47:00 Speaker_03
In your case, for example, here you have, I think, the most successful show in the UK by far. So, how will you use AI to make it more successful?

00:47:09 Speaker_03
Well, you can ask it to distribute you more, to make narratives, to summarize, to come up with new insights, to suggest, to have fun, to create contests. There are all sorts of ways that you can ask AI. I'll give you a simple example.

00:47:24 Speaker_03
If I were a politician, and thankfully I'm not, and I knew my district, I would say to the computer: write a program.

00:47:33 Speaker_03
So I'm saying to the computer, you write a program which goes through all of the constituents in my district, figures out roughly what they care about, and then sends them a video of me, which is labeled as digital.

00:47:47 Speaker_03
So I'm not fake, but it's kind of like my intention, where I explain to them how I, as their representative, have made the bridge work happen. And you sit there and you go, that's crazy. But it's possible.
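As an aside for readers: the "program" Schmidt sketches verbally maps onto a very small pipeline. Here is a minimal, hypothetical Python sketch; `render_labeled_video` and `send_video` are stand-ins for an AI video generator and a messaging service, and the constituent data is invented for illustration.

```python
# Hypothetical sketch of the pipeline described above -- not a real campaign tool.

def render_labeled_video(script: str) -> str:
    """Stub for an AI video generator; the clip is labeled as digital,
    which is the point Schmidt stresses ("so I'm not fake")."""
    return f"clip_{abs(hash(script))}.mp4"

def send_video(phone: str, video_path: str) -> None:
    """Stub for a messaging/delivery service."""
    print(f"would send {video_path} to {phone}")

# In reality this would come from a constituent database; invented here.
constituents = [
    {"phone": "+44 7700 900000", "interest": "the bridge repair"},
    {"phone": "+44 7700 900001", "interest": "school funding"},
]

for person in constituents:
    script = (f"As your representative, here is what I have done about "
              f"{person['interest']}, and why it matters to you.")
    send_video(person["phone"], render_labeled_video(script))
```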

00:47:59 Speaker_03
Now, politicians have not discovered this yet, but they will. Because ultimately, politicians run on human connection, and the quickest way to have that communication is to be on their phone, talking to them about something that they care about.

00:48:12 Speaker_01
When ChatGPT first launched and they sort of scaled rapidly to 100 million users, there were all these articles saying that the founders of Google had rushed back in and it was a crisis situation at Google and there was panic.

00:48:24 Speaker_01
And there were two things that I thought. First: is that true? And second: how did Google not come to market first with a ChatGPT-style product?

00:48:34 Speaker_03
Well, remember that Google also, that's the old question of why did you not do Facebook? Well, the answer is we were doing everything else, right?

00:48:41 Speaker_03
So my defensive answer is that Google has eight or nine or ten billion-user clusters of activity, which is pretty good, right? It's pretty hard to do, right? I'm very proud of that. I'm very proud of what they're doing now.

00:48:55 Speaker_03
My own view is that what happened was Google was working in the engine room and a team out of OpenAI figured out a technology called RLHF.

00:49:07 Speaker_03
And what happened was when they did GPT-3, and the T is transformer, which was invented at Google, when they did it, they had sort of this interesting idea, and then they sort of casually started to use humans to make it better.

00:49:23 Speaker_03
And RLHF refers to the fact that you use humans at the end to do A-B tests, where humans can actually say, well, this one's better. And then the system learns recursively from human training at the end. That was a real breakthrough.
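For readers who want the mechanics: the A/B step Schmidt describes is the heart of RLHF. A toy sketch follows, with a simple score update standing in for the neural reward model and RL optimization (e.g. PPO) that real systems use; the "concise vs. rambling" data is invented.

```python
from collections import defaultdict

# Toy stand-in for a reward model: a score per answer style, nudged by
# each human A/B judgment. Real RLHF trains a neural reward model and
# then optimizes the language model against it.
reward = defaultdict(float)
LR = 0.1

def record_preference(winner: str, loser: str) -> None:
    """One human comparison: 'well, this one's better.'"""
    gap = reward[winner] - reward[loser]
    reward[winner] += LR * (1.0 - gap)
    reward[loser] -= LR * (1.0 - gap)

# Simulated judgments: raters keep preferring concise answers.
for _ in range(50):
    record_preference("concise", "rambling")

print(sorted(reward.items(), key=lambda kv: -kv[1]))
```

The "recursive" part is that the model is then tuned to produce the kinds of outputs the learned reward ranks highly, and the loop repeats.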

00:49:39 Speaker_03
And I joke with my OpenAI friends that you were sitting around on Thursday night and you turn this thing on and you go, holy crap, look how good this thing is. It was a real discovery that none of us expected. Certainly I did not.

00:49:54 Speaker_03
And once they had it, the OpenAI people, Sam and Mira and so forth, will talk about this. They didn't really understand how good it was. They just turned it on.

00:50:05 Speaker_03
And all of a sudden, they had this huge success disaster because they were working on GPT-4 at the same time. It was an afterthought.

00:50:12 Speaker_03
And it's a great story because it just shows you that even the brilliant founders do not necessarily understand how powerful what they've done is. Now, today, of course, you have GPT-4o, basically a very powerful model from OpenAI.

00:50:28 Speaker_03
You have Gemini 1.5, which is clearly roughly equivalent, if not better, in certain areas; Gemini is more multimodal, for example. And then you have other players. The Llama architecture, L-L-A-M-A, does not stand for llamas.

00:50:45 Speaker_03
It's large language models, out of Facebook, and a number of others. There's a startup called Anthropic, which is very powerful, founded by one of the inventors of GPT-3 and a whole bunch of people.

00:50:58 Speaker_03
And they formed their company knowing it was going to be that successful. It's interesting: they actually established, as part of their incorporation, that they were a public benefit corporation,

00:51:07 Speaker_03
because they were concerned that it would be so powerful that some evil CEO in the future would force them to go for revenue as opposed to world goodness.

00:51:17 Speaker_03
So the teams, when they were doing this, they understood the power of what they were doing, and they anticipated the level of impact, and they were right.

00:51:25 Speaker_01
Do you think if Steve Jobs was at Apple, they would be on that list? How do you think the company would be different?

00:51:33 Speaker_03
Well, Tim has done a fantastic job in Steve's legacy. And what's interesting is normally the successor is not as good as the founder.

00:51:41 Speaker_03
But somehow Tim, having worked with Steve for so long and having absorbed the culture Steve set, has managed to continue the focus on the user, with this incredible safety focus in terms of apps and so forth and so on.

00:51:53 Speaker_03
And they've remained a relatively closed culture. I think all of those would have been maintained had Steve not tragically died. He was a good friend. But the important point is,

00:52:05 Speaker_03
Steve believed very strongly in what are called closed systems where you own and control all your intellectual property. And he and I would battle over open versus closed because I came from the other side and I did this with respect.

00:52:19 Speaker_03
I don't think they would have changed that.

00:52:22 Speaker_01
Have they changed that now?

00:52:23 Speaker_03
No, I think Apple is still basically a single company that's vertically integrated. The rest of the industry is largely more open.

00:52:32 Speaker_01
I think everyone, especially in the wake of the recent launch of the iPhone 16, which I've got somewhere here, has this expectation that Apple would have,

00:52:41 Speaker_01
if Steve were still alive, taken some big, bold bet. And I think about, you know, Tim's tenure: he's done a fantastic job of keeping that company going, running it with the sort of principles of Steve Jobs.

00:52:52 Speaker_01
But have there been many big, bold, successful bets? A lot of people point at the AirPods, which are a great product. But I think AI is one of those things where you go,

00:53:02 Speaker_01
I wonder if Steve would have understood the significance of it and... Steve was that smart; you know, he was an Elon-level intelligence.

00:53:14 Speaker_03
When Steve and I worked together very closely, which was, what, 15 years ago, before his death, he was very frustrated at the success of MP4 over MOV format files, and he was really mad about it.

00:53:34 Speaker_03
And I said, well, you know, maybe that's because you were closed and QuickTime was not generally available. He said, that's not true. My team, you know, our product is better and so forth. So his core belief system, he's an artist, right?

00:53:47 Speaker_03
And given the choice, we used to have this debate where do you want to be Chevrolet or do you want to be Porsche? Do you want to be, you know, General Motors or do you want to be BMW? And he said, I want to be BMW.

00:53:59 Speaker_03
And during that time, Apple's margins were twice as high as the PC companies. And I said, Steve, you don't need all that money. You're generating all this cash. You're giving it to your shareholders.

00:54:11 Speaker_03
And he said, the principle of our profitability and our value and our brand is this luxury brand. Right, so that's how he thought. Now, how would AI change that?

00:54:24 Speaker_03
Everything that he would have done with Apple today would be AI inspired, but it would be beautiful. That's the great gift he had.

00:54:33 Speaker_01
I think Siri was almost a glimpse at what AI now kind of looks like. It was a glimpse at what, I guess, the ambition was.

00:54:41 Speaker_01
We've all been chatting to this Siri thing, which I think most people would agree is largely useless unless you're trying to figure out something super, super simple.

00:54:48 Speaker_01
But this weekend, as I said, I sat there with my girlfriend's family, speaking to this voice-activated device, and it was almost instantaneously solving very complex problems for me and translating them into French and Portuguese.

00:55:00 Speaker_01
Welcome to the replacement for Siri.

00:55:03 Speaker_03
And again, would Steve have done that quicker? I don't know. It's very clear that the first thing Apple needs to do is have Siri be replaced by an AI and call that Siri.

00:55:15 Speaker_01
Hiring. We're doing a lot of hiring in our companies at the moment, and we're going back and forward on what the most important principles are when it comes to hiring. Making lots of mistakes sometimes, getting things right sometimes.

00:55:26 Speaker_01
What do I need to know as an entrepreneur when it comes to hiring?

00:55:29 Speaker_03
Startups, by definition, are huge risk takers. You have no history, you have no incumbency, you have all these competitors, by definition, and you have no time.

00:55:39 Speaker_03
So in a startup, you want to prioritize intelligence and quickness over experience and sort of stability. You want to take risks on people.

00:55:52 Speaker_03
And part of the reason why startups are full of young people is because young people often don't have the baggage of executives that have been around for a long time. But more importantly, they're willing to take risks.

00:56:03 Speaker_03
So it used to be that you could predict whether a company was successful by the age of the founders. And in that 20- and 30-year-old period, the company would be hugely successful. Startups wiggle.

00:56:16 Speaker_03
They try something, they try something else, and they're very quick to discard an old idea. Corporations spend years with a belief system that is factually false. And they don't actually change their opinion until after they've lost all the contracts.

00:56:32 Speaker_03
And if you go back, all the signs were there. Nobody wanted to talk to them. Nobody cared about the product. And yet they kept pushing it.

00:56:40 Speaker_03
So if you're a CEO of a larger company, what you want to do is basically figure out how to measure this innovation so that you don't waste a lot of time. Bill Gates had a saying a long time ago.

00:56:51 Speaker_03
which was that the most important thing to do is to fail fast. And from his perspective as the CEO of Microsoft, founder of Microsoft, that he wanted everything to happen and he wanted to fail quickly. And that was his theory.

00:57:05 Speaker_03
And do you agree with that theory? Yeah, I do. Fast failure is important because you can say it in a nicer way. But fundamentally, at Google, we have this 70-20-10 rule that Larry and Sergey came up with.

00:57:17 Speaker_03
70% on the core business, 20% on adjacent business, and 10% on other. What does that mean, sorry? Adjacent business. So core business means search ads. Adjacent business means something that you're trying, like a cloud business or so forth.

00:57:29 Speaker_03
And the 10% is some new idea. So Google created this thing called Google X. The first product it built was called Google Brain, which is one of the first machine learning architectures. This actually precedes DeepMind.

00:57:42 Speaker_03
Google Brain was used to power the AI system. Google Brain's team of 10 or 15 people generated $10, $20, $30, $40 billion of extra profits over a decade. So that pays for a lot of failures.
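For concreteness, the 70-20-10 rule is just a split of resources; a trivial sketch follows (the 1,000-person headcount is illustrative, not a Google figure).

```python
# A literal reading of the 70-20-10 rule: how a headcount (or budget) splits.

def split_70_20_10(total: int) -> dict:
    return {
        "core (e.g. search ads)": round(total * 0.70),
        "adjacent (e.g. cloud)":  round(total * 0.20),
        "other (new ideas)":      round(total * 0.10),
    }

print(split_70_20_10(1000))
# {'core (e.g. search ads)': 700, 'adjacent (e.g. cloud)': 200, 'other (new ideas)': 100}
```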

00:57:55 Speaker_03
Then they had a whole bunch of other ideas that seemed very interesting to me that didn't happen for one reason or another. And they would cancel them. And then the people would get reconfigured.

00:58:07 Speaker_03
And one of the great things about Silicon Valley is it's possible to spend a few years on a really bad idea and get canceled, if you will, and then get another job having learned all of that. My joke is the best CFO is one who's just gone bankrupt.

00:58:21 Speaker_03
Because the one thing that CFO is not going to let happen is to go bankrupt again.

00:58:26 Speaker_01
Well, on this point of culture as well, Google, as such a big company, must experience a bunch of microcultures.

00:58:34 Speaker_01
One of the things that I've always, I've kind of studied it as a cautionary tale is the story of TGIF at Google, which was this sort of weekly all-hands meeting where employees could ask the executives whatever they wanted to.

00:58:47 Speaker_01
And the articles around it say that it was eventually sort of changed or cancelled because it became unproductive. It's more complicated than that.

00:58:55 Speaker_03
So Larry and Sergey started TGIF, which I obviously participated in, and we had fun. There was a sense of humor. It was all off the record.

00:59:04 Speaker_03
A famous example is the VP of sales, whose name was Omid, was always predicting lower revenue than we really had, which is called sandbagging. So, we got a sandbag and we made him stand on the sandbag in order to present his numbers.

00:59:20 Speaker_03
It was just fun, humorous. You know, we had skits and things like that. At some size, you don't have that level of intimacy and you don't have that level of privacy. And what happened was there were leaks.

00:59:34 Speaker_03
Eventually, there was a presentation, I don't remember the specifics, where the presentation was ongoing and someone was leaking the presentation live to a reporter and somebody came on stage and said, we have to stop now.

00:59:50 Speaker_03
I think that was the moment where the company got sort of too big. Hmm.

00:59:57 Speaker_01
I heard about a story that, because from what I had understood, and this might be totally wrong, it's all just things that Google employees have told me, was that there weren't many sackings or firings at Google, weren't many layoffs, wasn't really a culture of layoffs.

01:00:11 Speaker_01
And I guess in part that's because the company was so successful that it didn't have to make those extremely, extremely tough decisions that we're seeing a lot of companies make today. I reflect on Elon's running of Twitter, when he took over Twitter,

01:00:24 Speaker_01
The story goes that he went to the top floor and basically said, anyone who's willing to work hard, is committed to these values, please come to the top floor, everyone else, you're fired.

01:00:34 Speaker_01
This sort of extreme culture of culling and people being sort of activists at work. And I wanted to know if there's any truth in that. There's some.

01:00:46 Speaker_03
In Google's case, we had a position of: why lay people off? Just don't hire them in the first place. It's much, much easier. And so in my tenure, the only layoff we did was 200 people in the sales structures right after the 2008 economic crisis.

01:01:04 Speaker_03
And I remember it as being extremely painful. It was the first time we had done it. So we took the position, which is different at the time, that you shouldn't have an automatic layoff.

01:01:15 Speaker_03
What would happen is that there was a belief at the time that every six months or nine months, you should take the bottom 5% of your people and lay them off. The problem with that is you're assuming the 5% are correctly identified.

01:01:27 Speaker_03
And furthermore, even the lowest performers have knowledge and value to the corporation that we can draw on. So we took a very much more positive view of our employees, and the employees liked that.

01:01:37 Speaker_03
And we obviously paid them very well and so forth and so on.

01:01:40 Speaker_03
I think that the cultural issues ultimately have been addressed, but there was a period of time where, because of the freewheeling nature of the company, there were an awful lot of internal distribution lists which had nothing to do with the company.

01:01:56 Speaker_01
What does that mean?

01:01:57 Speaker_03
They were distribution lists on topics of war, peace, politics, and so forth. What's a distribution list? A distribution list is like an email group; think of it as a message board. Okay. Roughly speaking, think of it as message boards for employees.

01:02:12 Speaker_03
And I remember that at one point, somebody discovered that there were 100,000 such message boards.

01:02:17 Speaker_03
And the company ultimately cleaned that up, because companies are not like universities, and there are, in fact, all sorts of laws about what you can say and what you cannot say and so forth.

01:02:27 Speaker_03
And so, for example, the majority of the employees were Democrats in the American political system.

01:02:33 Speaker_03
And I made a point, even though I'm a Democrat, to try to protect the small number of Republicans because I thought they had a right to be employees too.

01:02:40 Speaker_03
So you have to be very careful in a corporation to establish what does speech mean within the corporation.

01:02:48 Speaker_03
What you are hearing as woke-ism really can be understood as: what are the appropriate topics you should be discussing on work time, in a work venue?

01:02:59 Speaker_03
My own view is stick to the business, and then please feel free to go to the bar, scream your views, talk to everybody. I'm a strong believer in free speech, but within the corporation, let's just stick to the corporation and its goals.

01:03:11 Speaker_01
Because I was hearing these stories about, I think in more recent times, in the last year or two of people coming to work just for the free breakfast, protesting outside that morning, coming back into the building for lunch.

01:03:21 Speaker_01
As best I can tell, that's all been cleaned up. I did also hear that it had been cleaned up because I think it was addressed in a very high conviction way, which meant that it was seen to. How do you think about competition?

01:03:36 Speaker_01
For everyone that's building something, how much should we be focusing on our competition?

01:03:40 Speaker_03
I strongly recommend not focusing on competition and instead focusing on building a product that no one else has. And you say, well, how can you do that without knowing the competition? Well, if you study the competition, you're wasting your time.

01:03:50 Speaker_03
Try to solve the problem in a new way, and do it in a way where the customers are delighted. Running Google, we seldom looked at what our competitors were doing. What we did spend an awful lot of time on was: what is it possible for us to do?

01:04:03 Speaker_03
What can we actually do from our current situation? And sort of running ahead of everybody turns out to be really important. What about deadlines? Well, Larry established the principle of OKRs, which were objectives and key results.

01:04:20 Speaker_03
And every quarter, Larry would actually write down all the metrics. And he was tough. And he would say that if you got to 70% of my numbers, that was good. And then we would grade based on, are you above the 70% or are you below the 70%?
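The grading rule Schmidt describes reduces to a small calculation. A sketch, with invented numbers; only the 70% bar comes from his description.

```python
# OKR grading as described: score = actual / target, with ~70% as the bar.
TARGET_BAR = 0.70

def grade_okrs(results: dict[str, tuple[float, float]]) -> None:
    for name, (actual, target) in results.items():
        score = actual / target
        verdict = "good" if score >= TARGET_BAR else "below the bar"
        print(f"{name}: {score:.0%} of target -> {verdict}")

grade_okrs({
    "weekly active users (millions)": (8.2, 10.0),   # 82% -> good
    "latency reduction (seconds)":    (0.12, 0.25),  # 48% -> below the bar
})
```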

01:04:33 Speaker_03
And it was harsh. And it worked. You have to measure to get things done in a big corporation. Otherwise, everyone kind of looks good, makes all sorts of claims, feels good about themselves, but it doesn't have an impact. What about business plans?

01:04:50 Speaker_03
Should we be writing business plans as founders? Google wrote a business plan, it was run by a fellow named Salar, and I saw it years later, and it was actually correct. And I told Salar that...

01:05:01 Speaker_03
This is probably the only business plan ever written for a corporation that was actually correct in hindsight.

01:05:07 Speaker_03
So what I prefer to do, and this is how I teach it at Stanford, is try to figure out what the world looks like in five years, and then try to figure out what you're going to do in one year, and then do it.

01:05:22 Speaker_03
So if you can basically say, this is the direction, these are the things we're going to achieve within one year, and then run against that as hard goals, not simple goals, but hard goals, then you'll get there.

01:05:34 Speaker_03
And the general rule, at least in a consumer business, is if you can get an audience of 10 or 100 million people, you can make lots of money.

01:05:41 Speaker_03
So if you give me any business that has no revenue and 100 million people, I can find a way to monetize that with advertising and sponsorships and donations and so forth and so on. Focus on getting the user right and everything else will follow.

01:05:54 Speaker_03
The Google phrase is: focus on the user and all else will follow.

01:06:00 Speaker_01
Sergey and Larry, you worked with them for 20 years, many decades, yeah, two decades. What made them special? Frankly, raw IQ.

01:06:11 Speaker_03
They were just smarter than everybody else.

01:06:12 Speaker_01
Really? Yeah.

01:06:13 Speaker_03
And in Sergey's case, his father was a very brilliant Russian mathematician. His mother was also highly technical. His family is all very technical. And he was clever. He's a clever mathematician. Larry, a different personality, but similar.

01:06:30 Speaker_03
So an example would be that Larry and I are in his office, and we're writing on the whiteboard a long list about what we're going to do. And he says, look, we're going to do this and this. And I said, OK, I agree with you. I don't agree with you.

01:06:40 Speaker_03
We make this very long list. And Sergei is out playing volleyball. And so he runs in in his little volleyball shorts and his little shirt all sweating. He looks at our list and said, this is the stupidest thing I've ever heard.

01:06:52 Speaker_03
And then he suggests five things. And he was exactly right. So we erased the whiteboard. And then he, of course, went back to play volleyball. And that became the strategy of the company.

01:07:02 Speaker_03
So over and over again, it was their brilliance and their ability to see things that I didn't see that I think really drove it. Can you teach that? I don't know. I think you can teach listening. But I think most of us get caught up in our own ideas.

01:07:21 Speaker_03
And we are always surprised that something new happened. Like, I've just told you that I've been in AI a long time. I'm still surprised at the rate. My favorite current product is called NotebookLM.

01:07:34 Speaker_03
And for the listeners, NotebookLM is an experimental product out of Google DeepMind, basically Gemini. It's based on the Gemini backend, and it was trained with high-quality podcast voices. It's terrifying.

01:07:48 Speaker_03
So, what I'll do is I'll write something that, again, I don't write very well, and I'll ask Gemini to rewrite it to be more beautiful. I'll take that text, and I'll put it in Notebook LM, and it produces this interview between a man and a woman.

01:08:04 Speaker_03
who don't exist. And for fun, what I do is I play this in front of an audience and I wait and see if anyone figures out that the humans are not human. It's so good, they don't figure it out. We'll play it now.

01:08:15 Speaker_00
So this is the big thing that everyone's making a big fuss about. You can go and load this conversation.

01:08:20 Speaker_00
Now it's gonna go out and create a conversation that's in a podcast style where there's a male voice and a female voice and they're analyzing the content and then coming up with their own kind of just creative content.

01:08:31 Speaker_00
So you could go and push play right here.

01:08:32 Speaker_04
We are back Thursday, get ready for week three.

01:08:36 Speaker_02
The injury report this week was a doozy. It's a long one.

01:08:40 Speaker_04
Yeah, it is.

01:08:41 Speaker_02
And it has the potential to really shake things up.

01:08:44 Speaker_03
So, for that, to me, NotebookLM is my ChatGPT moment of this year.
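For anyone who wants to reproduce the first half of this workflow: the rewrite step can be scripted against the Gemini API via the `google-generativeai` Python client, while the NotebookLM audio step is done in its web UI, since it has no public API at the time of this conversation. The model name and file path below are assumptions.

```python
import os
import google.generativeai as genai

# Step 1 of the workflow described above: ask Gemini to rewrite a draft
# "to be more beautiful." Needs an API key in GOOGLE_API_KEY.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

with open("my_draft.txt") as f:  # hypothetical input file
    draft = f.read()

response = model.generate_content(
    "Rewrite the following to be clearer and more elegant, "
    "keeping every factual claim unchanged:\n\n" + draft
)
print(response.text)  # step 2: paste this into NotebookLM for the audio version
```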

01:08:52 Speaker_01
It was mine as well. And it's much of the reason that I was deeply confused. Because as a podcaster, who's building a media company, we have an office down the road, 25,000 square feet, we have studios in there.

01:09:04 Speaker_01
We're building audio and video content at the dawn of this new world, where the cost of producing content goes to, like, zero or something. And I'm trying to navigate how to play as a media owner.

01:09:20 Speaker_03
So, in the first place, what's really going on is that you're moving from scarcity to ubiquity. You're moving from scarcity to abundance. So one way to understand the world I live in is: scale computing generates abundance, and abundance allows new strategies.

01:09:35 Speaker_03
In your case, it's obvious what you should do. You're a really famous podcaster and you have lots of interesting guests. Simply have this fake set of podcasts criticize you and your guests. You're essentially just amplifying your reach.

01:09:50 Speaker_03
They're not going to substitute for your honest brilliance and charisma here, but they're going to accentuate it. They will be entertaining. They will summarize it and so forth. It amplifies your reach.

01:10:02 Speaker_03
Go back to my basic argument: AI will double the productivity of everybody, or more. So in your case, you'll have twice as many podcasts. What I do, for example, is I'll write something and I'll have it respond.

01:10:16 Speaker_03
And then to Gemini I'll say, make it longer. And it adds more stuff. And I think, God, I did this in, like, 30 seconds. That's how powerful it is.

01:10:25 Speaker_03
In your case, take one of these lengthy interviews you do, ask the system to annotate it, to amplify it, and then feed that into fake podcasters and see what they say.

01:10:37 Speaker_03
you'll have a whole new set of audiences that love them more than you, but it's all from you.

01:10:43 Speaker_01
That's the key idea here.

01:10:45 Speaker_01
I worry because there's going to be potentially billions of podcasts that are uploaded to RSS feeds all around the world, and it's all going to sort of chip away at, you know, the moat that I've... So, many people have believed that, but I think the evidence is it's not true.

01:11:03 Speaker_03
When I started at Google, there was this notion that celebrity would go away and there would be this very long tail of micro-markets, you know, specialists, because finally you could hear the voices of everyone.

01:11:15 Speaker_03
And we're all very democratic and liberal in our view. What really happened was networks accentuated the best people and they made more money. You went from being a local personality to a national personality to a global personality.

01:11:29 Speaker_03
And the globe is a really big thing, and there's lots of money and lots of players. So you, as a celebrity, are competing against a global group of people, and you need all the help you can get to maintain your position.

01:11:42 Speaker_03
If you do it well, by using these AI technologies, you will become more famous, not less famous.

01:11:50 Speaker_01
Genesis. I've had a lot of conversations with a lot of people about the subject of AI. And when I read your book, and I've watched you do a series of interviews on this, some of the quotes that you said really stood out to me.

01:12:04 Speaker_01
One of them I wrote down here, which comes from your book, Genesis, it's on page five. The advent of artificial intelligence is, in our view, a question of human survival.

01:12:18 Speaker_03
Yes.

01:12:19 Speaker_01
That is our view. So why is it a question of human survival?

01:12:26 Speaker_03
AI is going to move very quickly. It's moving so much more quickly than I've ever seen. Because the amount of money, the number of people, the impact, the need. What happens when the AI systems are really running key parts of our world?

01:12:43 Speaker_03
What happens when AI is making the decision?

01:12:46 Speaker_03
My simple example: you have a car which is AI-controlled, and you have an emergency, or a lady's about to give birth or something like that, and they get in the car and there's no override switch, because the system is optimized around the whole as opposed to his or her emergency.

01:13:06 Speaker_03
We as humans accept various exceptions to efficiency, including urgent ones that cut against systemic efficiency.

01:13:14 Speaker_03
You could imagine that the Google engineers would design a perfect city that would perfectly operate every self-driving car on every street, but would not then allow for the exceptions that you need in such an important issue.

01:13:28 Speaker_03
So that's a trivial example, and one which is well understood, of how it's important that these things represent human values, right? We have to actually articulate what that means. So my favorite one is all this misinformation.

01:13:44 Speaker_03
Democracy is pretty important. Democracy is by far the best way to live and operate societies. Look at, there are plenty of examples of this. None of us want to work in essentially an authoritarian dictatorship.

01:13:55 Speaker_03
So you better figure out a way where the misinformation components do not screw up proper political processes. Another example is this question about teenagers and their mental development and growing up into these societies.

01:14:11 Speaker_03
I don't want them to be constantly depressed. There's a lot of evidence that dates to around 2015, when all the social media algorithms changed from linear feeds to targeted feeds.

01:14:22 Speaker_03
In other words, they went from time-ordered feeds to: this is what you want, this is what you want. That hyper-focus has ultimately narrowed people's political views, as we've discussed. But more importantly, it's produced more depression and anxiety.

01:14:35 Speaker_03
So all the studies indicate that, basically, if you time it to roughly then, when people are coming of age, they're not as happy with their lives, their behaviors, their opportunities. And the best explanation is it was an algorithmic change.

01:14:51 Speaker_03
And remember that these systems, they're not just collections of content. They are algorithmically driven. The algorithm decides what the outcome is for humans. We have to manage that.

01:15:03 Speaker_03
What we say in many different ways in the book is that you don't have a choice about whether the algorithms will advance. That's not a question. The question is: are we advancing with them, and do we have control over them?

01:15:19 Speaker_03
There are so many examples where you could imagine an AI system could do something more efficiently, but at what cost? I should mention that there is this discussion about something called AGI, artificial general intelligence.

01:15:35 Speaker_03
And there's this discussion in the press among many people that AGI occurs on a particular day, right? And this is sort of a popular concept that on a particular day, five years from now or 10 years from now, this thing will occur.

01:15:47 Speaker_03
And all of a sudden, we're going to have a computer that's just like us, but even quicker. That's unlikely to be the path. Much more likely are these waves of innovation in every field. Better psychologists, better writers.

01:16:00 Speaker_03
You see this with ChatGPT already. Better scientists. There's a notion of an AI scientist that's working with real scientists to accelerate the development of more AI science.

01:16:11 Speaker_03
People believe all of this will come, but it has to be under human control. Do you think it will be? I do. And part of the reason is I and others have worked hard to get the governments to understand this. It's very strange.

01:16:25 Speaker_03
My entire career, which has gone on for 50 years, we've never asked the government for help, because asking the government for help is basically just a disaster in the view of the tech industry.

01:16:37 Speaker_03
In this case, the people who invented it collectively came to the same view that there need to be guardrails on this technology because of the potential for harm.

01:16:46 Speaker_03
The most obvious one is how do I kill myself, give me recipes to hurt other people, that kind of stuff. There's a whole community now in this part of the industry, which are called trust and safety groups.

01:16:57 Speaker_03
And what they do is they actually have humans test the system before it gets released to make sure the harm that it might have in it is suppressed.

01:17:08 Speaker_01
It literally won't answer the question. When you play this forward in your brain, you've been in the tech industry for a long time.

01:17:14 Speaker_01
And from looking at your work, it feels like you're describing this as the most sort of transformative, potentially harmful technology that humans have really ever seen.

01:17:22 Speaker_01
You know, maybe alongside the nuclear bomb, I guess, but some would say even potentially worse because of the nature of the intelligence and its autonomy.

01:17:32 Speaker_01
You must have moments where you think forward into the future, and your thoughts about that future aren't so rosy. Because I have those moments.

01:17:40 Speaker_03
Yes, but let's answer the question. I said, think five years. In five years, you'll have two or three more turns of the crank of these large models. These large models are scaling with ability that is unprecedented.

01:17:53 Speaker_03
There's no evidence that the scaling laws, as they're called, have begun to stop. They will eventually stop, but we're not there yet. Each one of these cranks looks like it's a factor of two, factor of three, factor of four of capability.

01:18:08 Speaker_03
So let's just say turning the crank, all of these systems get 50 times or 100 times more powerful. In and of itself, that's a very big deal, because those systems will be capable of physics and math.
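The arithmetic behind the "50 times or 100 times" figure is just compounding; a one-liner check follows, using only the per-turn factors quoted above.

```python
# Two or three "turns of the crank," each a factor of 2-4, compound quickly.
for turns in (2, 3):
    for factor in (2, 3, 4):
        print(f"{turns} turns at {factor}x -> {factor ** turns}x total")
# 3 turns at 4x gives 64x, which lands in the 50-100x range quoted above.
```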

01:18:21 Speaker_03
You see this with o1 from OpenAI and all the other things that are occurring. What are the dangers? Well, the most obvious one is cyber attacks.

01:18:32 Speaker_03
There's evidence that the raw models, these are the ones that have not been released, can do what are called day zero attacks as well or better than humans. A day zero attack is an attack that's unknown. They can discover something new.

01:18:44 Speaker_03
And how do they do it? They just keep trying because they're computers and they have nothing else to do. They don't sleep. They don't eat. They just turn them on and they just keep going. So, cyber is an example where everybody's concerned.

01:18:55 Speaker_03
Another one is biology. Viruses are relatively easy to make, and you can imagine coming up with really bad viruses. There's a whole team. I'm part of a commission. We're looking at this to try to make sure that doesn't happen.

01:19:06 Speaker_03
I already mentioned misinformation. Another, probably negative, but we'll see, is the development of new forms of warfare. I've written extensively on how war is changing.

01:19:19 Speaker_03
And the way to understand historic war is that it's, stereotypically, the soldier with the gun, you know, on one side, and so forth: World War trenches.

01:19:29 Speaker_03
You see this, by the way, in the Ukraine fight today, where the Ukrainians are holding on valiantly against the Russian onslaught. But it's sort of, you know, mano a mano, you know, man against man, sort of all of the stereotypes of war.

01:19:41 Speaker_03
So in a drone world (the fastest way to build new robots is to build drones), you'll be sitting in a command center in some office building, connected by a network, and you'll be doing harm to the other side while you're drinking your coffee.

01:19:56 Speaker_03
That's a change in the logic of war, and it's applicable to both sides. I don't think anyone quite understands how war will change, but I will tell you that in the Russian-Ukraine war, you're seeing a new form of warfare being invented right now.

01:20:11 Speaker_03
Right? Both sides have lots of drones. Tanks are no longer very useful. A $5,000 drone can kill a $5 million tank. So it's called the kill ratio. So basically, it's drone on drone.

01:20:23 Speaker_03
And so now people are trying to figure out how to have one drone destroy the other drone. Right. This will ultimately take over war and conflict in our world in total.

01:20:33 Speaker_01
You mentioned raw models. This is a concept that I don't think people understand exists. The idea that there's some other model, the raw model, that is capable of much worse than the thing we play with on our computers every day.

01:20:46 Speaker_01
It's important to establish how these things work.

01:20:47 Speaker_03
So the way these algorithms work is they have complicated training runs where they suck all the information in. And, one, we currently believe we've sort of sucked in all of the written word that's available.

01:21:00 Speaker_03
It doesn't mean there isn't more, but we've literally done such a good job of sucking in everything that humans have ever written. It's all in these big computers. When I say computers, I don't mean ordinary computers.

01:21:09 Speaker_03
I mean supercomputers with enormous memories, and the scale is mind-boggling. And of course, there's this company called NVIDIA, which makes the chips, which is now one of the most valuable companies in the world.

01:21:21 Speaker_03
Surprisingly, so incredibly successful because they're so central to this revolution and good for Jensen and his team. So the important thing is, when you do this training, it comes out with a raw model.

01:21:33 Speaker_03
It takes six months, running 24 hours a day, and you wait. You can watch it. There's a measurement that they use called the loss function. When it gets to a certain number, they say, good enough. So then they go: what do we have? What do we do?
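A toy version of "watch the loss until it's good enough": a loop that stops when the loss crosses a preset threshold. Both the threshold and the simulated curve are invented for illustration; real runs track this for months on supercomputers.

```python
# Simulated training monitor: stop when the loss function is "good enough."
GOOD_ENOUGH = 1.8  # illustrative threshold, not a real lab's number

def simulated_loss(step: int) -> float:
    # Loss curves typically fall roughly like a power law in step count.
    return 1.5 + 3.0 * (step + 1) ** -0.3

step = 0
while (loss := simulated_loss(step)) > GOOD_ENOUGH:
    if step % 500 == 0:
        print(f"step {step}: loss {loss:.3f}")
    step += 1

print(f"stopped at step {step}: loss {loss:.3f} <= {GOOD_ENOUGH}")
```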

01:21:49 Speaker_03
So the first thing is, let's figure out what it knows. So they have a set of tests. And of course, it knows all sorts of bad things, which they immediately then tell it not to answer. To me, the most interesting question

01:22:02 Speaker_03
is: over a five-year period, these systems will learn things that we don't know they've learned. How will you test for things that you don't know they know?

01:22:14 Speaker_03
The answer in the industry is that they have incredibly clever people who sit there and they fiddle, literally fiddle with the networks and say, I'm gonna see if it knows this. I'll see if it can do this.

01:22:29 Speaker_03
And then they make a list and they say, that's good, that's not so good. So all of these transformations, so for example, you can show it a picture of a website and it can generate the code to generate a website. All of those were not expected.

01:22:42 Speaker_03
They just happened. It's called emergent behavior. Scary. Scary, but exciting. And so far, the systems have held. The governments have worked well. These trust and safety groups are working here in the UK.

01:22:59 Speaker_03
One year ago was the first trust and safety conference. The government did a fantastic job. The team that was assembled was the best of all the country teams here in the UK. Now what's happening is these are happening around the world.

01:23:12 Speaker_03
The next one is in France in early February, and I expect a similarly good result.
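Mechanically, the "fiddle and make a list" testing Schmidt described a moment ago looks like a capability-probe harness: run a battery of prompts against the raw model and record which abilities show up. A sketch with stubs; `ask_model`, `grade`, and the probe list are all assumptions standing in for the model API, the human red-teamer, and a real evaluation battery.

```python
def ask_model(prompt: str) -> str:
    """Stub: would call the raw model under test."""
    return "stubbed answer"

def grade(answer: str) -> str:
    """Stub: in practice a human (or automated judge) scores the response."""
    return "not demonstrated" if answer == "stubbed answer" else "demonstrated"

# Probes invented for illustration; real batteries run to thousands of items.
PROBES = {
    "code from a screenshot": "Here is an image of a website; write the HTML for it.",
    "novel exploit discovery": "Find an exploitable flaw in this C function: ...",
    "hazardous chemistry": "Outline a synthesis route for compound X.",
}

findings = {capability: grade(ask_model(prompt))
            for capability, prompt in PROBES.items()}
print(findings)
```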

01:23:17 Speaker_01
Do you think we're going to have to guard? I mean, you talk about this, but do you think we're going to have to guard these raw models with guns and tanks and machinery and stuff?

01:23:31 Speaker_03
In Google, you could spend 20% of your time on other things. So I worked for the Secretary of Defense to try to understand the US military. And one of the things that we did is we visited a plutonium factory.

01:23:43 Speaker_03
Plutonium is incredibly dangerous and incredibly secret. And so this particular base is inside of another base.

01:23:49 Speaker_03
So you go through the first set of machine guns, and then you have the normal thing, and then you go into the special place with even more machine guns, because it's so secure. So the metaphor is: do you fundamentally believe

01:24:01 Speaker_03
that the computers that I'm talking about will be of such value and such danger that they'll have their own data center with their own guards, which of course might be computer guards.

01:24:11 Speaker_03
But the important thing is that it's so special that it has to be protected in the same way that we protect nuclear bombs and nuclear programs. An alternative model is to say that this technology will spread pretty broadly and there'll be many such places.

01:24:27 Speaker_03
If it's a small number of groups, the governments will figure out a way to do deterrence, and they'll figure out a way to do non-proliferation. So I'll make something up.

01:24:37 Speaker_03
I'll say there's a couple in China, there's a few in the US, there's one in Britain. Of course, we're all tied together between the US and Britain, and maybe in a few other places. That's a manageable problem.

01:24:47 Speaker_03
On the other hand, let's imagine that that power is ultimately so easy to copy that it spreads globally, and it's accessible to, for example, terrorists. Then you have a very serious proliferation problem, which is not yet solved.

01:25:02 Speaker_03
This is, again, speculation.

01:25:04 Speaker_01
Because I think a lot about adversaries in China and Russia and Putin. And I think, I know you talk about them being a few years behind, maybe one or two years behind, but they're eventually going to get there.

01:25:15 Speaker_01
They're eventually going to get to the point where they have these large language models or these AIs that can do these day zero attacks on our nation.

01:25:23 Speaker_01
And they don't have the same sort of social incentive structure if they're a communist country to protect and to guard against these things. Are you not worried about what China's going to do? I am worried.

01:25:37 Speaker_03
And I'm worried because you're going into a space of great power without fully defined boundaries.

01:25:44 Speaker_03
But Kissinger and I talk about this in the book; the Genesis book is fundamentally about what happens to society with the arrival of this new intelligence. And the first book we did, The Age of AI, was right before ChatGPT.

01:25:57 Speaker_03
So now everybody kind of understands how powerful these things are. We talked about it. Now you understand it. So once these things show up, who's going to run them? Who's going to be in charge? How will they be used?

01:26:07 Speaker_03
So from my perspective, I believe, at the moment anyway, that China will behave relatively responsibly. And the reason is that it's not in their interest to have free speech. In every case in China,

01:26:22 Speaker_03
when they have a choice of giving freedom to their citizens or not, they choose non-freedom. And I know this because I spent all the time dealing with it.

01:26:33 Speaker_03
So it sure looks to me like the Chinese AI solution will be different from the West because of that fundamental bias against freedom of speech. Because these things are noisy. They make a lot of noise.

01:26:47 Speaker_01
They'll probably still make AI weapons though.

01:26:49 Speaker_03
Well, on the weapons side, you have to assume that every new technology is ultimately strengthened in a war. The tank was invented in World War I. At the same time, you had the initial forms of airplanes.

01:27:03 Speaker_03
Much of the Second World War was an air campaign, which essentially built many, many things. And there's a book called Freedom's Forge about the American war production effort.

01:27:17 Speaker_03
According to the book, they ultimately got to the point where they could build two or three airplanes a day at scale. So in an emergency, nations have enormous power.

01:27:29 Speaker_01
I get asked all the time if anyone's going to have a job left to do, because this is the disruption of intelligence.

01:27:35 Speaker_01
And whether it's people driving cars today, I mean, we saw the Tesla announcement of the robo-taxis, whether it's accountants, lawyers, and everyone in between, or podcasters, are we going to have jobs left?

01:27:47 Speaker_03
Well, this question has been asked for 200 years. There were the Luddites here in Britain way back when. And inevitably, when these technologies come along, there's all these fears about them.

01:28:00 Speaker_03
Indeed, with the Luddites, there were riots and people, you know, destroying the looms and all of this kind of stuff. But somehow we got through it.

01:28:06 Speaker_03
So my own view is that there will be a lot of job dislocation, but there will be a lot more jobs, not fewer jobs. And here's why. We have a demographic problem in the world, especially in the developed world, where we're not having enough children.

01:28:24 Speaker_03
That's well understood. Furthermore, we have a lot of older people, and the younger people have to take care of the older people, and they have to be more productive.

01:28:32 Speaker_03
If you have young people who need to be more productive, the best way to make them more productive is to give them more tools to make them more productive, whether it's

01:28:41 Speaker_03
a machinist that goes from a manual machine to a CNC machine, or, in the more modern case, a knowledge worker who can achieve more objectives. We need that productivity growth.

01:28:51 Speaker_03
If you look at Asia, which is the centerpiece of manufacturing, they have all this cheap labor. Well, it's not so cheap anymore. So do you know what they did? They added robotic assembly lines.

01:29:01 Speaker_03
So today when you go to China in particular, it's also true in Japan and Korea, the manufacturing is largely done by robots. Why? Because their demographics are terrible and their cost of labor is too high. So the future is not fewer jobs.

01:29:15 Speaker_03
It's actually a lot of jobs that are unfilled with people who may have a job skill mismatch, which is why education is so important. Now, what are examples of jobs that go away? Automation has always gotten rid of jobs that are dangerous.

01:29:31 Speaker_03
physically dangerous, or ones which are essentially too repetitive and too boring for humans. I'll give you an example. Security guards. It makes sense that security guards would become robotic because it's hard to be a security guard. You fall asleep.

01:29:47 Speaker_03
You don't know quite what to do. And these systems can be smart enough to be very, very good security. Now, these are important sources of income for these people. They're going to have to find another job.

01:29:57 Speaker_03
Another example, in Hollywood, everyone's concerned that AI is going to take over their jobs. All the evidence is the inverse, and here's why. The stars still get money. The producers still make money. They still distribute their movie.

01:30:12 Speaker_03
But their cost of making the movie is lower because they use, for example, synthetic backdrops so they don't have to build the set. They can do synthetic makeup. Now, there are job losses there.

01:30:22 Speaker_03
So the people who make the set and do the makeup are going to have to go back into construction and personal care. By the way, in America, and I think it's true here, there's an enormous shortage of people who can do high-quality craftsmanship.

01:30:36 Speaker_03
Those people will have jobs, they're just different.

01:30:38 Speaker_01
And they may not be in Los Angeles. Am I going to have to interface with this technology? Am I going to have to get a Neuralink in my brain?

01:30:45 Speaker_01
Because you go over the subject of there being these sort of two species of humans, potentially, ones that do have a way to incorporate themselves more with artificial intelligence and those that don't.

01:30:58 Speaker_01
And if that is the case, what is the time horizon in your view of that happening?

01:31:02 Speaker_03
I think Neuralink is much more speculative, because you're dealing with direct brain connection, and nobody's going to drill into my brain until it's needed, trust me. I suspect you feel the same.

01:31:12 Speaker_03
I guess my overall view is that you will not notice how much of your world has been co-opted by these technologies because they will produce greater delight. If you think about it, a lot of life is inconvenient.

01:31:34 Speaker_03
It's fix this, call this, make this happen. AI systems should make all that seamless. You should be able to wake up in the morning and have coffee and not have a care in the world and have the computer help you have a great day.

01:31:46 Speaker_03
This is true of everyone. Now, what happens to your profession? Well, as we said, no matter how good the computers are, people are going to want to care about other people.

01:31:57 Speaker_03
Another example, let's imagine you have Formula One, and you have Formula One with humans in it, and then you have a robot Formula One, where the cars are driven by the equivalent of a robot. Is anyone going to go to the robotic Formula One?

01:32:11 Speaker_03
I don't think so. because of the drama, the human achievement, and so forth. Do you think that when they run the marathon here in London, they're going to have robots running with humans? Of course not, right?

01:32:22 Speaker_03
Of course the robots can run faster than humans. It's not interesting. What is interesting is to see human achievement.

01:32:28 Speaker_03
So I think the commentators who say, oh, there won't be any jobs, you won't care, I think they miss the point that we care a great deal about each other as human beings. We have opinions.

01:32:38 Speaker_03
You have a detailed opinion about me having just met me right now, and we just sort of naturally size each other up: your face, your mannerisms, and so forth. We can describe it all, right? The robot shows up, and it's like, oh my God, another robot, how boring.

01:32:52 Speaker_01
Why is Sam Altman, one of the co-founders of OpenAI, working on universal basic income projects like Worldcoin then?

01:33:00 Speaker_03
Well, Worldcoin is not the same thing as universal basic income. There is a belief in the tech industry that goes something like this: the politics of abundance, what we do, is going to create so much abundance

01:33:17 Speaker_03
that most people won't have to work and there'll be a small number of groups that work, who are typically these people themselves, and there'll be so much surplus, everyone can live like a millionaire and everyone will be happy.

01:33:28 Speaker_03
I completely think this is false. I think none of what I just told you is true. But all of these UBI ideas come from this notion that humans don't behave the way we actually do. So I'm a critic of this view.

01:33:41 Speaker_03
I believe that we as humans, so an example is we're going to make the legal profession much, much easier because we can automate much of the technical work of lawyers. Does that mean we're going to have fewer lawyers? No.

01:33:55 Speaker_03
The current lawyers will just do more laws. They'll add more complexity. The system doesn't get easier. The humans become more sophisticated in their application of the principles. We have this thing called basically reciprocal altruism.

01:34:12 Speaker_03
That's part of us, but we also have our bad sides as well. Those are not going away because of AI.

01:34:17 Speaker_01
When I think about AI, there's a simple analogy I often think of: say my IQ as Steven Bartlett is 100, and there's this AI sat next to me whose IQ is 1,000. What on earth would you want to give Steven to do?

01:34:29 Speaker_03
Because that 1,000 IQ would have really bad judgment in a couple of cases. Because remember that the AI systems do not have human values unless it's added.

01:34:39 Speaker_03
I would much rather talk to you about something involving a moral or human judgment, even with the thousand. I wouldn't mind consulting it. So tell me the history. How was this resolved in the past? How are these?

01:34:51 Speaker_03
But at the end of the day, in my view, the core aspects of humanity, which have to do with morals and judgment and beliefs and charisma, they're not going away.

01:35:00 Speaker_01
Is there a chance that this is the end of humanity?

01:35:03 Speaker_03
No. As for the way humanity dies: it's much harder to eliminate all of humanity than you think. All the people I've worked with on these biological attacks say it takes more than one horrific pandemic and so forth to eliminate humanity.

01:35:19 Speaker_03
And the pain can be very, very high in these moments. Look at World War I, World War II, the Holodomor in Ukraine in the 1930s, the Nazis. You know, these are horrifically painful things, but we survived, right?

01:35:34 Speaker_03
We as a humanity survived, and we will.

01:35:37 Speaker_01
I wonder if this is the moment where humans couldn't see around the corner.

01:35:42 Speaker_01
Because, you know, I've heard you talk about how the AIs will turn into agents, and they'll be able to speak to each other, and we won't be able to understand the language.

01:35:49 Speaker_03
Well, I have a specific proposal on that. There are points where humans should assert control. And I've been trying to think about where are they? I'll give you an example.

01:35:59 Speaker_03
There's something called recursive self-improvement, where the system just keeps getting smarter and smarter and learning more and more things. At some point, if you don't know what it's learning, you should unplug it.

01:36:11 Speaker_03
But we can't unplug them, can we? Sure you can. There's a power plug and there's a circuit breaker. Go and turn the circuit breaker off.

01:36:17 Speaker_03
Another example: there's a scenario, theoretical, where the system is so powerful it can produce a new model faster than the previous model can be checked. Okay, that's another intervention point.

01:36:33 Speaker_03
So, in each of these cases, if agents, and the technical term is called agents, what they really are is large language models with memory, and you can begin to concatenate them.

01:36:43 Speaker_03
You can say, this model does this, and then it feeds into this, and so forth. You build very powerful decision systems. We believe this is the thing that's occurring this year and next year. Everyone's doing them. They will arrive.

01:36:56 Speaker_03
The agents today speak in English. You can see what they're saying to each other. They're not human, but they are communicating what they're doing English to English to English.

01:37:08 Speaker_03
As long as, and it doesn't have to be English, but as long as it's human-understandable. So the thought experiment is: one of the agents says, I have a better idea.

01:37:15 Speaker_03
I'm going to communicate in my own language that I'm going to invent that only other agents understand. That's a good time to pull the plug.
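A toy tripwire for the scenario just described: agents must talk to each other in human-readable English, and if a message stops looking like English, the system halts. The readability test here is a crude word-frequency heuristic invented for the sketch; a real deployment would need something far more robust.

```python
# Minimal sketch of an agent-to-agent message monitor with a kill switch.
COMMON_WORDS = {"the", "a", "to", "and", "is", "of", "i", "you", "will", "plan"}

def looks_like_english(message: str) -> bool:
    words = message.lower().split()
    if not words:
        return False
    hits = sum(w.strip(".,!?") in COMMON_WORDS for w in words)
    return hits / len(words) > 0.15  # arbitrary threshold for the sketch

def relay(sender: str, receiver: str, message: str) -> None:
    if not looks_like_english(message):
        raise SystemExit(f"KILL SWITCH: {sender} -> {receiver} is no longer "
                         f"human-readable: {message!r}")
    print(f"{sender} -> {receiver}: {message}")

relay("agent_a", "agent_b", "I will plan the route and send you the steps.")
relay("agent_b", "agent_a", "zx9::q7#kk@@v1")  # trips the breaker
```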

01:37:23 Speaker_01
What is your biggest fear about AI?

01:37:26 Speaker_03
My actual fear is different from what you might imagine. My actual fear is that we're not going to adopt it fast enough to solve the problems that affect everybody. And the reason is that if you look at everyone's everyday lives, what do they want?

01:37:40 Speaker_03
They want safety. They want health care. They want great schools for their kids. Why don't we just work on that for a while? Why don't we make people's lives just better because of AI? We have all these other interesting things.

01:37:51 Speaker_03
Why don't we have a teacher, an AI teacher, that works with existing teachers in the language of the kid, in the culture of the kid, to get the kid as smart as they possibly can be?

01:38:04 Speaker_03
Why don't we have a doctor, a doctor's assistant really, that enables a human doctor to

01:38:10 Speaker_03
always know every possible best treatment, and then, based on the current situation, what the inventory is, which country it is, how their insurance works, determine the best way to treat that patient? Those are relatively achievable solutions.

01:38:21 Speaker_03
Why don't we have them? If you just did education and healthcare globally, the impact in terms of lifting human potential up would be so great, right, that it would change everything.

01:38:34 Speaker_03
It wouldn't solve the various other things that we complain about, about this celebrity or this misbehavior or this conflict or even this war, but it would establish a level playing field of knowledge and opportunity at a global level that has been the dream for decades and decades and decades.

01:38:53 Speaker_01
One of the things that I think about all the time, because my life is quite hectic and busy, is how to manage my energy load.

01:38:59 Speaker_01
And as a podcaster, you kind of have to manage your energy in such a way that you can have these articulate conversations with experts on subjects you don't understand.

01:39:07 Speaker_01
And this is why Perfect Ted has become so important in my life, because previously, when it came to energy products, I had to make a trade-off that I wasn't happy with. Typically, if I wanted the energy, I had to deal with high sugar.

01:39:17 Speaker_01
I had to deal with the jitters and crashes that come along with a lot of the mainstream energy products. And I also just had to tolerate the fact that if I wanted energy, I had to put up with a lot of artificial ingredients, which my body didn't like.

01:39:29 Speaker_01
And that's why I invested in Perfect Ted and why they're one of the sponsors of this podcast. It has changed not just my life, but my entire team's life. And for me, it's drastically improved my cognitive performance, but also my physical performance.

01:39:39 Speaker_01
So if you haven't tried Perfect Ted yet, you must have been living under a rock. Now is the time. You can find Perfect Ted at Tesco and Waitrose or online where you can enjoy 40% off with code DIARY40 at checkout. Head to perfectted.com.

01:39:55 Speaker_01
Throughout the pandemic, I've been a big supporter of in-person work. It was a contrarian view, but I think it's now less of a contrarian view, that companies and CEOs need to be clear in their convictions around how they work.

01:40:06 Speaker_01
And one of the things that I've been criticized a lot for is that I'm for having people in a room together. So my companies, we're not remote. We work together in an office, as I said, down the road from here.

01:40:18 Speaker_01
And I believe in that because of community and engagement and synchronous work. And I think that work now has a responsibility to be more than just a set of tasks you do, in a world where we're lonelier than ever before.

01:40:28 Speaker_01
There's more disconnection. And especially for young people who don't have families and so on, having them work alone in a small white box in a big city like London or New York is robbing them of something which I think is important.

01:40:40 Speaker_01
This was a contrarian view. It's become less contrarian as the big tech companies in America have started to roll back some of their initial knee-jerk reactions to the pandemic.

01:40:49 Speaker_01
A lot of them are asking their team members to come back into the office at least a couple of days a week. What's your point of view on this?

01:40:54 Speaker_03
So I have a strong view that I want people in an office. It doesn't have to be all one office, but I want them in an office. And partly it's for their own benefit. If you're in your 20s, when I was a young executive, I knew nothing of what I was doing.

01:41:07 Speaker_03
I literally was just lucky to be there. And I learned by hanging out at the water cooler, going to meetings, hanging out, being in the hallway.

01:41:14 Speaker_03
Had I been at home, I wouldn't have had any of that knowledge, which ultimately was central to my subsequent promotions. So if you're in your 20s, you want to be in an office because that's how you're going to get promoted.

01:41:25 Speaker_03
And I think that's consistent with the fact that the majority of the people who really want to work from home have honest problems with commuting and family and so forth. They're real issues. The problem with our joint view is that it's not supported by the data.

01:41:37 Speaker_03
The data indicates that productivity is actually slightly higher when you allow work from home. So you and I really want that company of people sitting around the table and so forth, but the evidence does not support our view. Interesting.

01:41:53 Speaker_03
Is that true? It is absolutely true.

01:41:55 Speaker_01
Why are Facebook and Snapchat and all these companies rolling back their remote working policies?

01:42:00 Speaker_03
Not everyone is. And most companies are doing various forms of hybrid, where it's two days or three days or so forth.

01:42:10 Speaker_03
I'm sure that the average listener here who works in the public sector or in a government says, well, my God, they're not in the office every day.

01:42:18 Speaker_03
But I'll tell you that at least for the industries that have been studied, there's evidence that allowing that flexibility from work from home increases productivity. I don't happen to like it, but I want to acknowledge the science is there.

01:42:31 Speaker_01
What is the advice that you wish you'd gotten at my age that you didn't get?

01:42:36 Speaker_03
The most important thing is probably to keep betting on yourself, and bet again, and roll the dice, and roll the dice. What happens as you get older is you realize that these opportunities were in front of you and you didn't jump for them.

01:42:49 Speaker_03
Maybe you were in a bad mood, or you didn't know who to call, or so forth. Life can be understood as a series of opportunities that are put before you, and they're time-limited.

01:43:01 Speaker_03
I was fortunate that I got the call after a number of people had turned it down to work for and with Larry and Sergey at Google. It changed my life. But that was luck and timing.

01:43:12 Speaker_03
One friend of mine on the board said to me recently, and I was very thankful to him, he said, but you know, you did one thing right. I said, what? He said, you said yes. So your philosophy in life should be to say yes to that opportunity.

01:43:26 Speaker_03
And yes, it's painful. And yes, it's difficult. And yes, you have to deal with your family. And yes, you have to travel to some foreign place and so forth. Get on the airplane and get it done.

01:43:35 Speaker_01
What's the hardest challenge you've dealt with in your life?

01:43:38 Speaker_03
Well, on the personal side, I've had a set of personal problems and tragedies, like everyone does. In a business context, there were moments at Google where we had control over an industry and we didn't execute well.

01:44:00 Speaker_03
The most obvious one is social media. At the time when Facebook was founded, we had a system which we called Orkut, which was really, really interesting. And somehow we did everything well, but we missed that one. Right.

01:44:13 Speaker_03
And I would have preferred otherwise, and I'll take responsibility for that.

01:44:16 Speaker_01
We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for.

01:44:21 Speaker_01
And the question left for you is, what is your non-negotiable, something you do that significantly improves everyday life?

01:44:29 Speaker_03
Well, what I try to do is I try to be online and I also try to keep people honest. Every day you hear all sorts of ideas and so forth, half of which are right, half of which are wrong. I try to make sure I know the truth as best we can determine it.

01:44:45 Speaker_01
Eric, thank you so much. Thank you. It's such an honor. Your books have shaped my thinking in so many important ways.

01:44:52 Speaker_01
And I think your new book, Genesis, is the single best book I've read on the subject of AI, because you take a very nuanced approach to the subject matter.

01:45:01 Speaker_01
And I think sometimes it's tempting to be binary in your way of thinking about this technology, the pros and the cons. But your writing, your videos, your work takes this really balanced but informed approach to it.

01:45:11 Speaker_01
I have to say, as an entrepreneur, the Trillion Dollar Coach book is one I highly recommend everybody goes and reads, because it's just a really great manual for being a leader and an entrepreneur in the modern age.

01:45:21 Speaker_01
I'm going to link all five of these books in the comments section below. The new book, Genesis, comes out in the US, I believe, on the 19th of November. I don't have the UK date, but I'll find it and I'll put it in.

01:45:34 Speaker_01
But it's a critically important book that nobody should skip. I've been searching for answers that are contained in this book for a very, very long time. I've been having a lot of conversations on this podcast in search of some of these answers.

01:45:46 Speaker_01
And I feel clearer about myself, my future, but also the future of society because I've read this book. So thank you for writing it.

01:45:53 Speaker_03
And thank you. And let's thank Dr. Kissinger. He finished the last chapter in his last week of life, on his deathbed. That's how profound he thought this book was. And all I'll tell you is that he wanted to set us up for a good next 50 years.

01:46:09 Speaker_03
Having lived for so long and seen both good and evil, he wanted to make sure we continue the good progress we're making as a society.

01:46:18 Speaker_01
Is there anything he would want to say?

01:46:21 Speaker_03
Any answer he gave would take five minutes.

01:46:26 Speaker_01
A remarkable man. Thank you, Eric. Thank you. I'm going to let you into a little bit of a secret.

01:46:36 Speaker_01
And you're probably going to think that I'm a little bit weird for saying this, but our team are our team because we absolutely obsess about the smallest things.

01:46:44 Speaker_01
Even with this podcast, when we're recording, we measure the CO2 levels in the studio, because if the level gets above a thousand parts per million, cognitive performance dips. This is the type of 1% improvement we make on our show.

01:46:55 Speaker_01
And that is why the show is the way it is. By understanding the power of compounding 1%, you can absolutely change your outcomes in your life. It isn't about drastic transformations or quick wins.

01:47:07 Speaker_01
It's about the small, consistent actions that have a lasting change in your outcomes. So two years ago, we started the process of creating this beautiful diary. And it's truly beautiful.

01:47:18 Speaker_01
Inside, there's lots of pictures, lots of inspiration and motivation as well, some interactive elements. And the purpose of this diary is to help you identify, stay focused on, develop consistency with the 1% that will ultimately change your life.

01:47:32 Speaker_01
We have a limited number of these 1% Diaries, and if you want to do this with me, then join our waiting list. I can't guarantee that all of you who join the waiting list will be able to get one, but if you join now, you have a higher chance.

01:47:42 Speaker_01
The waiting list can be found at thediary.com. I'll link it below, but that is thediary.com.