
20VC: Sam Altman on The Trajectory of Model Capability Improvements: Will Scaling Laws Continue | Semi-Conductor Supply Chains | What Startups Will be Steamrolled by OpenAI and Where is Opportunity AI transcript and summary - episode of podcast The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch





Episode: 20VC: Sam Altman on The Trajectory of Model Capability Improvements: Will Scaling Laws Continue | Semi-Conductor Supply Chains | What Startups Will be Steamrolled by OpenAI and Where is Opportunity


Author: Harry Stebbings
Duration: 00:39:53

Episode Shownotes

Sam Altman is the CEO of OpenAI, one of the most important companies in history. OpenAI is on a mission to ensure that artificial general intelligence benefits all of humanity. Prior to OpenAI, Sam was the President of Y Combinator and an angel investor in Stripe, Airbnb, Reddit and Instacart.

15 Questions with OpenAI CEO Sam Altman:

1. Will the trajectory of model capability improvement keep going at the same rate as it has been?
2. When did Sam doubt the continuance of scaling laws most? What has been the hardest technical research challenge OpenAI has overcome?
3. How worried is Sam about semiconductor supply chains and the international tensions around them?
4. What is Sam's biggest worry today? How has it changed over the last 12 months and 5 years?
5. In what ways does Sam feel he was, and is, unprepared for the role of CEO of OpenAI?
6. Was Masa Son right to suggest that $9TRN of value will be created every year by AI?
7. Why does Sam disagree with Larry Ellison's statement that it will cost $100BN to enter the foundation model race?
8. Was Keith Rabois right that the best way to build companies is to hire under-30s?
9. What unmade decision weighs on Sam's mind most often?
10. What is Sam most grateful to Y Combinator for?
11. What would Sam build if he were a 23-year-old starting today with the foundational AI technology that is already in place?
12. What should startups not try to build, because OpenAI will steamroll them? What should they try to build where OpenAI will not go?
13. What does Sam believe is the most exciting use of agents that he has not seen created yet?
14. How does Sam believe that human potential is most wasted today?
15. Who does Sam most respect in the world of AI today? Why them?

Full Transcript

00:00:00 Speaker_01
We are going to try our hardest and believe we will succeed at making our models better and better and better.

00:00:05 Speaker_01
If you are building a business that patches some current small shortcomings, if we do our job right, that will not be as important in the future.

00:00:15 Speaker_01
We believe that we are on a quite steep trajectory of improvement and that the current shortcomings of the models today will just be taken care of by future generations. I encourage people to be aligned with that.

00:00:27 Speaker_02
This is 20VC with me, Harry Stebbings, and what a discussion we have for you today. I was very honoured to be asked to interview Sam Altman at OpenAI's Dev Day in London, and our episode today is the exclusive of that discussion.

00:00:40 Speaker_02
For those that have been living under a rock, Sam Altman is the CEO of OpenAI, one of the most important companies in history. OpenAI is on a mission to ensure that Artificial General Intelligence benefits all of humanity.

00:00:53 Speaker_02
Prior to OpenAI, Sam was the president of Y Combinator and an angel investor in Stripe, Airbnb, Reddit and Instacart. For insights, you'll love the Sam Altman book summary collection on the Blinkist app.

00:01:07 Speaker_02
With Blinkist, you can access first-class summaries of his favorite books, and 7,500 more, to read and to listen to in just 15 minutes. To mention just a couple: Zero to One by Peter Thiel, or The Beginning of Infinity by David Deutsch.

00:01:23 Speaker_02
Being recommended by the New York Times and Apple's CEO Tim Cook, it's no surprise 82% of Blinkist users see themselves as self-optimizers and 65% say it's essential for business and career growth.

00:01:37 Speaker_02
And speaking of growth, over 32 million users have been empowered by Blinkist since 2012. As a 20VC listener, hey, you get a 25% discount on Blinkist.

00:01:49 Speaker_02
That's B-L-I-N-K-I-S-T. Just visit Blinkist.com forward slash 2-0-V-C to claim your discount and transform the way you learn.

00:02:00 Speaker_02
And while you're optimizing your learning, let's also optimize your finances with Brex, the financial stack founders can bank on. Brex knows that nearly 40% of startups fail because they run out of cash.

00:02:11 Speaker_02
So they built a banking experience that takes every dollar further. It's such a difference from traditional banking options that leave your cash sitting idle while chipping away at it with fees.

00:02:22 Speaker_02
To help you protect your cash and extend your runway, Brex combined the best things about checking, treasury, and FDIC insurance in one powerhouse account. You can send and receive money worldwide at lightning speed.

00:02:35 Speaker_02
You can get 20x the standard FDIC protection through program banks and you can earn industry-leading yield from your first dollar while still being able to access your funds anytime. Brex is a top choice for startups.

00:02:49 Speaker_02
In fact, hey, it's used by one in every three startups in the US. Just check them out now, brex.com forward slash startups. And talking about building trust, SecureFrame provides incredible levels of trust to your customers through automation.

00:03:04 Speaker_02
SecureFrame empowers businesses to build trust with customers by simplifying information security and compliance through AI and automation.

00:03:13 Speaker_02
Thousands of fast-growing businesses, including Nasdaq, AngelList, Doodle, and Coda, trust SecureFrame to expedite their compliance journey for global security and privacy standards such as SOC 2, ISO 27001, HIPAA, GDPR, and more.

00:03:29 Speaker_02
Backed by top-tier investors and corporations such as Google and Kleiner Perkins, the company is among Forbes' list of Top 100 Startup Employers for 2023 and Business Insider's list of the 34 Most Promising AI Startups for 2023.

00:03:45 Speaker_02
Learn more today at Secureframe.com. It's a must. Hello, everyone. Welcome to OpenAI Dev Day. I am Harry Stebbings of 20VC, and I am very, very excited to interview Sam Altman. Sam, thank you for letting me do this today with you. Thanks for doing it.

00:04:05 Speaker_02
I want to start by diving in. We had a lot of fantastic questions from the audience across a number of different areas. I want to start with the question of, when we look forward,

00:04:16 Speaker_02
is the future of OpenAI more models like O1, or is it larger models that we would maybe have expected of old? How do we think about that?

00:04:25 Speaker_01
I mean, we want to make things better across the board, but this direction of reasoning models is of particular importance to us. I hope reasoning will unlock a lot of the things that we've been waiting years to do.

00:04:36 Speaker_01
The ability for models like this to, for example, contribute to new science, or help write a lot of very difficult code, can, I think, drive things forward to a significant degree.

00:04:46 Speaker_01
So you should expect rapid improvement in the O series of models and it's of great strategic importance to us.

00:04:53 Speaker_02
When we look forward to OpenAI's future plans, how do you think about developing no-code tools for non-technical founders to build and scale AI apps? How do you think about that?

00:05:04 Speaker_01
It'll get there for sure. The first step will be tools that make people who know how to code well more productive. But eventually, I think we can offer really high-quality no-code tools, and already there's some out there that make sense.

00:05:16 Speaker_01
But you can't sort of in a no-code way say, I have like a full startup I want to build. That's going to take a while.

00:05:21 Speaker_02
When we look at where we are in the stack today, OpenAI sits in a certain place. How far up the stack is OpenAI going to go?

00:05:29 Speaker_02
If you're spending a lot of time tuning your RAG system, is this a waste of time because OpenAI ultimately thinks they'll own this part of the application layer? And how do you answer a founder who has that question?

00:05:40 Speaker_01
The general answer we try to give, we are going to try our hardest and believe we will succeed at making our models better and better and better.

00:05:48 Speaker_01
And if you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future.

00:05:58 Speaker_01
If on the other hand, you build a company that benefits from the model getting better and better.

00:06:04 Speaker_01
If an Oracle told you today that O4 was going to be just absolutely incredible and do all of these things that right now feel impossible, and you were happy about that, then maybe we're wrong, but at least that's what we're going for.

00:06:20 Speaker_01
If instead you say, okay, I'm going to pick one of the many areas where O1 preview underperforms, and I'm going to patch this and just barely get it to work, then you're assuming that the next turn of the model crank won't be as good as we think it will be.

00:06:34 Speaker_01
That is the general philosophical message we try to get out to startups. We believe that we are on a quite steep trajectory of improvement, and that the current shortcomings of the models today will just be taken care of by future generations.

00:06:50 Speaker_01
I would encourage people to be aligned with that.

00:06:52 Speaker_02
We did an interview before with Brad. Sorry, it's not quite on schedule, but I think the show has always been successful when we kind of go a little bit off schedule. Please go totally off. But there was this brilliant kind of meme that came out of it.

00:07:03 Speaker_02
You said, wearing this 20VC jumper, which was an incredibly proud moment for me, that for certain segments, like the one you mentioned there, there would be the potential to steamroll.

00:07:11 Speaker_02
If you're a founder building today, where is OpenAI potentially going to come and steamroll, versus where will they not? The same goes for me as an investor, trying to invest in opportunities that aren't going to get damaged.

00:07:22 Speaker_02
How should founders and me as an investor think about that?

00:07:26 Speaker_01
There will be many trillions of dollars of new market cap that gets created by using AI to build products and services that were either impossible or quite impractical before.

00:07:38 Speaker_01
There is this one set of areas where we're going to try to make it irrelevant, which is: we just want the models to be really, really good, such that you don't have to fight so hard to get them to do what you want them to do.

00:07:50 Speaker_01
But all of this other stuff, which is building these incredible products and services on top of this new technology, we think that just gets better and better.

00:07:58 Speaker_01
One of the surprises to me early on, and this is no longer the case, but in, like, the GPT-3.5 days, it felt like 95% of startups, something like that, wanted to bet against the models getting way better.

00:08:11 Speaker_01
And they were doing these things where we could already see GPT-4 coming and we're like, man, it's going to be so good. It's not going to have these problems.

00:08:18 Speaker_01
If you're building a tool just to get around this one shortcoming of the model, that's going to become less and less relevant. And we forget how bad the models were a couple of years ago. It hasn't been that long on the calendar.

00:08:30 Speaker_01
But there were just a lot of things. And so these seemed like good areas, building a thing to plug a hole, rather than building something to go deliver the great AI tutor or the great AI medical advisor or whatever.

00:08:42 Speaker_01
And so I felt like 95% of people were betting against the models getting better, and 5% of people were betting for the models getting better. I think that's now reversed. I think people have internalized the rate of improvement and

00:08:55 Speaker_01
have heard us on what we intend to do. It no longer seems to be such an issue, but it was something we used to fret about a lot, because we saw what was going to happen to all of these very hardworking people.

00:09:06 Speaker_02
You said about the trillions of dollars of value to be created there, and I promise we will return to these brilliant questions. I'm not sure if you saw, but Masa sat on stage and said, and I'm not going to do an accent,

00:09:18 Speaker_02
my accents are terrible, but there'll be $9 trillion of value created every single year, which will offset the $9 trillion capex that he thought would be needed. I'm just intrigued. How did you think about that when you saw that?

00:09:32 Speaker_02
How do you reflect on that?

00:09:34 Speaker_01
I think if we can get it right within orders of magnitude, that's good enough for now. There's clearly going to be a lot of capex spent and clearly a lot of value created.

00:09:41 Speaker_01
This happens with every other mega technological revolution of which this is clearly one. But next year will be a big push for us into these next generation systems. You talked about when there could be a no-code software agent.

00:09:56 Speaker_01
I don't know how long that's going to take, but if we use that as an example and imagine forward towards it, think about how much economic value gets unlocked for the world if anybody can just describe a whole company's worth of software that they want.

00:10:08 Speaker_01
This is a ways away, obviously. But when we get there and have it happen, think about how difficult and how expensive that is now.

00:10:15 Speaker_01
Think about how much value it creates if you keep the same amount of value, but make it wildly more accessible and less expensive. That's really powerful. And I think we'll see many other examples like that.

00:10:24 Speaker_01
I mentioned earlier, like health care and education, but those are two that are both trillions of dollars of value to the world to get right. And if AI can really, really, truly enable this to happen in a different way than it has before,

00:10:36 Speaker_01
I don't think big numbers are the point, and there's also the debate about whether it's 9 trillion or 1 trillion or whatever. It takes smarter people than me to figure that out. But the value creation does seem just unbelievable here.

00:10:49 Speaker_02
We're going to get to agents in terms of how that value is delivered. In terms of the delivery mechanism for that value, open source is an incredibly prominent method through which it could be delivered.

00:10:58 Speaker_02
How do you think about the role of open source in the future of AI? What do internal discussions look like for you when the question comes up: should we open source any models, or some models?

00:11:09 Speaker_01
There's clearly a really important place in the ecosystem for open source models. There's also really good open source models that now exist. I think there's also a place for nicely offered, well-integrated services and APIs.

00:11:23 Speaker_01
I think it makes sense that all of this stuff is on offer and people will pick what works for them.

00:11:26 Speaker_02
As a delivery mechanism, we have open source. As an end product to customers, and a way to deliver that, we can have agents. I think there's a lot of semantic confusion around what an agent is.

00:11:37 Speaker_02
How do you think about the definition of agents today, and what is an agent to you?

00:11:41 Speaker_01
This is like my off-the-cuff answer, it's not well considered, but: something that I can give a long-duration task to and provide minimal supervision during execution. What do you think people get wrong about agents?

00:11:57 Speaker_01
Well, it's more that I don't think any of us yet have an intuition for what this is going to be like; I'm gesturing at a world that seems important. Maybe I can give the following example.

00:12:07 Speaker_01
When people talk about an AI agent acting on their behalf,

00:12:11 Speaker_01
The main example they seem to give fairly consistently is, you know, you can ask the agent to go book you a restaurant reservation, and either it can use OpenTable or it can call the restaurant. Okay, sure, that's like a mildly

00:12:26 Speaker_01
annoying thing to have to do, and it maybe saves you some work. One of the things that I think is interesting is a world where you can just do things that you wouldn't or couldn't do as a human.

00:12:35 Speaker_01
So what if, instead of calling one restaurant to make a reservation, my agent would call, like, 300 and figure out which one had the best food for me, or some special thing available, or whatever.

00:12:46 Speaker_01
And then you would say, well, that's really annoying if your agent is calling 300 restaurants. But if it's an agent answering at each of those 300 places, then no problem. And it can be this massively parallel thing that a human can't do.

00:12:57 Speaker_01
So that's like a trivial example, but there are these like limitations to human bandwidth that maybe these agents won't have.

00:13:04 Speaker_01
The category I think is more interesting is not the one that people normally talk about, where you have this thing calling restaurants for you,

00:13:12 Speaker_01
but something that's more like a really smart senior co-worker that you can collaborate with on a project, where the agent can go do a two-day or two-week task really well, ping you when it has questions, and come back to you with a great work product.

00:13:28 Speaker_02
Does this fundamentally change the way that SaaS is priced? And normally it's on a per seat basis, but now you're actually kind of replacing labor, so to speak.

00:13:36 Speaker_02
How do you think about the future of pricing with that in mind, when you are such a core part of an enterprise workforce?

00:13:42 Speaker_01
I'll speculate here for fun, but we really have no idea. I mean, I could imagine a world where you can say, like, I want one GPU or 10 GPUs or 100 GPUs to just be like churning on my problems all the time.

00:13:53 Speaker_01
You're not paying per seat or even per agent, but it's priced based off the amount of compute that's working on your problems all the time.

00:14:02 Speaker_02
Do we need to build specific models for agentic use, or do we not? How do you think about that?

00:14:09 Speaker_01
There's a huge amount of infrastructure and scaffolding to build for sure, but I think O1 points the way to a model that is capable of doing great agentic tasks.

00:14:18 Speaker_02
On the model side, Sam, everyone says that models are depreciating assets. The commoditization of models is so rife. How do you respond and think about that?

00:14:28 Speaker_02
And when you think about the increasing capital intensity to train models, are we actually seeing the reversion of that where it requires so much money that actually very few people can do it?

00:14:37 Speaker_01
It's definitely true that they're depreciating assets. But this idea that they're not worth as much as they cost to train seems totally wrong.

00:14:46 Speaker_01
To say nothing of the fact that there's a positive compounding effect as you learn to train these models, you get better at training the next one. But the actual revenue we can make from a model, I think, justifies the investment.

00:14:58 Speaker_01
To be fair, I don't think that's true for everyone. There are probably too many people training very similar models.

00:15:06 Speaker_01
And if you're a little behind, or if you don't have a product with the sort of normal rules of business that make that product sticky and valuable, then yeah, maybe it's harder to get a return on the investment.

00:15:19 Speaker_01
We're very fortunate to have ChatGPT and hundreds of millions of people that use our models. And so even if it costs a lot, we get to like amortize that cost across a lot of people.

00:15:27 Speaker_02
How do you think about how OpenAI's models continue to differentiate over time, and where do you most want to focus to expand that differentiation?

00:15:35 Speaker_01
Reasoning is our current most important area of focus. I think this is what unlocks the next massive leap forward in value created. We'll improve them in lots of ways. We will do multimodal work.

00:15:49 Speaker_01
We will do other features in the models that we think are super important to the ways that people want to use these things. How do you think about reasoning in multimodal work like that? I hope it's just going to work.

00:16:00 Speaker_01
I mean, it obviously takes some doing to get done, but people, like when they're babies and toddlers, before they're good at language, can still do quite complex visual reasoning. So clearly this is possible.

00:16:10 Speaker_02
How will vision capabilities scale with new inference time paradigms set by O1?

00:16:17 Speaker_01
Without spoiling anything, I would expect rapid progress in image-based models.

00:16:22 Speaker_02
Going off schedule is one thing. Trying to tease that out might get me in real trouble. How does OpenAI make breakthroughs in terms of core reasoning?

00:16:30 Speaker_02
Do we need to start pushing into reinforcement learning as a pathway or other new techniques aside from the transformer?

00:16:36 Speaker_01
I mean there's two questions in there. There's how we do it and then there's everyone's favorite question which is what comes beyond the transformer. How we do it is like our special sauce. It's easy. It's really easy to copy something you know works.

00:16:47 Speaker_01
And one of the reasons it's so easy, which people don't talk about, is that you have the conviction of knowing it's possible. And so after a research lab does something, even if you don't know exactly how they did it, it's...

00:16:58 Speaker_01
I won't say easy, but it's doable to go off and copy it. And you can see this in the replications of GPT-4, and I'm sure you'll see this in replications of O1.

00:17:06 Speaker_01
What is really hard, and the thing that I'm most proud of about our culture, is the repeated ability to go off and do something new and totally unproven.

00:17:18 Speaker_01
A lot of organizations, now I'm not talking about AI researchers generally, a lot of organizations talk about the ability to do this. There are very few that do across any field.

00:17:28 Speaker_01
And in some sense, I think this is one of the most important inputs to human progress.

00:17:33 Speaker_01
One of the retirement things I fantasize about doing is writing a book of everything I've learned about how to build an organization and a culture that does this thing, not the organization that just copies what everybody else has done.

00:17:46 Speaker_01
Because I think this is something that the world could have a lot more of.

00:17:50 Speaker_01
It's limited by human talent, but there's a huge amount of wasted human talent, because this is not an organizational style, or culture, whatever you want to call it, that we are all good at building.

00:18:01 Speaker_01
So I'd love way more of that, and that is, I think, the thing most special about us. Sam, how is human talent wasted?

00:18:22 Speaker_01
One of the things I'm most excited about with AI is I hope it'll get us much better than we are now at helping get everyone to their max potential, which we are nowhere, nowhere near.

00:18:31 Speaker_01
There's a lot of people in the world that I'm sure would be phenomenal AI researchers, had their life paths just gone a little bit differently.

00:18:39 Speaker_02
Sam, you've had an incredible journey over the last few years through unbelievable hyper-growth. You say about writing a book there in retirement.

00:18:49 Speaker_02
If you reflect back on the 10 years of leadership change that you've undergone, how have you changed your leadership most significantly?

00:18:57 Speaker_01
Well, I think the thing that has been most unusual for me about these last couple of years is just the rate at which things have changed.

00:19:06 Speaker_01
At a normal company, you get time to go from zero to 100 million in revenue, 100 million to a billion, billion to 10 billion. You don't have to do that in like a two-year period. And you don't have to like build the company.

00:19:18 Speaker_01
We had the research, but we really didn't have a company in the sense of a traditional Silicon Valley startup that's, you know, scaling and serving lots of customers or whatever.

00:19:25 Speaker_01
Having to do that so quickly, there was just like a lot of stuff that I was supposed to get more time to learn than I got. What did you not know that you would have liked more time to learn? I mean, I would say like, what did I know?

00:19:40 Speaker_01
One of the things that just came to mind out of like a rolling list of 100 is how hard it is, how much active work it takes to get the company to focus not on how you grow the next 10%, but the next 10x.

00:19:54 Speaker_01
And growing the next 10%, it's the same things that worked before will work again. But to go from a company doing, say, like a billion to $10 billion in revenue requires a whole lot of change.

00:20:04 Speaker_01
And it is not the sort of, let's do next week what we did last week, mindset. And in a world where people don't get time to even get caught up on the basics because growth is just so rapid,

00:20:17 Speaker_01
I badly underappreciated the amount of work it took to be able to keep charging at the next big step forward while still not neglecting everything else that we have to do.

00:20:29 Speaker_01
There's a big piece of internal communication around that and how you share information, how you build the structures to get the company to get good at thinking about 10x more stuff or bigger stuff or more complex stuff.

00:20:42 Speaker_01
every eight months, twelve months, whatever.

00:20:44 Speaker_01
There's a big piece in there about planning, about how you balance what has to happen today and next month with the long-lead pieces you need in place to be able to execute in a year or two, whether that's the build-out of compute, or even things that are more normal; planning far enough ahead for office space in a city like San Francisco is surprisingly hard at this kind of growth rate.

00:21:06 Speaker_01
So there was either no playbook for this or someone had a secret playbook they didn't give me. We've all just sort of fumbled our way through this, but there's been a lot to learn on the fly.

00:21:14 Speaker_02
God, I don't know if I'm going to get into trouble for this, but sod it, I'll ask it anyway, and if so, I'll deal with it later. Keith Rabois did a talk and he said you should hire incredibly young people, under 30.

00:21:27 Speaker_02
And that is what Peter Thiel taught him. And that is the secret to building great companies. I'm intrigued, when you think about this book that you write in retirement,

00:21:35 Speaker_02
and that advice, that you build great companies by hiring incredibly young, hungry, ambitious people who are under 30, and that that is the mechanism.

00:21:44 Speaker_01
I think I was 30 when we started OpenAI, or at least thereabouts. So I wasn't that young. Seemed to work okay so far.

00:21:50 Speaker_02
The question is, how do you think about hiring incredibly young under 30s as this Trojan horse of youth, energy, ambition, but less experienced, or the much more experienced, I know how to do this, I've done it before?

00:22:04 Speaker_01
I mean, the obvious answer is you can succeed with hiring both classes of people. Like, right before this, I was sending someone a Slack message about a guy that we recently hired on one of the teams.

00:22:16 Speaker_01
I don't know how old he is, but low 20s, probably doing just insanely amazing work. And I was like, can we find a lot more people like this? This is just like off the charts. I don't get how these people can be so good, so young, but it clearly happens.

00:22:27 Speaker_01
And when you can find those people, they bring amazing, fresh perspective, energy, whatever else.

00:22:33 Speaker_01
On the other hand, when you're like designing some of the most complex and massively expensive computer systems that humanity has ever built, actually like pieces of infrastructure of any sort, then I would not be comfortable taking a bet on someone who is just sort of like starting out where the stakes are higher.

00:22:50 Speaker_01
So you want both. And I think what you really want is just an extremely high talent bar of people at any age; a strategy that says, I'm only going to hire younger people, or, I'm only going to hire older people, would, I believe, be misguided.

00:23:04 Speaker_01
It's not quite the framing that resonates with me, but part of it does. And one of the things that I feel most grateful to Y Combinator for is that inexperience does not inherently mean not valuable.

00:23:17 Speaker_01
And there are incredibly high potential people at the very beginning of their career that can create huge amounts of value.

00:23:25 Speaker_02
We as a society should bet on those people, and it's a great thing. I am going to return to some semblance of the schedule, as I'm really going to get told off. But Anthropic's models have been sometimes cited as being better for coding tasks.

00:23:38 Speaker_02
Why is that? Do you think that's fair? And how should developers think about when to pick OpenAI versus a different provider?

00:23:45 Speaker_01
Yeah, they have a model that is great at coding for sure. And it's impressive work. I think developers use multiple models most of the time, and I'm not sure how that's all going to evolve as we head towards this more agentified world.

00:23:59 Speaker_01
But I sort of think there's just going to be a lot of AI everywhere and something about the way that we currently talk about it or think about it feels wrong.

00:24:09 Speaker_01
Maybe if I had to describe it, we will shift from talking about models to talking about systems, but that'll take a while.

00:24:15 Speaker_02
When we think about scaling models, how many more model iterations do you think scaling laws will hold true for? It was a common refrain that it won't last for long, and it seems to be proving to last longer than people think.

00:24:28 Speaker_01
Without going into detail about how it's going to happen, the core of the question that you're getting at is, is the trajectory of model capability improvement going to keep going like it has been going?

00:24:40 Speaker_01
And the answer that I believe is yes for a long time.

00:24:43 Speaker_02
Have you ever doubted that? Totally.

00:24:46 Speaker_01
Why? Well, we've had, like, behavior we don't understand. We've had failed training runs. We've had all sorts of things. We've had to figure out new paradigms when we kind of get towards the end of one and have to figure out the next.

00:24:57 Speaker_01
What was the hardest one to navigate? Well, when we started working on GPT-4, there were some issues that caused us a lot of consternation that we really didn't know how to solve.

00:25:06 Speaker_01
We figured it out, but there was definitely a time period where we just didn't know how we were going to do that model. And then in this shift to O1 and the idea of reasoning models, that was something we had been excited about for a long time.

00:25:19 Speaker_01
But it was like a long and winding road of research to get here.

00:25:22 Speaker_02
Is it difficult to maintain morale on those long and winding roads, when training runs can fail? How do you maintain morale in those times?

00:25:30 Speaker_01
You know, we have a lot of people here who are excited to build AGI. That's a very motivating thing. And no one expects that to be easy or a straight line to success. But there's a famous quote from history.

00:25:41 Speaker_01
It's something like, I never pray and ask for God to be on my side. You know, I pray and hope to be on God's side. And there is something about betting on deep learning that feels like being on the side of the angels.

00:25:52 Speaker_01
And you kind of just, it eventually seems to work out, even though you hit some big stumbling blocks along the way. And so like a deep belief in that has been good for us.

00:26:00 Speaker_02
Can I ask you a really weird one? I heard a great quote the other day, and it was: the heaviest things in life are not iron or gold, but unmade decisions. What unmade decision weighs on your mind most?

00:26:11 Speaker_01
It's different every day. There's not one big one. I mean, I guess there are some big ones like about, are we going to bet on this next product or that next product? Or are we going to like build our next computer this way or that way?

00:26:25 Speaker_01
That are kind of, like, really high-stakes, one-way-door-ish, and that, like everybody else, I probably delay for too long.

00:26:31 Speaker_01
But mostly, the hard part is that every day it feels like there are a few new 51-49 decisions that come up. They make it to me precisely because they were 51-49 in the first place, and I don't feel particularly likely to do better than somebody else would have, but I kind of have to make them anyway.

00:26:51 Speaker_01
It's the volume of them. It is not any one.

00:26:54 Speaker_02
Is there a commonality in the person that you call when it's 51-49?

00:26:59 Speaker_01
No. I think the wrong way to do that is to have one person you lean on for everything.

00:27:04 Speaker_01
And the right way, at least for me, the right way to do it is to have like 15 or 20 people, each of which you have come to believe has good instincts and good context in a particular way.

00:27:13 Speaker_01
And you get to like phone a friend to the best expert rather than try to have just one across the board.

00:27:18 Speaker_02
In terms of hard decisions, I do want to touch on semiconductor supply chains. How worried are you about semiconductor supply chains and international tensions today?

00:27:28 Speaker_01
I don't know how to quantify that. Worried, of course, is the answer. I guess I could quantify it this way. It is not my top worry, but it is in like the top 10% of all worries. Am I allowed to ask what's your top worry?

00:27:41 Speaker_01
It's something about the sort of generalized complexity of all we as a whole field are trying to do. I think it's all going to work out fine. But it feels like a very complex system. Now, this kind of, like, works fractally at every level.

00:27:55 Speaker_01
So you can say that's also true, like, inside of OpenAI itself. That's also true inside of any one team. But

00:28:02 Speaker_01
An example of this, since we were just talking about semiconductors: you've got to balance the power availability with the right networking decisions, with being able to get enough chips in time and whatever risk there's going to be there, with the ability to have the research ready to intersect that, so you're not either caught totally flat-footed or left with a system that you can't utilize,

00:28:21 Speaker_01
with the right product that is going to use that research to be able to pay the eye-watering cost of that system.

00:28:28 Speaker_01
So supply chain makes it sound too much like a pipeline, but yeah, the overall ecosystem complexity, at every level of the fractal, is unlike anything I have seen in any industry before. And some version of that is probably my top worry.

00:28:44 Speaker_02
You said, unlike anything we've seen before, a lot of people, I think, compare this wave to the internet bubble in terms of the excitement and the exuberance. And I think the thing that's different is the amount that people are spending.

00:28:56 Speaker_02
Larry Ellison said that it will cost $100 billion to enter the foundation model race as a starting point. Do you agree with that statement? And when you saw that, were you like, yeah, that makes sense? No, I think it will cost less than that.

00:29:08 Speaker_01
But there's an interesting point here. which is everybody likes to use previous examples of a technology revolution to talk about, to put a new one into more familiar context.

00:29:19 Speaker_01
And A, I think that's a bad habit on the whole, but I understand why people do it. And B, I think the ones people pick for analogizing AI are particularly bad.

00:29:29 Speaker_01
So the internet was obviously quite different than AI and you brought up this one thing about cost and whether it costs like 10 billion or 100 billion or whatever to be competitive.

00:29:38 Speaker_01
Like, one of the defining things about the internet revolution was that it was actually really easy to get started.

00:29:46 Speaker_01
Now, another thing that cuts more towards the internet analogy: mostly, for many companies, this will just be like a continuation of the internet. It's just like someone else makes these AI models

00:29:56 Speaker_01
and you get to use them to build all sorts of great stuff and it's like a new primitive for building technology. But if you're trying to build the AI itself, that's pretty different.

00:30:05 Speaker_01
Another example people use is electricity, which I think doesn't make sense for a ton of reasons. The one I like the most, caveated by my earlier comment that I don't think people should be

00:30:16 Speaker_01
taking these analogies too seriously, is the transistor. It was a new discovery of physics. It had incredible scaling properties. It seeped everywhere pretty quickly.

00:30:26 Speaker_01
You know, we had things like Moore's law, and in a way we can now imagine a bunch of laws for AI that tell us something about how quickly it's going to get better. And everyone, like the whole tech industry, kind of benefited from it.

00:30:39 Speaker_01
And there's a lot of transistors involved in the products and delivery of services that you use, but you don't really think of them as transistor companies.

00:30:48 Speaker_01
There's a very complex, very expensive industrial process around it with a massive supply chain.

00:30:53 Speaker_01
And the incredible progress based off of this very simple discovery of physics led to this gigantic uplift of the whole economy for a long time, even though most of the time you didn't think about it.

00:31:03 Speaker_01
And you don't say, oh, this is a transistor product. It's just like, oh, all right, this thing can like process information for me.

00:31:09 Speaker_02
You don't even really think about that. It's just expected. Sam, I'd love to do a quick fire round with you. So I'm going to say a short statement. You give me your immediate thoughts, OK? OK.

00:31:17 Speaker_02
So you are building today as a, whatever, 23- or 24-year-old, with the infrastructure that we have today. What would you choose to build if you started today?

00:31:27 Speaker_01
Uh, some AI-enabled vertical. I'll use tutors as an example: like, the best AI tutoring product that I could possibly imagine to teach people, to help them learn. Any category like that. It could be the AI lawyer, it could be the sort of, like, AI CAD engineer, whatever.

00:31:43 Speaker_02
You mentioned your book. If you were to write a book, what would you call it?

00:31:46 Speaker_01
I don't have a title ready. I haven't thought about this book other than like, I wish something existed because I think it could unlock a lot of human potential. So maybe I think it would be something about human potential.

00:31:56 Speaker_02
What in AI does no one focus on that everyone should spend more time on?

00:32:01 Speaker_01
What I would love to see, and there are a lot of different ways to solve this problem, is something about an AI that can understand your whole life.

00:32:07 Speaker_01
Doesn't have to like literally be infinite context, but some way that you can have an AI agent that like knows everything there is to know about you, has access to all of your data, things like that.

00:32:16 Speaker_02
What was one thing that surprised you in the last month, Sam? It's a research result I can't talk about. It is breathtakingly good.

00:32:22 Speaker_01
Which competitor do you most respect? Why them? I mean, I kind of respect everybody in the space right now. I think there's like really amazing work coming from the whole field and incredibly talented, incredibly hardworking people.

00:32:35 Speaker_01
I don't mean this to be a question-dodge. It's like I can point to super talented people doing super great work everywhere in the field. Is there one? Not really. Tell me, what's your favorite OpenAI API?

00:32:46 Speaker_01
I think the new Realtime API is pretty awesome, but we have a big API business at this point, so there's a lot of good stuff in there. Who do you most respect in AI today, Sam? Let me give a shout out to the Cursor team.

00:32:57 Speaker_01
I mean, there's a lot of people doing incredible work in AI, but I think to really do what they've done and build what they've built...

00:33:03 Speaker_01
I thought about a bunch of researchers I could name, but in terms of using AI to deliver a really magical experience that creates a lot of value, in a way where other people just didn't quite manage to put the pieces together, I think it's really quite remarkable.

00:33:17 Speaker_01
How do you think about the trade-off between latency and accuracy? You need a dial to change between them. In the same way that you want to do a rapid-fire thing now, and I'm not even going that quick, but I'm trying not to think for multiple minutes.

00:33:30 Speaker_01
In this context, latency is what you want. But if you were like, hey, Sam, I want you to make a new important discovery in physics, you'd probably be happy to wait a couple of years. The answer is, it should be user controllable.

00:33:42 Speaker_02
When you think about insecurity in leadership, I think it's something that everyone has.

00:33:46 Speaker_02
When you think about maybe an insecurity in leadership, an area of your leadership that you'd like to improve, where would you most like to improve as a leader and a CEO today?

00:33:54 Speaker_01
It's a long list. I'm trying to scan for the top one here. The thing I'm struggling with most this week is I feel more uncertain than I have in the past about the details of what our product strategy should be.

00:34:09 Speaker_01
I think that product is a weakness of mine in general. It's something that right now the company needs stronger and clearer vision on from me.

00:34:18 Speaker_01
We have a wonderful head of product and a great product team, but it's an area that I wish I were a lot stronger on. I'm acutely feeling the miss right now.

00:34:26 Speaker_02
You hired Kevin. I've known Kevin for years. He's exceptional.

00:34:30 Speaker_01
Kevin's amazing. What makes Kevin world class as a product leader to you? Discipline was the first word that came to mind.

00:34:36 Speaker_01
Focus. What we're going to say no to. Like, really trying to speak on behalf of the user about why we would do something or not do something. Really trying to be rigorous about not having, like, fantastical dreams.

00:34:48 Speaker_02
Sam, you've done a lot of interviews. I want to finish with one question, which is: we have a five-year horizon for OpenAI and a ten-year.

00:34:55 Speaker_02
If you had a magic wand and could paint that scenario, can you paint that canvas for me for the five-year and the ten-year?

00:35:03 Speaker_01
I mean, I can easily do it for like the next two years, but if we are right and we start to make systems that are so good at, you know, for example, helping us with scientific advancement,

00:35:14 Speaker_01
Actually, I will just say, I think in five years, it looks like we have an unbelievably rapid rate of improvement in technology itself. People are like, man, the AGI moment came and went, whatever. The pace of progress is totally crazy.

00:35:31 Speaker_01
And we're discovering all this new stuff, both about AI research and also about all of the rest of science. And that feels like if we could sit here now and look at it, it would seem like it should be very crazy.

00:35:43 Speaker_01
And then the second part of the prediction is that society itself actually changes surprisingly little. An example of this would be that I think if you asked people five years ago if computers were going to pass the Turing test, they would say no.

00:35:55 Speaker_01
And then if you said, well, what if an oracle told you it was going to, they would say, well, it would somehow be like just this crazy breathtaking change for society. And we did kind of satisfy the Turing test, roughly speaking, of course.

00:36:07 Speaker_01
And society didn't change that much. It just sort of went whooshing by. That's kind of an example of what I expect to keep happening, which is that progress, scientific progress, keeps going, outperforming all expectations.

00:36:19 Speaker_01
And society, in a way that I think is good and healthy, changes not that much.

00:36:23 Speaker_02
You've been amazing. I had this list of questions. I didn't really stick to them. Thank you for putting up with my meandering around different questions. Thank you everyone for coming. I'm so thrilled that we were able to do this today.

00:36:34 Speaker_02
And Sam, thank you for making it happen, man. Thank you. I want to say again a huge thanks to Sam and the team at OpenAI for asking me to do that. I was thrilled and it really is conversations like that which make me so grateful to do what I do.

00:36:46 Speaker_02
If you want to watch the full episode, you can on YouTube by searching for 20VC. That's 2-0-VC. But before we leave you today... Today, we're thrilled to have Sam Altman on the show.

00:36:56 Speaker_02
If you're inspired by his insights, you'll love the Sam Altman book summary collection on the Blinkist app. With Blinkist, you can access first-class summaries of his favorite books and 7,500 more to read and to listen to in just 15 minutes.

00:37:12 Speaker_02
Just to mention some: Zero to One by Peter Thiel, or The Beginning of Infinity by David Deutsch.

00:37:18 Speaker_02
Recommended by the New York Times and Apple's CEO Tim Cook, it's no surprise that 82% of Blinkist users see themselves as self-optimizers and 65% say it's essential for business and career growth.

00:37:32 Speaker_02
And speaking of growth, over 32 million users have been empowered by Blinkist since 2012. As a 20VC listener, hey, you get a 25% discount on Blinkist.

00:37:44 Speaker_02
B-L-I-N-K-I-S-T, just visit Blinkist.com forward slash 2-0-V-C to claim your discount and transform the way you learn. And while you're optimizing your learning, let's also optimize your finances with Brex, the financial stack founders can bank on.

00:38:02 Speaker_02
Brex knows that nearly 40% of startups fail because they run out of cash. So they built a banking experience that takes every dollar further.

00:38:09 Speaker_02
It's such a difference from traditional banking options that leave your cash sitting idle while chipping away at it with fees.

00:38:16 Speaker_02
To help you protect your cash and extend your runway, Brex combined the best things about checking, treasury, and FDIC insurance in one powerhouse account. You can send and receive money worldwide at lightning speed.

00:38:30 Speaker_02
You can get 20x the standard FDIC protection through program banks, and you can earn industry-leading yield from your first dollar while still being able to access your funds anytime. Brex is a top choice for startups.

00:38:44 Speaker_02
In fact, hey, it's used by one in every three startups in the US. Just check them out now, brex.com forward slash startups. And talking about building trust, SecureFrame provides incredible levels of trust to your customers through automation.

00:38:59 Speaker_02
SecureFrame empowers businesses to build trust with customers by simplifying information security and compliance through AI and automation.

00:39:08 Speaker_02
Thousands of fast-growing businesses, including Nasdaq, AngelList, Doodle, and Coda, trust SecureFrame to expedite their compliance journey for global security and privacy standards such as SOC 2, ISO 27001, HIPAA, GDPR, and more.

00:39:24 Speaker_02
Backed by top-tier investors and corporations such as Google and Kleiner Perkins, the company is among Forbes' list of Top 100 Startup Employers for 2023 and Business Insider's list of the 34 Most Promising AI Startups for 2023.

00:39:39 Speaker_02
Learn more today at Secureframe.com. It is a must. As always, I so appreciate all your thoughts and feedback on the show and I would love to see what you think of the YouTube channel at 20VC.