
Episode: Dylan Patel & Jon (Asianometry) – How the Semiconductor Industry Actually Works


Author: Dwarkesh Patel
Duration: 02:09:57

Episode Shownotes

A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade. Dylan Patel runs SemiAnalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world's best YouTube channel on semiconductors and business history.

* What Xi would do if he became scale-pilled
* $1T+ in datacenter buildout by end of decade

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsors:

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full-time roles, internship, tech podcast, and upcoming Kaggle competition, go here.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes, and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps

00:00:00 – Xi's path to AGI
00:04:20 – Liang Mong Song
00:08:25 – How semiconductors get better
00:11:16 – China can centralize compute
00:18:50 – Export controls & sanctions
00:32:51 – Huawei's intense culture
00:38:51 – Why the semiconductor industry is so stratified
00:40:58 – N2 should not exist
00:45:53 – Taiwan invasion hypothetical
00:49:21 – Mind-boggling complexity of semiconductors
00:59:13 – Chip architecture design
01:04:36 – Architectures lead to different AI models? China vs. US
01:10:12 – Being head of compute at an AI lab
01:16:24 – Scaling costs and power demand
01:37:05 – Are we financing an AI bubble?
01:50:20 – Starting Asianometry and SemiAnalysis
02:06:10 – Opportunities in the semiconductor stack

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

Full Transcript

00:00:00 Speaker_06
Today, I'm chatting with Dylan Patel, who runs Semianalysis, and John, who runs the Asianometry YouTube channel. Does he have a last name? No, I do not.

00:00:09 Speaker_03
No, I'm just kidding. I'm John Y. That's right, is it? I'm John Y. Wait, why is it only one letter?

00:00:15 Speaker_00
Because Y is the best letter.

00:00:19 Speaker_03
Why is your face covered?

00:00:21 Speaker_00
Why not? No, seriously, why is it covered?

00:00:25 Speaker_06
Because I'm afraid of looking at myself get older and fatter over the years. But seriously, it's like anonymity, right? Anonymity. Okay. Yeah. By the way, so do you know what Dylan's middle name is? Actually, no. I don't know.

00:00:38 Speaker_00
He just told me.

00:00:38 Speaker_06
What's my father's name?

00:00:40 Speaker_03
I'm not going to say it, but I remember. You could say it. It's fine. Sanjay? Yes. What's his middle name?

00:00:47 Speaker_06
That's right. Wow. So I'm Dwarkesh Sanjay Patel. He's Dylan Sanjay Patel.

00:00:52 Speaker_02
It's like literally my white name. It's unfortunate my parents decided, between my older brother and me, to give me the white name, and I could have been Dwarkesh. They're like, you know, how amazing it would have been if we had the same name. Like, butterfly effect and all.

00:01:06 Speaker_02
It probably all wouldn't have turned out the same way. But like, maybe it would have been even closer.

00:01:10 Speaker_06
We would have met each other sooner, you know Yeah, yeah.

00:01:12 Speaker_02
Yeah.

00:01:12 Speaker_06
All right, first question. If you're Xi Jinping and you're scale-pilled, what is it that you do? Don't answer that question, John.

00:01:21 Speaker_00
That's bad for AI safety. I would basically be contacting every foreigner. I would be contacting every Chinese national with family back home and saying, I want information. I want to know your recipes. I want to know, I want contacts.

00:01:32 Speaker_00
What kind of, like, AI lab foreigners or hardware foreigners? Honeypotting OpenAI? I would basically, like, this is totally off cycle, but like, this is off the reservation, but I was doing a video about Yugoslavia's nuclear program.

00:01:47 Speaker_00
Their nuclear weapons program started from absolutely nothing, one guy from Paris. Uh-huh. And then this one guy from Paris, he showed up, and who knows what he did, but he knew a little bit about making atomic nuclear weapons. And he was like, okay, well, I need help. And the state secret police was like, we'll get you everything. And for like a span of four years...

00:02:11 Speaker_00
They basically, they drew up a list. What do you need? What do you want? What are you going to do? What is it going to be for? And they just, state police just got everything.

00:02:21 Speaker_00
If I was running a country and I needed to catch up on that, that's the sort of thing that I would be doing.

00:02:25 Speaker_06
So okay, let's talk about the espionage. What is the most valuable piece of, if you could have this blueprint, this one megabyte of information, do you want it from TSMC? Do you want it from NVIDIA? Do you want it from OpenAI?

00:02:41 Speaker_06
What is the first thing you would try to steal?

00:02:43 Speaker_02
I mean, I guess you have to stack every layer, right? The beautiful thing about AI is because it's growing so freaking fast, every layer is being stressed to some incredible degree.

00:02:54 Speaker_02
Of course, China has been hacking ASML for over five years and, you know, ASML is kind of like, oh, it's fine. The Dutch government's really pissed off, but it's fine, right? I think they already have those files, right, in my view.

00:03:05 Speaker_02
It's just a very difficult thing to build, right? I think the same applies for, like, fab recipes, right? They can poach Taiwanese nationals, it's not that difficult, right? Because TSMC employees do not make absurd amounts of money.

00:03:20 Speaker_02
You can just poach them and give them a much better life. And they have, right? A lot of SMIC's employees are TSMC, you know, Taiwanese nationals. right? A lot of the really good ones, high up ones, especially, right?

00:03:31 Speaker_02
And then you go up like the next layers of the stack. And it's like, I think, I think, yeah, of course, there's tons of model secrets.

00:03:37 Speaker_02
But then, like, you know, how many of those model secrets do you not already have, and you just haven't deployed or implemented, you know, organized, right?

00:03:45 Speaker_02
That's the one thing I would say: China just hasn't... they clearly are still not scale-pilled, in my view. So these people are-

00:03:54 Speaker_06
I don't know if you could, like, hire them, but they're probably worth a lot to you, right? Because you're building a fab that's worth tens of billions of dollars. And this talent is like, they know a lot of shit. How often do they get poached?

00:04:05 Speaker_06
Do they get poached by like foreign adversaries? Or do they just get poached by other companies within the same industry, but in the same country? And then yeah, well, like, why doesn't that like sort of drive up their wages?

00:04:16 Speaker_00
I think it's because it's very compartmentalized. And I think, like, back in the 2000s, before SMIC got big, it was actually much more kind of open, more flat.

00:04:27 Speaker_00
I think after that, there was like, after Liang Mong Song, and after all the Samsung issues, and after SMIC's rise, when you literally saw-

00:04:36 Speaker_02
I think you should tell that story, actually, the TSMC guy that went to Samsung and SMIC and all that. I think you should tell that story.

00:04:42 Speaker_00
There are two stories. There's a guy, he ran a semiconductor company in Taiwan called Worldwide Semiconductor. And this guy, Richard Chang, was very religious. I mean, all the TSMC people are pretty religious. But he, in particular, was very fervent.

00:04:54 Speaker_00
And he wanted to bring religion to China. So after he sold his company to TSMC, huge coup for TSMC, he worked there for about eight or nine months. And he was like, all right, I'll go to China.

00:05:04 Speaker_00
Because back then, the relations between China and Taiwan were much more different. And so he goes over there, Shanghai says, we'll give you a bunch of money. And then Richard Chang basically recruits half of like a whole bunch.

00:05:16 Speaker_00
It's like a conga line of like Taiwanese. Just like they get on the plane, they're flying over. And generally that's actually a lot of like acceleration points within China's semiconductor industry. It's from talent flowing from Taiwan.

00:05:29 Speaker_00
And then the second thing was Liang Mong Song. Liang Mong Song is a nut. And I've met him. I've not met him, I've met people who work with him. And they say he is a nut. He is probably on the spectrum and he does not care about people.

00:05:42 Speaker_00
He does not care about business. He does not care about anything. He wants to take it to the limit. The only thing, that's the only thing he cares about. He worked for TSMC, literal genius, 300 patents or whatever, 285, works all the way to like the top

00:05:56 Speaker_00
top tier, and then one day he decides he loses out on some sort of power game within TSMC and gets demoted. And he was like head of R&D, right, or something? He was like one of the top R&D. He was like second or third place.

00:06:09 Speaker_02
And it was for the head of R&D position, basically.

00:06:11 Speaker_00
Correct. For the head of R&D position. He's like, I can't deal with this. And he goes to Samsung and he steals a whole bunch of talent from TSMC. Literally, again, conga line: goes and just emails people saying, we will pay-

00:06:24 Speaker_00
At some point, some of these people were getting paid more than the Samsung chairman, which, not really comparable. But you know what I mean.

00:06:29 Speaker_02
Isn't the Samsung chairman usually part of the family that owns Samsung? Correctamundo. Okay, so it's kind of relevant.

00:06:36 Speaker_00
But he goes over there and he's like, well, we will make Samsung into this monster. We forget everything, forget all of the stuff you've been trying to do, like incremental, toss that out. We are going to the leading edge and that is it.

00:06:50 Speaker_00
They go to the leading edge, the guys like- They win Apple's business. They win Apple's business, they win it back from TSMC, or did they win it back from TSMC?

00:06:59 Speaker_00
They had a portion of the- They had a big portion of it. And then TSMC, Morris Chang, who at this time was running the company, is like, I'm not letting this happen. Because that guy,

00:07:10 Speaker_00
toxic to work for as well, but also goddamn brilliant, and also very good at motivating people. He's like, we will work literally day or night.

00:07:19 Speaker_00
Sets up what is called the Nightingale Army, where you have, they split a bunch of people and they say, you are working R&D night shift. There is no rest at the TSMC fab. You will go in. There was, as you go in, there'll be a day shift going out.

00:07:35 Speaker_00
They called it the, it's like you're burning your liver. Because in Taiwan, they said, like, if you get old, like, as you work, you're sacrificing your liver. They call it the liver buster.

00:07:45 Speaker_00
So they basically did this Nightingale Army thing for like a year, two years. They finished FinFET. They basically just blow away Samsung. And at the same time, they sue Liang Mong Song directly for stealing trade secrets. Samsung

00:08:03 Speaker_00
Basically separates from Liang Mong Song, and Liang Mong Song goes to SMIC.

00:08:06 Speaker_02
And so Samsung like at one point was better than TSMC. And then yeah, he goes to SMIC and SMIC is now better than, well not better, but they caught up rapidly as well after.

00:08:14 Speaker_00
Very rapid.

00:08:15 Speaker_02
That guy's a genius.

00:08:16 Speaker_00
That's the guy's a genius. I mean, I don't even know what to say about him. He's like 78 and he's like... beyond brilliant, does not care about people.

00:08:24 Speaker_06
Like, yeah, what does research to make the next process node look like? Is it just a matter of, like, 100 researchers go in, they do like the next n plus one, then the next morning, the next 100 researchers go in?

00:08:38 Speaker_00
It's experiments. They have a recipe, and what they do, every recipe, a TSMC recipe, is the culmination of a long, long years of like research, right? It's highly secret.

00:08:49 Speaker_00
And the idea is that you're what you're going to do is that you go, you look at one particular part of it and you say, experiment, run an experiment. Is it better? Is it not? Is it better or not? Kind of a thing like that.

00:08:59 Speaker_02
You're basically... it's a multivariable problem. Each, every single tool, sequentially, you're processing the whole thing. You turn knobs up and down on every single tool. You can increase the pressure on this one specific deposition tool, or- And what are you trying to measure? Is it, like, does it increase the yield, or, like, what is it that it's-

00:09:15 Speaker_02
No, it's yield, it's performance, it's power. It's not just a one, it's not just better or worse, right? It's a multivariable search space.

00:09:22 Speaker_06
And what do these people know such that they can do this? Is it that they understand the chemistry and physics?

00:09:26 Speaker_02
So it's a lot of intuition. But yeah, it's PhDs in chemistry, PhDs in physics, PhDs in EE, brilliant geniuses. And they don't even know about the end chip a lot of times.

00:09:37 Speaker_02
It's like, oh, I am an etch engineer, and all I focus on is how hydrogen fluoride etches this, right? And that's all I know.

00:09:46 Speaker_02
And if I do it at different pressures, if I do it at different temperatures, if I do it with a slightly different recipe of chemicals, it changes everything.
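A rough Python sketch of the kind of multivariable search being described here. The knob names (pressure, temperature, HF concentration), their ranges, the stand-in run_experiment function, and the scoring weights are all invented for illustration; a real recipe experiment measures yield, performance, and power on wafers rather than computing them from a formula.

import itertools

def run_experiment(pressure_mtorr, temp_c, hf_concentration):
    # Stand-in for a real fab experiment; a real process is measured, not modeled.
    y = 1.0 - 0.002 * abs(pressure_mtorr - 55) - 0.004 * abs(temp_c - 120)
    perf = 1.0 + 0.001 * (temp_c - 100) - 0.03 * abs(hf_concentration - 2.0)
    power = 1.0 + 0.002 * abs(pressure_mtorr - 50)
    return max(y, 0.0), perf, power

best = None
for p, t, c in itertools.product([40, 50, 55, 60], [100, 110, 120, 130], [1.5, 2.0, 2.5]):
    y, perf, power = run_experiment(p, t, c)
    # There is no single "better or worse": weight yield, performance, and power together.
    score = 0.6 * y + 0.3 * perf - 0.1 * power
    if best is None or score > best[0]:
        best = (score, (p, t, c))

print("best knob setting (pressure mTorr, temp C, HF conc):", best[1])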

00:09:52 Speaker_00
I remember, like, someone told me this when I was speaking with them, like, how did America lose the ability to do this sort of thing, like etch and hydrofluoric acid, all of that?

00:10:00 Speaker_00
And he told me, basically, it's very apprentice, master apprentice. Like, you know, in Star Wars, the Sith, there's only one, right? Master, apprentice. Master, apprentice.

00:10:11 Speaker_00
It used to be that there is a master, there's an apprentice, and they pass on this secret knowledge. This guy knows nothing but etch, nothing but etch. Over time, the apprentices stopped coming. And then in the end, the apprentices moved to Taiwan.

00:10:24 Speaker_00
And that's the same way it's still run. Like, you have NTU and NTHU, National Tsing Hua University. There's a bunch of masters, they teach apprentices, and they just pass this secret knowledge down.

00:10:36 Speaker_06
Who are the most AGI-pilled people in the supply chain? Is there anybody in the supply chain?

00:10:40 Speaker_02
I gotta have my phone call with Colette right now. Okay, go for it.

00:10:43 Speaker_06
Sorry, sorry. Can we mention on the podcast that NVIDIA is calling Dylan to update him on the earnings call?

00:10:50 Speaker_02
Well, it's not exactly that, but...

00:10:52 Speaker_06
Go for it, go for it. Dylan is back from his call with Jensen Huang. It was not with Jensen, Jesus. What did they tell you, huh? What did they tell you about next year's earnings?

00:11:02 Speaker_02
No, it was just color around like a Hopper, Blackwell, and like margins. It's like quite boring stuff. I'm sure, I'm sure. For most people, I think it's interesting though.

00:11:10 Speaker_06
I guess we could start talking about Nvidia, but you know what, before we do.

00:11:13 Speaker_02
No, no, no, I think we should go back to China.

00:11:14 Speaker_06
There's like a lot of points there. All right, we covered the chips themselves. How do they get like the 10 gigawatt data center up? What else do they need?

00:11:22 Speaker_02
I think there is a true like question of how decentralized do you go versus centralized, right? And if you look in the US, right, as far as like labs and such,

00:11:33 Speaker_02
The, you know, OpenAI, XAI, you know, Anthropic, and then Microsoft having their own effort, Anthropic having their own efforts despite having their partner, and then Meta.

00:11:43 Speaker_02
And, you know, you go down the list, it's like there's quite a decentralization. And then all the startups, like interesting startups that are out there doing stuff. There's quite a decentralization of efforts.

00:11:53 Speaker_02
Today in China, it is still quite decentralized, right? It's not like Alibaba, Baidu, you are the champions, right? You have like DeepSeek, like, who the hell are you? Does government even support you? Like doing amazing stuff, right?

00:12:05 Speaker_02
If you are Xi Jinping and scale-pilled, you must now centralize the compute resources, right? You have sanctions on how many NVIDIA GPUs you can get in now. They're still north of a million a year, even post-October last year sanctions.

00:12:20 Speaker_02
We still have more than a million H20s and other hopper GPUs getting in through other means, but legally like the H20s. And then on top of that, you have your domestic chips, right? But that's less than a million chips.

00:12:34 Speaker_02
So then when you look at it, it's like, well, we're still talking about a million chips. The scale of data centers people are training on today slash over the next six months is 100,000 GPUs, right? Open AI, XAI, right?

00:12:46 Speaker_02
These are quite well documented, and others. But in China, they have no individual system of that scale yet, right? So then the question is, how do we get there?

00:12:58 Speaker_02
No company has had the centralization push to have a cluster that large and train on it yet, at least publicly well-known. And the best models seem to be from a company that has got like 10,000 GPUs, right? Or 16,000 GPUs, right?

00:13:11 Speaker_02
So it's not quite as centralized as the US companies are, and the US companies are quite decentralized. If you're Xi Jinping and you're scale-pilled, do you just say, XYZ company is now in charge, and every GPU goes to one place.

00:13:26 Speaker_02
And then you don't have the same issues as the US, right? In the US, we have a big problem with being able to build big enough data centers, being able to build substations and transformers and all this that are large enough in a dense area.

00:13:37 Speaker_02
China has no issue with that at all because their supply chain adds as much power as like half of Europe every year, right? Or some absurd statistics, right? So they're building transformer substations or building new power plants constantly.

00:13:52 Speaker_02
So they have no problem with like getting power density. And you go look at like Bitcoin mining, right? Around the Three Gorges Dam, at one point, at least there was like 10 gigawatts of like Bitcoin mining estimated, right?

00:14:05 Speaker_02
Which, you know, we're talking about gigawatt data centers coming in, you know, '26, '27 in the US, right? You know, sort of, this is an absurd scale relatively, right?

00:14:17 Speaker_02
We don't have gigawatt data centers, you know, ready, but like China could just build it in six months, I think. around the Three Gorges Dam or many other places, right?

00:14:25 Speaker_02
Because they have the ability to do the substations, they have the power generation capabilities. Everything can be done like a flip of a switch, but they haven't done it yet. And then they can centralize the chips like crazy, right?

00:14:35 Speaker_02
Now, oh, a million chips that NVIDIA's shipping in Q3 and Q4, the H20. Let's just put them all in this one data center. They just haven't had that centralization effort.

00:14:45 Speaker_00
Well, you can argue that like the more you centralize it, the more you start building this monstrous thing within the industry, you start getting attention to it.

00:14:53 Speaker_00
And then suddenly, you know, lo and behold, you have a little bit of a little worm in there. Suddenly, while you're doing your big training run, oh, this GPU off. Oh, this GPU. Oh, no. Oh, no. Oh, no.

00:15:06 Speaker_01
I don't know if it's like that easy to hack.

00:15:08 Speaker_00
Is that a Chinese accent, by the way?

00:15:10 Speaker_03
Just to be clear, John is East Asian.

00:15:12 Speaker_02
He's Chinese. I am of East Asian descent. Half Taiwanese, half Chinese. Right, that is right. But like I think, I don't know if that's like as simple as that to like, because training systems are like fire, like they're water, is it water gated?

00:15:26 Speaker_02
Fire walled? What is it called? Not fire walled.

00:15:29 Speaker_06
There's a word for that where they're, like, they're what? Air gaps? Air-gapped. You're going through, like, all the, like, four elements. Firebenders, you know.

00:15:48 Speaker_02
We got the avatar, right? Like, you have to build the avatar. Okay. I think that's possible. The question is, like, does that slow down your research?

00:15:56 Speaker_02
Do you, like, crush, like, cracked people like DeepSeek, who are, like, clearly, like, not being, you know, influenced by the government, and put some, like, idiot, like, you know, idiot bureaucrat at the top.

00:16:08 Speaker_00
Suddenly, he's all thinking about, like, you know, all these politics, and he's trying to deal with all these different things. Suddenly, you have a single point of failure. And that's bad.

00:16:19 Speaker_02
But I mean, on the flip side, right? Like, there are obviously immense gains from being centralized because of the scaling laws, right? And then the flip side is compute efficiency is obviously going to be hurt, because

00:16:31 Speaker_02
you can't experiment and, like, have different people lead and try their efforts as much if you're more centralized. So it's like, there is a balancing act there.

00:16:40 Speaker_06
The fact that they can centralize, I didn't think about this, but that is actually, like... because, you know, even if America as a whole is getting millions of GPUs a year,

00:16:49 Speaker_06
The fact that any one company is only getting hundreds of thousands or less means that there's no one person who can do a training run as big in America as if, like, China as a whole decides to do one together.

00:17:01 Speaker_06
The 10 gigawatts you mentioned near the Three Gorges Dam Is it like literally like, how widespread is it? Like a state?

00:17:08 Speaker_02
Is it like one wire? Like how? I think like between not just the dam itself, but like also all of the coal, there's some nuclear reactors there, I believe as well.

00:17:18 Speaker_02
Between all of, and like renewables like solar and wind, between all of that in that region, there is an absurd amount of concentrated power that could be built.

00:17:27 Speaker_02
I don't think it's like, I'm not saying it's like one button, but it's like, hey, within X mile radius, right? is more of the correct way to frame it. And that's how the labs are also framing it, right? In the US.

00:17:41 Speaker_06
If they started right now, how long does it take to build the biggest AI data center in the world?

00:17:47 Speaker_02
Actually, I think the other thing is, could we notice it? I don't think so, because the amount of factories that are being spun up, the amount of other construction, manufacturing, et cetera, that's being built,

00:18:00 Speaker_02
A gigawatt is actually like a drop in the bucket, right? Like a gigawatt is not a lot of power. 10 gigawatts is not an absurd amount of power, right? It's okay, yes, it's like hundreds of thousands of homes, right?

00:18:09 Speaker_02
Yeah, millions of people, but it's like, you got 1.4 billion people, you got like most of the world's like extremely energy intensive, like refining and like, you know, rare earth refining and all these manufacturing industries are here.

00:18:23 Speaker_02
It would be very easy to hide it. Really? It would be very easy to just shut down. I think the largest aluminum mill in the world is there, and it's north of 5 gigawatts alone.

00:18:31 Speaker_02
It's like, oh, could we tell if they stopped making aluminum there and instead started making AIs there or making AI there? I don't know if we could tell, right?

00:18:41 Speaker_02
Because they could also just easily spawn 10 other aluminum mills, make up for the production, and be fine, right? So there's many ways for them to hide compute as well.

00:18:49 Speaker_06
To the extent that you could just take out a five gigawatt aluminum refining center and, like, build a giant data center there, then I guess the way to control Chinese AI has to be the chips, because, like, everything else they have. So, like, just walk me through: how many chips do they have now?

00:19:06 Speaker_06
How many will they have in the future? And how many is that in comparison to us and the rest of the world?

00:19:11 Speaker_02
Yeah, so in the world, I mean, the world we live in is they are not restricted at all in like the physical infrastructure side of things in terms of power, data centers, et cetera, because their supply chain is built for that, right?

00:19:22 Speaker_02
And it's pretty easy to pivot that. Whereas the U.S. adds so little power each year and Europe loses power every year, the Western sort of industry for power is non-existent in comparison, right?

00:19:34 Speaker_02
But on the flip side is, quote unquote, Western, including Taiwan, chip manufacturing is way, way, way, way, way larger than China.

00:19:41 Speaker_02
especially on leading edge where China theoretically has, depending on the way you look at it, either zero or a very small percentage share, right? And so there you have equipment, wafer manufacturing, and then you have advanced packaging capacity.

00:19:58 Speaker_02
And where can the US control China, right? So advanced packaging capacity is kind of shot, because the largest advanced packaging company in the world was Hong Kong headquartered.

00:20:07 Speaker_02
They just moved to Singapore, but that's effectively in a realm where the US can't sanction it, right? A majority of these other companies are in similar places, right? So advanced packaging capacity is very hard, right?

00:20:20 Speaker_02
Advanced packaging is useful for stacking memory, stacking chips on CoWoS, right? Things like that. And then the step down is wafer fabrication. There is immense capability to restrict China there.

00:20:32 Speaker_02
And despite the US making some sanctions, China in the most recent quarters was like 48% of ASML's revenue, right? So, you know, and like 45% of like applied materials and you just go down the list.

00:20:44 Speaker_02
So it's like, obviously it's not being controlled that effectively, but it could be on the equipment side of things. The chip side of things is actually being controlled Quite effectively, I think, right?

00:20:54 Speaker_02
Yes, there is shipping GPUs through Singapore and Malaysia and other countries in Asia to China, but the amount you can smuggle is quite small.

00:21:03 Speaker_02
And then the sanctions have limited the chip performance to a point where it's like, this is actually kind of fair, but there is a problem with how everything is restricted, right?

00:21:14 Speaker_02
Because you want to be able to restrict China from building their own domestic chip manufacturing industry that is better than what we ship them. You want to prevent them from having chips that are better than what we have.

00:21:25 Speaker_02
And then you want to prevent them from having AIs better. The ultimate goal being, you know, and if you read the restrictions, like very clear, it's about AI. Yeah.

00:21:33 Speaker_02
Even in 2022, which is amazing, like at least the Commerce Department was kind of AI-pilled, it was like, you want to restrict them to having AIs worse than ours, right?

00:21:40 Speaker_02
So starting on the right end, it's like, OK, well, if you want to restrict them from having better AIs than us, you have to restrict chips. OK, if you want to restrict them from having chips, you have to let them have at least some level of chip

00:21:51 Speaker_02
that is better than what they can build internally. But currently, the restrictions are flipped the other way, right? They can build better chips in China than the chips we allow NVIDIA or AMD or Intel to sell to China.

00:22:07 Speaker_02
And so there's sort of a problem there in terms of the equipment that is shipped can be used to build chips that are better than what the Western companies can actually ship them.

00:22:15 Speaker_06
John, Dylan seems to think the export controls are kind of a failure. Do you agree with him, or?

00:22:20 Speaker_00
That is a very interesting question, because I think it's like... Why, thank you.

00:22:24 Speaker_04
Like, what do you... Dwarkesh, you're so good.

00:22:28 Speaker_00
Yeah, Dwarkesh, you're the best. I think failure is a tough word to say, because I think it's like, what are we trying to achieve, right? Like, they're talking about AI, right? Yeah. When you do sanctions like that,

00:22:43 Speaker_00
It's you need like such deep knowledge of the technologies.

00:22:46 Speaker_02
You know, just taking lithography, right? If your goal is to restrict China from building chips and you just like boil it down to like, hey, lithography is 30 percent of making a chip. So or 25 percent. Cool. Let's let's sanction lithography.

00:22:58 Speaker_02
OK, where do we draw the line? OK, let me ask. Let me ask. Let me figure out what where the line is. And if I'm a bureaucrat, if I'm a lawyer at the Commerce Department or what have you.

00:23:07 Speaker_02
Well, obviously I'm going to go talk to ASML and ASML is going to tell me this is the line because they know like, hey, well, you know, this, this, this is, you know, there's like some blending over.

00:23:15 Speaker_02
There's like, they're, they're like looking at like what's going to cost us the most money. Right. And then they constantly say like, if you restrict us, then China will have their own industry. Right.

00:23:23 Speaker_02
And, and the way I like to look at it is like chip manufacturing is like, like 3D chess or like, you know, a massive jigsaw puzzle in that if you take away one piece, China can be like, oh, yeah, that's the piece, let's put it in, right?

00:23:37 Speaker_02
And currently, this export restrictions, year by year by year, they keep updating them ever since like 2018 or so, 19, right, when Trump started, and now Biden's, you know, accelerated them.

00:23:47 Speaker_02
They've been like, they haven't just like, take a bat to the table and like break it, right? Like, it's like, let's take one jigsaw puzzle out, walk away, oh, shit, let's take two more out. oh shit, right?

00:23:58 Speaker_02
Like, you know, it's like instead if they like, you either have to go kind of like full back to the fricking like table slash wall or chill out, right? Like, and like, you know, let them do whatever they want.

00:24:10 Speaker_02
Cause the alternative is everything is focused on this thing and they make that. And then now when you take out another two pieces, like, well, I have my domestic industry for this. I can also now make a domestic industry for these.

00:24:21 Speaker_02
Like you go deeper into the tech tree or what have you.

00:24:23 Speaker_00
It's an art, right? In the sense that there are technologies out there that can compensate. Like, the belief that lithography is the linchpin within the system, it's not exactly true, right?

00:24:37 Speaker_00
At some point, if you keep pulling, keep pulling a thread, other things will start developing to kind of close that loop. And like, I think it's, it is, that's why I say it's an art, right?

00:24:47 Speaker_00
I don't think you can stop the Chinese semiconductor industry from progressing. I think that's basically impossible. So the question is, the Chinese government believes in the primacy of semiconductor manufacturing.

00:25:02 Speaker_00
They believed it for a long time, but now they really believe it, right?

00:25:05 Speaker_02
To some extent, the sanctions have made China believe in the importance of the semiconductor industry more than anything else.

00:25:13 Speaker_06
So from an AI perspective, what's the point of export controls then? Because even if like, if they're going to be able to get these, like if you're like concerned about AI, and they're going to be able to build- Well, they're not centralized though.

00:25:22 Speaker_02
Right. So that's the big question is, are they centralized? And then also, you know, there's the belief.

00:25:26 Speaker_02
I don't, I don't really, I'm not sure if I really believe it, but like, you know, prior podcasts, there have been people who talked about nationalization, right. In which case, okay, now you're talking about. Why are you referring to this ambiguously?

00:25:37 Speaker_02
Well, I think there's a couple. My opponent. No, but I think there have been a couple where people have talked about the nationalization, right? But like if you have, you know nationalization then all of a sudden you aggregate all the flops.

00:25:50 Speaker_02
It's like, now there's no fucking way, right? Yeah, China can be centralized enough to compete with each individual US lab. They could have just as many flops in '25 and '26 if they decided they were scale-pilled, right, just from foreign chips.

00:26:03 Speaker_06
for an individual model. And like in 2026, they can train a 1E27, like they can release a 1E27 model by 2026.

00:26:10 Speaker_02
Yeah, and then a 28 model, you know, 1E28 model in the works, right? Like, they totally could just with foreign chip supply, right? Just a question of centralization.

00:26:18 Speaker_02
Then the question is like, do you have as much innovation and compute efficiency wins or what have you get developed when you centralize? Or does like Anthropic and OpenAI and XAI and Google like all develop things and then like,

00:26:30 Speaker_02
secrets kind of shift a little bit in between each other and all that, like, you know, you end up with that being a better outcome in the long term, versus like the nationalization of the US, right?

00:26:39 Speaker_02
If that's possible, and, like, or, you know, and what happens there. But China could absolutely have it in '26, '27 if they just have the desire to, and that's just from foreign chips, right? And then domestic chips are the other question, right?

00:26:53 Speaker_02
600,000 of the Ascend 910B, which is roughly like 400 teraflops or so. So if they put them all in one cluster, they could have a bigger model than any of the labs next year, right? I have no clue where all the Ascend 910Bs are going, right.
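Back-of-the-envelope math behind that claim, as a small Python sketch: total training compute is chips times per-chip throughput times utilization times time. The 600,000 chips and ~400 TFLOPS figures are the ones quoted above; the utilization and run length are assumptions chosen only to illustrate the order of magnitude.

# Rough training-compute estimate for a hypothetical centralized Ascend 910B cluster.
chips = 600_000
peak_flops_per_chip = 400e12   # ~400 TFLOPS per chip, as quoted
utilization = 0.35             # assumed model FLOPs utilization
run_days = 90                  # assumed length of the training run

total_flops = chips * peak_flops_per_chip * utilization * run_days * 24 * 3600
print(f"~{total_flops:.1e} FLOPs")  # on the order of 6e26, i.e. approaching 1e27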

00:27:11 Speaker_02
But I mean, well, there's like rumors that some are being divvied up between, like, the majors: Alibaba, ByteDance, Baidu, etc. And next year, more than a million.

00:27:19 Speaker_02
And it's possible that they actually do have, you know, 1E30 before the US, because the data center is not as big of an issue. A 10 gigawatt data center, I don't think anyone is even trying to build that today in the US, even out to '27, '28.

00:27:34 Speaker_02
Really, they're focusing on linking many data centers together. So there's a possibility that, hey, come 2028, 2029, China can have more flops delivered to a single model. even ignoring sort of, even once the centralization question is solved, right?

00:27:49 Speaker_02
Because that's clearly not happening today for either party. And I would bet if AI is like as important as you and I believe that they will centralize sooner than the West does. So there is a possibility, right? Yeah.

00:28:04 Speaker_06
It seems like a big question then is how much could SMIC increase production, like, increase the amount of wafers. How many more wafers could they make? And how many of those wafers could be dedicated to the 910?

00:28:15 Speaker_06
Because I assume there's other things they want to do with these semiconductors.

00:28:18 Speaker_02
So there's like two parts there, right? Like, the way the US has sanctioned SMIC is really kind of stupid, in that they've sanctioned a specific site rather than the entire company.

00:28:29 Speaker_02
And so therefore, right, SMIC is still buying a ton of tools that can be used for their 7 nanometer and their, call it 5.5 nanometer process or 6 nanometer process for the 910C which releases later this year, right?

00:28:42 Speaker_02
They can build as much of that as long as it's not in Shanghai. And Shanghai has anywhere from 45 to 50 high-end immersion lithography tools is what's believed by intelligence as well as many other folks.

00:28:58 Speaker_02
That roughly gives them as much as 60,000 wafers a month of seven nanometer, but they also make their 14 nanometer in that fab, right?

00:29:07 Speaker_02
And so the belief is that they actually only have about 25 to 35,000 of seven nanometer capacity wafers a month, right?

00:29:15 Speaker_02
Doing the math, right, of the chip die size and all these things, because Huawei also uses chiplets and stuff so they can get away with using less leading edge wafers, but then their yields are bad.

00:29:25 Speaker_02
You can roughly say something like 50 to 80 good chips per wafer with their bad yield, right? With their bad yield. Why do they have bad yield?

00:29:35 Speaker_00
because it's hard, right? You know, you're- Even if it was like, you know, everyone knows the number, right? So like if it's a thousand steps, even if you're 99% for each, like 98 or 99%, like in the end, you'll still get a 40% yield overall.

00:29:48 Speaker_02
Interesting. I think it's like, even if it's, like, six sigma of, like, perfection, and you have your 10,000-plus steps, you end up with, like, yield is still dog shit by the end, right?

00:30:00 Speaker_02
Like, yeah.

00:30:01 Speaker_00
That is a scientific measure, dog shit percent.

00:30:06 Speaker_02
Yeah, yeah, as a multiplicative effect, right? So yields are bad because they have hands tied behind their back, right? They are not getting to use EUV, whereas on 7nm Intel never used EUV, but TSMC eventually started using EUV.
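The yield numbers being tossed around follow from simple compounding: overall yield is the per-step yield raised to the number of steps. A quick Python check (the step counts are the ones mentioned in the conversation; the per-step yields are illustrative) shows the roughly 40% figure implies per-step yields much closer to 99.9% than 99%.

# Compounded yield across sequential process steps: overall = per_step ** n_steps.
for per_step, n_steps in [(0.99, 1_000), (0.999, 1_000), (0.9999, 10_000)]:
    overall = per_step ** n_steps
    print(f"{per_step:.2%} per step over {n_steps:>6,} steps -> {overall:.2%} overall")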

00:30:23 Speaker_02
Initially they used DUV, right? Doesn't that mean the export controls succeeded?

00:30:26 Speaker_06
Because they have bad yields, because they have to use, like- Succeeded? Again, they still are determined.

00:30:33 Speaker_00
Success would mean they stop. They're not stopping.

00:30:36 Speaker_02
Going back to the yield question, right? Like, oh, theoretically 60,000 wafers a month times 50 to 100 dies per wafer with yielded dies. Holy shit, that's millions of GPUs, right? Now, what are they doing with most of their wafers?
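The arithmetic behind "that's millions of GPUs," as a small Python sketch using the wafer and per-wafer die figures quoted above. How many of those dies become finished accelerators (chiplets per package, binning, allocation to phones versus AI) is not captured by these numbers.

def dies_per_month(wafers, dies_low, dies_high):
    # Monthly good-die output implied by wafers/month x good dies/wafer.
    return wafers * dies_low, wafers * dies_high

print(dies_per_month(60_000, 50, 100))  # theoretical full capacity: 3.0M to 6.0M dies/month
print(dies_per_month(25_000, 50, 80))   # low end of the 7nm estimate: 1.25M to 2.0M dies/month
print(dies_per_month(35_000, 50, 80))   # high end of the 7nm estimate: 1.75M to 2.8M dies/month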

00:30:49 Speaker_02
They still have not become scale-pilled, so they're still throwing them at, like, let's make 200 million Huawei phones. Right, like, oh, OK, cool. I don't care, right.

00:30:56 Speaker_02
Like, as the West, you don't care as much, even though, like, Western companies will get screwed, like Qualcomm and, you know, MediaTek, Taiwanese companies. So obviously there's that. And the same applies to the US.

00:31:07 Speaker_02
But when you when you flip to like Sorry, I don't fucking know what I was gonna say.

00:31:16 Speaker_01
Nailed it!

00:31:18 Speaker_06
We're keeping this in.

00:31:19 Speaker_01
That's fine, that's fine, that's fine.

00:31:20 Speaker_06
Hey everybody, I am super excited to introduce our new sponsors, Jane Street. They're one of the world's most successful trading firms.

00:31:28 Speaker_06
I have a bunch of friends who either work there now or have worked there in the past, and I have very good things to say about those friends, and those friends have very good things to say about Jane Street.

00:31:38 Speaker_06
Jane Street is currently looking to hire its next generation of leaders. As I'm sure you've noticed, recent developments in AI have totally changed what's possible in trading.

00:31:49 Speaker_06
They've noticed this too, and they've stacked a scrappy, chaotic new team with tens of millions of dollars of GPUs to discover signal that nobody else in the world can find. Most new hires have no background in trading or finance.

00:32:03 Speaker_06
Instead, they come from math, CS, physics, and other technical fields. Of particular relevance to this episode, their deep learning team is hiring CUDA programmers, FPGA programmers, and ML researchers.

00:32:17 Speaker_06
Go to janestreet.com slash dwarkesh to learn more. And now back to Dylan and John. 2026, if they're centralized, they can have as big training runs as any one US company.

00:32:32 Speaker_02
Oh, the reason why I was bringing up Shanghai, they're building seven nanometer capacity in Beijing, they're building five nanometer capacity in Beijing, but the US government doesn't care. And they're importing dozens of tools into Beijing.

00:32:42 Speaker_02
And they're saying to the US government and ASML, this is for 28 nanometer, obviously, right? That's not banned. And then obviously, you know, in the background, you know, we're making five nanometer here, right?

00:32:51 Speaker_06
Are they doing it because they believe in AI or because they want to make Huawei phones?

00:32:55 Speaker_02
Huawei was the largest TSMC customer for a few quarters, actually, before they got sanctioned. Huawei makes most of the telecom equipment in the world, right? Phones, of course, modems, but of course, accelerators, networking equipment.

00:33:08 Speaker_02
You go down to the whole video surveillance chips, right? You kind of go through the whole gamut. A lot of that could use seven and five nanometer. Do you think the dominance of Huawei is actually a bad thing for the rest of the Chinese tech industry?

00:33:21 Speaker_02
I think Huawei is so fucking cracked. that like, it's hard to say that, right? Like, Huawei out-competes Western firms regularly with two hands tied behind their back.

00:33:32 Speaker_02
Like, you know, like, what the hell is Nokia and like Sony Ericsson, like trash, right?

00:33:38 Speaker_02
Like compared to Huawei and Huawei is not allowed to ship sell to like European companies or American companies and they don't have TSMC and yet they still destroy them, right? And same applies to like the new phone, right?

00:33:51 Speaker_02
It's like, oh, it's as good as, like, a year-old Qualcomm phone on a process node that's equivalent to, like, four years old, right? Or three years old. So it's like, wait, so they actually out-engineered us with a worse process node.

00:34:02 Speaker_02
You know, so it's like, oh, wow, okay. Like, you know, Huawei is like crazy cracked.

00:34:07 Speaker_00
Where do you think that culture comes from?

00:34:09 Speaker_02
The military, because it's the PLA.

00:34:11 Speaker_00
It is. It is generally seen as an arm of the PLA. But like, How do you square that with the fact that sometimes the PLA seems to mess stuff up? Oh, like filling water on rockets? I don't know if that was true. I'm not denying it.

00:34:27 Speaker_02
There is like that like crazy conspiracy. You don't know what the hell to believe in China, especially as a not Chinese person.

00:34:35 Speaker_00
Even Chinese people don't know what's going on in China.

00:34:37 Speaker_02
There's all sorts of stuff like, oh, they're filling water in their rockets. Clearly, they're incompetent.

00:34:42 Speaker_02
It's like, look, if I'm the Chinese military, I want the Western world to believe I'm completely incompetent, because one day, I can just destroy the fuck out of everything, right, with all these hypersonic missiles and all this shit, right, like drones.

00:34:54 Speaker_02
Like, no, no, no, no, we're filling water in our missiles. These are all fake.

00:34:57 Speaker_02
We don't actually have 100,000 missiles that we manufacture in a facility that's super hyper-advanced and Raytheon is stupid as shit because they can't make missiles nearly as fast. right?

00:35:07 Speaker_02
Like, I think that's also, like, the flip side: how much false propaganda is there, right? Because there's a lot of, like, no, SMIC could never, SMIC could never, they don't have the best tools, blah, blah, blah. And then it's like,

00:35:18 Speaker_02
Motherfucker, they just shipped 60 million phones last year with this chip that performs only one year worse than like what Qualcomm has. It's like, proof is in the pudding, right? Like, you know, there's a lot of like cope, if you will.

00:35:30 Speaker_00
I just wonder where it comes from. I do really do just wonder where that culture comes from. Like there's something crazy about them where they're kind of like everything they touch, they seem to succeed in. And like, I kind of wonder why.

00:35:40 Speaker_00
They're making cars. I wonder if they're good at that too. I think, like, if we kind of imagine, like, historically, do you think they're getting something from somewhere? What do you mean? Espionage, you mean? Yeah.

00:35:53 Speaker_00
Well, obviously. Like, East Germany and the Soviet industry was basically, it was like a conveyor belt of, like, secrets coming in, and they just used that to run everything. But the Soviets were never good at it. They could never mass-produce it.

00:36:03 Speaker_06
How would espionage explain how they can make things with different processes?

00:36:07 Speaker_02
I don't think it's just espionage.

00:36:09 Speaker_06
I think they're just, like, literally cracked.

00:36:10 Speaker_02
It has to be something else. They have the espionage. without a doubt, right? Like, ASML has been known to have been hacked a dozen times, right? Or at least a few times, right?

00:36:18 Speaker_02
And they've been known to have people sued who made it to China with a bunch of documents, right? Not just ASML, but every fucking company in the supply chain. Cisco Code was literally in, like, early Huawei, like, routers and stuff, right?

00:36:29 Speaker_02
Like, you go down the list, it's like, everything is, but then it's like, no, architecturally, the Ascend 910B looks nothing like a GPU. It looks nothing like a TPU. It is like its own independent thing.

00:36:39 Speaker_02
Sure, they probably learned some things from some places, but, like, It is just like they're good at engineering. It's 996.

00:36:44 Speaker_00
Like wherever that culture comes from, they they do good.

00:36:47 Speaker_06
Yeah, they do very good. Well, another thing I'm curious about is like, yeah, where their culture comes from, but like, how does it stay there?

00:36:53 Speaker_06
Because with American firms or any other firm, you can have a company that's very good, but over time, it gets worse, right? Like Intel or many others. I guess Huawei just isn't that old of a company.

00:37:03 Speaker_06
But like, it's hard to like be a big company and like stay good. That is true.

00:37:08 Speaker_00
I think it's like, a word that I hear a lot with regards to Huawei is struggle, right? And China has a culture of, like, the Communist Party is really big on struggle.

00:37:19 Speaker_00
I think like Huawei in the sense they sort of brought that culture into the way they do it. Like you said before, right? They go crazy because they think that in five years that they're going to fight the United States.

00:37:32 Speaker_00
And literally everything they do, every second is like their country depends on it, right?

00:37:37 Speaker_02
It's like the Andy Grove-ian mindset, right? Like, shout out to the based Intel. But, like, only the paranoid survive, right? Like, paranoid Western companies do well. Why did Google really screw the pooch on a lot of stuff?

00:37:49 Speaker_02
And then why are they like resurging kind of now is because they got paranoid as hell right, but they weren't paranoid for a while

00:37:55 Speaker_02
If Huawei is just constantly paranoid about, like, the external world, and like, oh fuck, we're gonna die, oh fuck, you know, they're gonna beat us, our country depends on it, we're gonna get the best people from the entire country that are, like, you know, the best at whatever they do, and tell them, if you do not succeed-

00:38:12 Speaker_00
You will die. Not you will die. Your family will die. Your family will be enslaved and everything. It will be terrible. By the evil Western pigs, right? Exactly. Capitalists, not capitalists. They don't believe in capitalism. They don't say that anymore.

00:38:23 Speaker_00
But it's more like everyone is against China. China is being defiled. And they're saying, that is all on you, bro.

00:38:33 Speaker_02
If you can't do that, then you... If you can't get that fucking radio to be slightly less noisy and transmit 5% more data, we are fucked.

00:38:42 Speaker_00
It's like the Summer Palace fire all over again. The British are coming, and they will steal all the trinkets and everything. That's on you.

00:38:50 Speaker_06
Why isn't there more vertical integration in the semiconductor industry? Why are there like, this subcomponent requires this other subcomponent from this other company, which requires a subcomponent from another company.

00:38:59 Speaker_06
Why is more of it not done in-house?

00:39:01 Speaker_02
The way to look at it today is it's super, super stratified and every industry has anywhere from one to three competitors.

00:39:07 Speaker_02
And pretty much the most competitive it gets is like 70% share, 25% share, 5% share in any layer of like manufacturing chips, anything, anything, chemicals, different types of chips. But it used to be vertically integrated.

00:39:21 Speaker_00
Well, at the very beginning it was integrated, right? Where did that stop? what happened was, you know, the funniest thing was like, you know, you had companies that used to do it all in the one.

00:39:31 Speaker_00
And then suddenly, sometimes a guy would be like, I hate this. I think I know, I know how to do better. Spins off, does his own thing, starts his company, goes back to his old company, says, I can sell you a product that's better, right?

00:39:42 Speaker_00
And that's the beginning of what we call the semiconductor manufacturing and equipment industry. Like basically- Like in the 70s, right? Like everyone made their own equipment. 60s and 70s, like you spin off all these people.

00:39:50 Speaker_00
And then what happened was that the companies that accepted you know, these outside products and equipment got better stuff. They did better.

00:39:58 Speaker_00
Like, you can talk about a whole bunch, like, there are companies that were totally vertically integrated in semiconductor manufacturing for decades, and they are, they're still good, but they're nowhere near competitive.

00:40:07 Speaker_06
One thing I'm confused about is, like, the actual foundries themselves, there's, like, fewer and fewer of them every year, right?

00:40:13 Speaker_06
So, there's, like, maybe more companies overall, but, like, the final people, like, who make the wafers, there's less and less.

00:40:21 Speaker_06
And then it's interesting in a way it's similar to like the AI foundation models where you need to use like the revenues from like a previous model in order or like your market share to like fund the next round of ever more expensive development.

00:40:37 Speaker_00
When TSMC launched the foundry industry, right, and when they started, there was a whole wave of like Asian companies that funded semiconductor foundries of their own.

00:40:46 Speaker_00
You had Malaysia with Silterra, you have Singapore with Chartered, you had, there was one, there's Worldwide, there's Worldwide Semiconductor where I talked about earlier, there's one from Hong Kong. Bunch in Japan.

00:40:56 Speaker_00
Bunch in Japan, like they all sort of did this thing, right?

00:40:59 Speaker_00
And I think the thing was that when you're going to leading edge, when the thing is that, like, it got harder and harder, which means that you had to aggregate more demand from all the customers to fund the next node, right?

00:41:10 Speaker_00
So technically, in the sense that what it's kind of doing is aggregating all this money, all this profit, to kind of fund this next node, to the point where now, like, there's no room in the market for an N2 or N3. Like, technically, you could argue that

00:41:25 Speaker_00
Economically, you can make an argument that like N2 is a monstrosity that doesn't make sense. Economically, it should not exist in some ways without the immense single concentrated spend of like five players in the market.

00:41:39 Speaker_02
I'm sorry to, like, completely derail you, but there's this video where it's like, there's an unholy concoction of meat slurry. Yes!

00:41:47 Speaker_04
What?

00:41:49 Speaker_02
So there's like a video that's like, ham is disgusting, it's an unholy concoction of, like, meat with no bones or collagen. I'm like, I don't know, the way he was describing it, two nanometer is kind of like that, right?

00:42:00 Speaker_00
It's like the guy who pumps his right arm so much and he's like not super muscular

00:42:05 Speaker_00
The human body was not meant to be so muscular, like, what's the point? Like, why is two nanometer not justified? I'm not saying N2, like, N2 specifically, but N2 as a concept, the next node, should technically, like, right now-

00:42:20 Speaker_00
There will come a point where economically, the next node will not be possible, like at all, right?

00:42:25 Speaker_02
Unless more technologies spawn, like, AI now makes one nanometer, or whatever, A16, viable, right? There was a long period of time, so like, right before AI spawned... It makes it viable, as in, like, it makes it worth it?

00:42:37 Speaker_02
So every two years, you get a shrink, right? Like clockwork, Moore's law. And then, five nanometer happened. It took three years, holy shit. And then 3nm happened, it took three, or no, sorry, is it 3nm or 5nm? It took three years.

00:42:51 Speaker_02
Holy shit, like, is Moore's Law dead, right? Like, because TSMC didn't, and then what did Apple do? Even on the third year of, sorry, when 3nm finally launched, they still only, Apple only moved half of the iPhone volume to 3nm.

00:43:05 Speaker_02
So this is like, now they did a fourth year of 5nm for a big chunk of iPhones. And it's like, oh, is the mobile industry petering out? Then you look at 2 nanometer and it's going to be a similar, very difficult thing for the industry to pay for this.

00:43:20 Speaker_02
Apple, of course, because they get to make the phone, they have so much profit that they can funnel into more and more expensive chips. But finally, that was running out. How economically viable is 2 nanometer just for one player?

00:43:33 Speaker_02
TSMC, you know, ignore Intel, ignore Samsung, just because Samsung is paying for it with memory, not with their actual profit.

00:43:40 Speaker_02
And then Intel is paying for it from their former CPU monopoly, private equity money, and now private equity money and debt and subsidies, people's salaries.

00:43:51 Speaker_00
Yeah.

00:43:52 Speaker_02
But like, anyways, like, you know, there's there's a strong argument that like,

00:43:56 Speaker_02
funding the next node would not be economically viable anymore, if it weren't for AI taking off, right, and then generating all this humongous demand for the most leading edge chip.

00:44:06 Speaker_06
So how much? How big is the difference between seven to five to three nanometer? Like, is it like, does it matter? Is it a huge deal in terms of like, who can build the biggest cluster?

00:44:14 Speaker_02
So there's this simplistic argument that like, oh, moving a process node only saves me X percent in power, right? And that has been petering out, right? You know, when you move from like 90 nanometer to 80 something, right? Or 70 something, right?

00:44:28 Speaker_02
It was like, oh, you got 2X, right? Dennard scaling was still intact. But now when you move from 5 nanometer to 3 nanometer, first of all, you don't double density. SRAM doesn't scale at all. Logic does scale, but it's like 30%.

00:44:39 Speaker_02
So all in all, you only save like 20% in power per transistor.

00:44:43 Speaker_02
But because of data locality and movement of data, you actually get a much larger improvement in power efficiency by moving to the next node than just the individual transistor's power efficiency benefit.

00:44:54 Speaker_02
Because, for example, you're multiplying a matrix that's 8,000 by 8,000 by 8,000. And then you can't fit that all on one chip. But if you could fit more and more, you have to move off chip less. You have to go to memory less, et cetera.

00:45:06 Speaker_02
So the data locality helps a lot, too. But AI really, really, really wants new process nodes. Because, A, power used is a lot less now. Higher density, higher performance, of course.
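
To make the data-locality point concrete, here is a minimal back-of-envelope sketch. The 8,192-dimension matmul matches the example above, but the on-chip SRAM sizes and the simple tiling model are illustrative assumptions, not figures from the conversation:

```python
# Illustrative sketch: more on-chip memory means fewer off-chip re-fetches
# for a large matmul, so the data-movement advantage of a new node can
# exceed the per-transistor power saving. Numbers are assumptions.
N = 8192                       # matrix dimension, as in the 8k x 8k x 8k example
flops = 2 * N ** 3             # multiply-adds for C = A @ B
bytes_per_elem = 2             # fp16
ideal_bytes = 3 * N * N * bytes_per_elem   # read A, B and write C exactly once

def offchip_traffic(sram_bytes: float) -> float:
    """Rough blocked-matmul model: with square tiles held on chip,
    operands get streamed from off-chip roughly N/tile times."""
    tile = int((sram_bytes / (3 * bytes_per_elem)) ** 0.5)
    refetches = max(1, N // max(tile, 1))
    return ideal_bytes * refetches

for sram_mb in (25, 50, 100, 200):          # hypothetical on-chip SRAM sizes
    traffic = offchip_traffic(sram_mb * 1e6)
    print(f"{sram_mb:>4} MB on-chip -> ~{traffic / 1e9:5.1f} GB off-chip, "
          f"{flops / traffic:6.0f} FLOPs per byte moved")
```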

00:45:19 Speaker_02
But the big deal is, well, if I have a gigawatt data center, I can now, how much more flops can I get? If I have two gigawatt data center, how much more flops can I get? If I have a 10 gigawatt data center, how much more flops can I get, right?

00:45:28 Speaker_02
And you look at the scaling, it's like, well, no, everyone needs to go to the most recent process node as soon as possible.

00:45:34 Speaker_06
I wanna ask the normie question for everybody. I want to phrase it that way. Okay, I want to ask a question that's like... Fuck it, normie.

00:45:43 Speaker_02
Not for you nerds. I think John and I could communicate to the point where you wouldn't even know what the fuck we're talking about.

00:45:52 Speaker_06
Okay, suppose Taiwan is invaded or Taiwan has an earthquake, nothing is shipped out of Taiwan from now on. What happens next? The rest of the world, how would it feel its impact? A day in, a weekend, a month in, a year in? I mean, it's a terrible thing.

00:46:08 Speaker_00
It's a terrible thing to talk about. I think it's like, Can you just say it's all terrible? Everything's terrible? Because it's not just like leading edge. Leading edge, people will focus on leading edge.

00:46:18 Speaker_00
But there's a lot of trailing edge stuff that people depend on every day. I mean, we all worry about AI. The reality is you're not going to get your fridge. You're not going to get your cars. You're not going to get everything. It's terrible.

00:46:28 Speaker_00
And then there's the human part of it, right? It's all terrible. It's depressing. And I live there.

00:46:34 Speaker_02
I think day one, the market crashes a lot, right? You gotta think, the biggest companies, the Magnificent Seven or whatever that gets called, are like 60, 75% of the S&P 500, and their entire business relies on chips, right?

00:46:49 Speaker_02
Google, Microsoft, Apple, Nvidia, Meta, you go down the list, right? They all entirely rely on it. And you would have a tech reset, like an extremely insane tech reset, by the way, right?

00:47:03 Speaker_02
So the market would crash a day in, a couple of weeks in, right? People are preparing now. People are like, oh shit, let's start building fabs. Fuck all the environmental stuff. War's probably happening.

00:47:14 Speaker_02
But the supply chain is trying to figure out what the hell to do to refix it. But six months in? Disapply of chips for making new cars gone or sequestered to make military shit, right?

00:47:26 Speaker_02
You can no longer make cars. And we don't even know how to make non-semiconductor cars anymore, right? Like, a car is this unholy concoction with all these chips, right? It's like 40% chips now.

00:47:37 Speaker_02
Yeah, there are chips in the tires. There's like 2,000-plus chips, right? Yeah, every Tesla door handle has like four chips in it. It's like, what the fuck, why?

00:47:46 Speaker_02
But it's like shitty microcontrollers and stuff. There's like 2,000-plus chips even in an ICE vehicle, an internal combustion engine vehicle, right? And every engine has dozens of chips, right?

00:47:58 Speaker_02
Anyways, this all shuts down. Not all of the production, because there's some in Europe, there's some in the US, there's some in Japan. They're gonna bring in a guy to work on Saturday until 4. Yeah, I mean, yeah.

00:48:10 Speaker_02
So you have like, TSMC always builds new fabs. That old fab, they tweak production up a little bit more and more, and new designs move to the next, next, next node, and old stuff fills in the old nodes, right?

00:48:21 Speaker_02
So, ever since TSMC has been the most important player, and not just TSMC, there's UMC there, there's PSMC there, there's a number of other companies there, Taiwan's share of total manufacturing has grown every single process node.

00:48:33 Speaker_02
So at 130 nanometer, there's a lot made elsewhere, including many chips from Texas Instruments or Analog Devices or NXP, all these companies. It's not 100% manufactured in Taiwan by either TSMC or UMC or whatever.

00:48:47 Speaker_02
But then you step forward and forward and forward, to like 28 nanometer. 80% of the world's production of 28 nanometer is in Taiwan. Oh, fuck, right? And everything is on 28 nanometer. What's made on 28 nanometer today?

00:48:59 Speaker_02
Tons of microcontrollers and stuff, but also every display driver IC. Like, cool, even if I can make my Mac chip, I can't make the chip that drives the display.

00:49:07 Speaker_02
Like, you know, you just go down the list, like, everything, no fridges, no automobiles, no weed whackers, because that shit has, my toothbrush has fucking Bluetooth in it, right? Like, why?

00:49:16 Speaker_02
I don't know, but, like, you know, there's, like, so many things that, like, just, like, poof, we're tech reset.

00:49:20 Speaker_06
We were supposed to do this interview like many months ago and then I kept like delaying because I'm like, ah, I don't understand any of this shit. But like it is like a very difficult thing to understand where I feel like with AI, it's like.

00:49:31 Speaker_06
It's not that no, you've just spent time. It's your time.

00:49:34 Speaker_06
I also feel like there's less conflict. It feels like it's the kind of thing where, in an amateur kind of way, you can pick up what's going on in the field. Whereas the thing about this is, like, how?

00:49:46 Speaker_06
How does one learn the layers of the stack? Because for the layers of this stack, the papers are not just online. You can't just look up the tutorial on how the transformer works or whatever.

00:49:55 Speaker_06
It's like yes I mean like many layers of really different like

00:49:58 Speaker_02
18-year-olds who are just cracked at AI, right? Already, right? And there's high school dropouts that get jobs at open AI. This existed in the past, right? Pat Gelsinger, current CEO of Intel.

00:50:10 Speaker_02
Went straight to work, he grew up in the Amish area of Pennsylvania and he went straight to work at Intel, right? Because he's just cracked, right? That is not possible in semiconductors today.

00:50:18 Speaker_02
You can't even get a job at a tool company without at least a freaking master's in chemistry, right? And probably a PhD, right? Of the 75,000 TSMC workers, it's like 50,000 have a PhD or something insane, right? It's like, okay.

00:50:34 Speaker_02
There's a next-level amount of how specialized everything's gotten. Whereas today, you can take, you know, Sholto. When did he start working on AI? Not that long ago.

00:50:45 Speaker_02
Not to say anything bad about Sholto. No, no, no, but he's cracked. He's like Omega cracked at like what he does. What he does, you could pick him up and drop him into another part of the AI stack. First of all, he understands it already.

00:50:56 Speaker_02
And then second of all, he could probably become cracked at that too, right? Whereas that is not the case in semiconductors, right? One, you specialize like crazy. Two, you can't just pick it up. You know, like Sholto, I think, what did he say?

00:51:11 Speaker_06
He, like, just started, like— He was a consultant at McKinsey, and at, like, night, he would, like, read papers about robotics.

00:51:16 Speaker_02
Right.

00:51:16 Speaker_06
And, like, run experiments and whatever.

00:51:18 Speaker_02
Yeah, and then, like, he, like, was, like— Like, people noticed. It was like, who the hell is this guy, and why is he posting this? Like, I thought everyone who knew about this was at Google already, right? It's like, come to Google. Right?

00:51:29 Speaker_02
That can't happen in semiconductors, right? It's just not conducive. It's not possible, right? One, arXiv is a free thing.

00:51:37 Speaker_02
The paper publishing industry is like abhorrent everywhere else and you just like cannot download IEEE papers or like SPIE papers or like other organizations.

00:51:46 Speaker_02
And then two, at least up until late 2022, or really early 2023 in the case of Google, right? Up until the PaLM inference paper, before that, all the good, best stuff was just posted on the internet.

00:52:00 Speaker_02
After that, you know, there's been a little bit of clamping down by the labs, but there are also still all these other companies making innovations in public. And what is state of the art is public. That is not the case in semiconductors.

00:52:11 Speaker_00
Semiconductors have been shut down since the 1960s, 1970s basically. I mean, like, it's kind of crazy how little information has been formally transmitted from one country to another.

00:52:21 Speaker_00
Like, the last time you could really think of this was like 19, maybe the Samsung era, right? So then how do you guys keep up with it? Well, we don't know it. I don't personally. I don't think I know it.

00:52:30 Speaker_00
I don't- I mean, I- If you don't know it, what are you making videos about? It's crazy because like, there's a guy, there's like- I spoke to one guy, he's like a PhD in Etch or something.

00:52:38 Speaker_00
The world- one of the top people in Etch and he's like, man, you really know, like, lithography, right? And I'm just like, I don't feel like I know lithography. But then you've talked to people who know lithography and say, you-

00:52:48 Speaker_00
You've done pretty good work in packaging, right? Nobody knows anything.

00:52:51 Speaker_06
They all have Gell-Mann amnesia.

00:52:53 Speaker_00
They're all in this single well, right? They're digging deep. They're digging deep for what they're getting at. But they don't know the other stuff well enough. And in some ways, I mean, nobody knows the whole stack. Nobody knows the whole stack.

00:53:06 Speaker_02
The stratification of just manufacturing is absurd. The tool people don't even know exactly what Intel and TSMC do in production, and vice versa. They don't know exactly how the tool is optimized like this.

00:53:17 Speaker_02
And it's like, how many different types of tools there are? Dozens. And each of those has an entire tree of all the things that we've built, all the things we've invented, all the things that we continue to iterate upon.

00:53:28 Speaker_02
And then here's the breakthrough innovation that happens every few years in it, too.

00:53:31 Speaker_06
So if that's the case, if nobody knows the whole stack, then how does the industry coordinate to be like, you know, in two years, we want to go to the next process node, which has gate-all-around.

00:53:43 Speaker_06
And for that we need X tools and Y technologies developed by whoever.

00:53:46 Speaker_00
It's really fascinating. It's a fascinating social kind of phenomenon, right? You can feel it. I went to Europe earlier this year. Dylan was like, had allergies. But like, I was like, talking to those other people. And you can just, it's like gossip.

00:54:01 Speaker_00
It's gossip. You start feeling the, you start feeling people coalescing around like a something, right? Early on, we used to have like Sematech

00:54:09 Speaker_00
where people, all these American companies came together and talked and they came and they hammered out, right? But Sematech in reality was dominated by a single company, right? But then, you know, nowadays it's a little more dispersed, right?

00:54:21 Speaker_00
You feel like it's a blue moon arising kind of thing. Like they are going towards something, they know it, and then suddenly the whole industry is like, this is it, let's do it.

00:54:33 Speaker_02
I think it's like God came and proclaimed it. We will shrink density 2x every two years. Gordon Moore, he made an observation and then like it didn't go nowhere.

00:54:42 Speaker_02
It went way further than he ever expected because it was like, oh, there's line of sight to get to here and here. And like, and he predicted like seven, eight years out, like multiple orders of magnitude of increases in transistors. And it came true.

00:54:54 Speaker_02
But then by then the entire industry was like, This is obviously true. This is the word of God. And every engineer in the entire industry, tens of millions of people, like literally, this is what they were driven to do.

00:55:05 Speaker_02
Now, not every single engineer believed it, but people were like, yes, to hit the next shrink, we must do this, this, this, right? And these are the optimizations we make.

00:55:12 Speaker_02
And then you have this stratification at every single layer, and abstraction layers, every single layer through the entire stack, to where people... It's an unholy concoction.

00:55:22 Speaker_02
I mean, you keep saying this word, but no one knows what's going on, because there's an abstraction layer between every single layer. And on your layer, you know what's going on for the people right below you and right above you.

00:55:33 Speaker_02
And then beyond that, it's like, okay, I can try to understand, but not really.

00:55:38 Speaker_06
But I guess that doesn't answer the question of like, when IRDS or whatever, I don't know, was it 10, 20 years ago? I watched your video about it where they're like, we are, EUV is like, we're gonna do EUV instead of the other thing.

00:55:50 Speaker_06
And this is the path forward. How do they do that if they don't have the whole sort of picture of like different constraints, different trade-offs, different blah, blah, blah.

00:55:59 Speaker_00
They kinda, they argue it out. They get together and they talk and they argue. And basically at some point, a guy somewhere says, I think we can move forward with this.

00:56:09 Speaker_02
Semiconductors are so siloed and the data and knowledge within each layer is A, not documented online at all.

00:56:16 Speaker_00
Right, documentation.

00:56:18 Speaker_02
Because it's all siloed within companies.

00:56:20 Speaker_02
B, there's a lot of human element to it, because a lot of the knowledge, as John was saying, is master-apprentice type of knowledge, or "I've been doing this for 30 years," and there's an amazing amount of intuition on what to do just when you see something,

00:56:36 Speaker_02
to where like AI can't just learn semiconductors like that, but at the same time there's a massive amount of talent shortage and ability to move forward on things, right? So like the technology used on like

00:56:51 Speaker_02
most of the equipment and semiconductor tools in fabs run on Windows XP, right? Each tool has a Windows XP server on it. Or all the chip design tools run on CentOS version 6, right? And that's old as hell, right?

00:57:07 Speaker_02
So there's so many areas where, why is this so far behind? At the same time, it's so hyper-optimized. The tech stack is so broken in that sense. They're afraid to touch it. They're afraid to touch it. Yeah, because it's an unholy amalgamation.

00:57:20 Speaker_00
It's unholy.

00:57:21 Speaker_02
It should not work. This thing should not work. It's literally a miracle.

00:57:26 Speaker_02
So you have all the abstraction layers, but then it's like, one is there's a lot of breakthrough innovation that can happen now stretching across abstraction layers. But two is because there's so much inherent knowledge in each individual one,

00:57:38 Speaker_02
What if I can just experiment and test at 1000x velocity or 100,000x velocity? And so some examples of where this is already shown true is some of NVIDIA's AI layout tools, right? And Google as well.

00:57:52 Speaker_02
Laying out the circuits within a small blob of the chip with AI. Some of these RL design things. There's a lot of various simulation things. But is that design or is that manufacturing? It's all design, right? Most of it's design.

00:58:05 Speaker_02
Manufacturing has not really seen much of this yet, although it's starting to come in. Inverse lithography, maybe. Yeah, ILT, yeah, maybe. I don't know if that's AI. That's not AI.

00:58:14 Speaker_02
Anyways, there's tremendous opportunity to bring breakthrough innovation simply because there is so many layers where things are unoptimized, right? So you see all these, oh, single digit, low,

00:58:30 Speaker_02
double-digit advantages just from RL techniques, AlphaGo-type stuff, or not AlphaGo itself, but five, six, seven, eight-year-old RL techniques being brought in. But generative AI being brought in could really revolutionize the industry, you know, although there's a massive data problem.

00:58:48 Speaker_06
Can you give the possibilities here in numbers, in terms of maybe flops per dollar or whatever the relevant thing is? How much do you expect future improvement to come from

00:58:59 Speaker_06
process node improvements, and how much from just how the hardware is designed, because of AI? We're talking specifically about GPUs. Yeah, if you had to disaggregate future improvements...

00:59:14 Speaker_02
You know, first, it's important to state that semiconductor manufacturing and design is the largest search space of any problem humans work on, because it is the most complicated thing that humans do.

00:59:25 Speaker_02
And so, you know, when you think about it, right, there's 1E10, 1E11, right, 100 billion transistors on leading edge chips, right? Blackwell has 220 billion transistors or something like that.

00:59:40 Speaker_02
So what is, and those are just on-off switches, and then think about every permutation of putting those together, contact ground, et cetera, drain source, blah, blah, blah, with wires, right? There's 15 metal layers, right?

00:59:51 Speaker_02
Connecting every single transistor in every possible arrangement. This is a search space that is literally almost infinite, right? You could like, the search space is much larger than any other search space that humans know of.
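
To give a feel for that claim, here is a toy lower-bound calculation. It is purely illustrative (it ignores essentially all real physical-design constraints), and the cell count is an assumption far smaller than a real chip:

```python
import math

# Toy lower bound on the chip-layout search space: just ordering N distinct
# cells gives N! arrangements, before routing across ~15 metal layers is even
# considered. Purely illustrative; real placement is heavily constrained, but
# the combinatorics still dwarf e.g. Go's ~1e170 legal positions.
n_cells = 1_000_000   # a tiny fraction of the ~1e11 transistors on a leading-edge chip
log10_arrangements = math.lgamma(n_cells + 1) / math.log(10)   # log10(n_cells!)
print(f"placing just {n_cells:,} cells: ~10^{log10_arrangements:,.0f} orderings")
```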

01:00:00 Speaker_06
And what is the nature of the search? Like, what are you trying to optimize over?

01:00:05 Speaker_02
useful compute, right? The goal is to optimize intelligence per picojoule, right? And intelligence is some nebulous property of whatever the model architecture is. Yeah, yeah. And then a picojoule is a unit of energy, right? How do you optimize that?

01:00:20 Speaker_02
So there's humongous innovation possible in architecture, right? Because the vast majority of the power on an H100 does not go to compute. And there are more efficient compute designs, you know, ALUs, arithmetic logic units, right?

01:00:39 Speaker_02
But even then, the vast majority of the power doesn't go there, right? The vast majority of the power goes to moving data around, right? And then when you look at what is the movement of data, it's either networking or memory, you know, you have

01:00:52 Speaker_02
You have a humongous amount of movement relative to compute and a humongous amount of power consumption relative to compute. And so how can you minimize that data movement and then maximize the compute? There are 100x gains from architecture.

01:01:08 Speaker_02
Even if we literally stopped shrinking, I think we could have 100x gains from architectural advancements. Over what time period? The question is how much can we advance the architecture, right?

01:01:17 Speaker_02
The challenge, the other challenge is like the number of people designing chips has not necessarily grown in a long time, right? Yeah, like company to company it shifts, but like within like the semiconductor industry in the U.S. and the U.S.

01:01:30 Speaker_02
makes, you know, designs the vast majority of leading edge chips, the number of people designing chips has not grown much. What has happened is the output per individual has soared because of EDA, electronic design automation tooling.

01:01:44 Speaker_02
Now, this is all still classical tooling. There's just a little bit of inkling of AI in there yet. What happens when we bring this in is the question, and how you can solve this search space somehow.

01:01:55 Speaker_02
with humans and AI working together to optimize this, so that it's not the case that most of the power is data movement while the logic, the compute, is actually very small.

01:02:04 Speaker_02
On the flip side, first of all, compute can get like 100x more efficient just with design changes. And then you could minimize that data movement massively, right?

01:02:14 Speaker_02
So you can get a humongous gain in efficiency just from architecture itself. And then process node helps you innovate that there, right? And power delivery helps you innovate that. System design, chip-to-chip networking helps you innovate that, right?

01:02:26 Speaker_02
Memory technologies, there's so much innovation there. And there's so many different vectors of innovation that people are pursuing simultaneously to where, NVIDIA gen to gen to gen will do more than 2x performance per dollar.

01:02:40 Speaker_02
I think that's very clear. And then like hyperscalers are probably going to try and shoot above that, but we'll see if they can execute.

01:02:46 Speaker_06
There's like two narratives you can tell here of how this happens. One is that these AI companies who are training the foundation models

01:02:55 Speaker_06
who understand how much the marginal increase in compute versus memory is worth to them, and what trade-offs they want between different kinds of memory. They understand this, and so the accelerators they build can make these trade-offs in a way that's most optimal, and they can also design the architecture of the model itself in a way that reflects the hardware trade-offs.

01:03:21 Speaker_06
Another is Nvidia because it has like, I don't know how this works.

01:03:26 Speaker_06
Presumably they have some sort of like know how like they're accumulating all this like, knowledge about how to better design this architecture and like also better search tools for so on.

01:03:36 Speaker_06
Who has basically like, better moat here in terms of will Nvidia keep getting better at design getting this 100x improvement?

01:03:44 Speaker_06
Or will it be like OpenAI and Microsoft and Amazon and Anthropic who are designing their accelerators will keep getting better at like designing the accelerator?

01:03:52 Speaker_02
I think that there's a few vectors to go here, right? One is, you mentioned, and I think it's important to note, is that hardware has a huge influence on the model architecture that's optimal.

01:04:02 Speaker_02
And so it's not a one-way street where a better chip equals a better model. You know, the optimal model for Google to run on TPUs, given a certain amount of dollars, a certain amount of compute, is different architecturally than what it is for OpenAI with NVIDIA stuff.

01:04:16 Speaker_02
Right, it is like absolutely different.

01:04:18 Speaker_02
And then even down to the networking decisions that different companies make and the data center design decisions people make. If you were to say, for X amount of compute, TPU versus GPU, what is compute-optimal, the best thing, you will diverge in what the architecture is.

01:04:33 Speaker_02
I think that's important to know, right?

01:04:34 Speaker_06
We can ask about that real quick. Earlier in the show, we were talking about how China has the H20s or B20s. And there, there's much less compute per memory bandwidth and per amount of memory, right?

01:04:50 Speaker_06
Does that mean that Chinese models will actually have like very different architecture and characteristics than American models in the future.

01:04:55 Speaker_02
So you can take this to a very large leap of a conclusion, which is like, you know, neuromorphic computing or whatever is the optimal path, and that looks very different than what a transformer does, right?

01:05:06 Speaker_02
Or you could take it to a simpler thing, which is the level of sparsity, like coarse-grained sparsity, experts, and all this sort of stuff. The arrangement of what exactly the attention mechanism is, because there are a lot of tweaks.

01:05:19 Speaker_02
It's not just pure transformer attention, right? Or like, hey, how wide versus tall the model is, right? That's very important, like d_model versus number of layers, right?

01:05:29 Speaker_02
These are all things that would be different, and I know they're different between, say, a Google and an open AI, and what is optimal.

01:05:37 Speaker_02
But it really starts to get like, hey, if you were limited on a number of different things. Like, China invests humongously in compute-in-memory,

01:05:46 Speaker_02
you know, which is basically where the memory cell is directly coupled to, or is, the compute cell, right?

01:05:54 Speaker_02
So these are things that China's investing in hugely, and you go to conferences and it's like, oh, there's 20 papers from Chinese companies slash universities about compute-in-memory.

01:06:02 Speaker_02
Or hey, because the flop limitation is here, maybe NVIDIA pumps up the on-chip memory and changes the architecture, because they still stand to benefit tens of billions of dollars by selling chips to China.

01:06:14 Speaker_02
Today, it's just neutered versions of the American chips, of the chips that go to the US, but it'll start to diverge more and more architecturally, because they'd be stupid not to make chips for China. And Huawei, obviously, again, has their constraints, right?

01:06:27 Speaker_02
Where are they limited? On memory. But they have a lot of networking capabilities, and they could move to certain optical networking technologies directly onto the chip much sooner than we could, right?

01:06:38 Speaker_02
Because that is what's optimal for them within their search space of solutions, right? Because this whole area is blocked off.

01:06:44 Speaker_00
It's kind of really interesting to see, to think about like the development of how Chinese AI models will differ from American AI models because of these changes or these constraints.

01:06:54 Speaker_02
And it applies to use cases, it applies to data, right? Like American models are very important about like, let me learn from you, right? Let me be able to use you directly as a random consumer.

01:07:05 Speaker_02
That is not the case for Chinese model, I assume, because there's probably very different use cases for them. China crushes the West at video and image recognition.

01:07:15 Speaker_02
At ICML, like Albert Gu of Cartesia, like state space models, every single Chinese person was like, can I take a selfie with you? Man was harassed.

01:07:23 Speaker_02
In the US, you see Albert and he's like, it's awesome, he invented state space models, but it's not like state space models are like,

01:07:28 Speaker_02
like here, but that's because state space models potentially have like a huge advantage in like video and image and audio, which is like stuff that China does more of, and that is further along and has better capabilities in, right?

01:07:40 Speaker_00
So it's like- Because of all the surveillance cameras there. Sorry? Because of all the surveillance cameras there.

01:07:44 Speaker_02
Yeah, that's the quiet part out loud, right? But there's already divergence in capabilities there, right? If you look at image recognition, China destroys American companies on that, right? Because the surveillance.

01:07:57 Speaker_02
You have this divergence in tech tree, and people can start to design different architectures within the constraints you're given. And everyone has constraints, but the constraints different companies have are even different, right?

01:08:10 Speaker_02
Google's constraints have shown them that they built, they built a genuinely different architecture.

01:08:14 Speaker_02
But now if you look at Blackwell, and then what's said about TPU v6, right, I'm not gonna say they're converging, but they are getting a little bit closer in terms of, how big is the

01:08:27 Speaker_02
matmul unit size, and some of the topology and world size of the scale-up versus scale-out network. There is some slight convergence. I'm not saying they're similar yet, but they're starting to get closer. But then there are different architectures and paths people could go down, so you see stuff from all these startups that are trying to go down different tech trees, because maybe that'll work.

01:08:46 Speaker_02
But there's a self-fulfilling prophecy here too, right?

01:08:48 Speaker_02
All the research is in transformers that are very high arithmetic intensity because the hardware we have is very high arithmetic intensity and transformers run really well on GPUs and TPUs and you sort of have a self-fulfilling prophecy if all of a sudden you have an architecture which is...

01:09:02 Speaker_02
Theoretically, it's way better, but you can get only half of the usable flops out of your chip. It's worthless, because even if it's 30% compute efficiency win, it's half as fast on the chip, right?

01:09:15 Speaker_02
So there's all sorts of trade-offs and self-fulfilling prophecies of what path do people go down.

01:09:21 Speaker_06
John and Dylan have talked a lot in this episode about how stupefyingly complex the global semiconductor supply chain is. The only thing in the world that approaches this level of complexity is the Byzantine web of global payments.

01:09:35 Speaker_06
You're stitching together legacy tech stacks and regulations that differ in every jurisdiction. In Japan, for example, a lot of people pay for online purchases by taking a code to their corner store and punching it into a kiosk.

01:09:50 Speaker_06
Stripe abstracts all this complexity away from businesses. You can offer customers whatever payment experience they're most likely to use, wherever they are in the world. And Stripe is how I invoice advertisers for this very podcast.

01:10:03 Speaker_06
I doubt that they're punching in codes at a kiosk in Japan, but if they are, Stripe will handle it. Anyways, you can head to stripe.com to learn more.

01:10:11 Speaker_06
If you were made head of compute of a new AI lab. If, like, SSI, Ilya's new lab, came to you and they're like, Dylan, we give you $1 billion, you're our head of compute. Help us get on the map.

01:10:25 Speaker_06
We're gonna compete with the frontier labs. What is your first step?

01:10:28 Speaker_02
Okay, so the constraints are you're a U.S. slash Israeli firm, because that's what SSI is, right? And your researchers are in the U.S. and Israel.

01:10:38 Speaker_02
You probably can't build data centers in Israel, because power is expensive as hell, and it's probably risky maybe, I don't know. So still in the U.S. most likely. Most of the researchers are here, or a lot of them are in the U.S., right?

01:10:50 Speaker_02
Like Palo Alto or whatever. So I guess you need a significant chunk of compute. Obviously, the whole pitch is you're going to make some research breakthrough that's like compute efficiency win, data efficiency win, whatever it is.

01:11:03 Speaker_02
You're going to make some breakthrough, but you need compute to get there, right? Because your GPUs per researcher is your research velocity, right? Obviously, data centers are very tapped out.

01:11:15 Speaker_02
Not in terms of tapped out, but every new data center that's coming up, most of them have been sold, which has led people like Elon to go through this insane thing in Memphis. I'm just trying to square the circle.

01:11:26 Speaker_06
On that question, I kid you not, in my group chat, There have been two separate people who have been like, I have a cluster of H100s and I have a long lease on them, but I'm trying to sell them off. Is it like a buyer's market right now?

01:11:43 Speaker_06
Because it does seem like people are trying to get rid of them.

01:11:44 Speaker_02
So I think for the Ilya question, a cluster of 256 GPUs or even 4K GPUs is just kind of cope, right? It's not enough, right?

01:11:54 Speaker_02
Yes, you're going to make compute efficiency wins, but with a billion dollars, you probably just want the biggest cluster in one individual spot. And so, like, small amounts of GPUs, probably not, like, you know, possible to use, right?

01:12:06 Speaker_02
Like, for them, right? Like, and that's what most of the sales are, right? Like, you go and look at, like, GPU list or, like, Vast or, like, Foundry, like, or a hundred different GPU resellers, the cluster sizes are small.

01:12:20 Speaker_02
Is it a buyer's market? Yeah. Last year you would buy H100s for like $4 or $3 an hour, right?

01:12:27 Speaker_02
For shorter-term or mid-term deals right now, if you want a six-month deal, you can get like $2.15 an hour or less, right? And the natural cost, if I have a data center, right,

01:12:37 Speaker_02
And I'm paying like standard data center pricing,

01:12:39 Speaker_02
to purchase the GPUs and deploy them is like $1.40, and then you add on the debt, because I probably took debt to buy the GPUs, or cost equity, cost of capital, gets up to like $1.70 or something, right?

01:12:51 Speaker_02
And so you see the good deals, right? Like Microsoft renting from CoreWeave at like $1.90 to $2, right? So people are getting closer and closer to that, but there's still a lot of profit, right?

01:13:01 Speaker_02
Because the natural rate, even after debt and all this is like $1.70. So like, there's still a lot of profit when people are selling in the low twos.
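
A rough sketch of those per-GPU-hour economics, using the dollar figures quoted above; the CoreWeave rate is taken as the midpoint of the quoted $1.90-$2 range, and none of this is a published price sheet:

```python
# Per-GPU-hour rental economics using the figures quoted in the conversation.
base_cost_per_hr = 1.40      # all-in cost at standard data center and power rates (quoted)
with_capital_per_hr = 1.70   # after debt / cost of capital (quoted)
rental_rates = {
    "2023 spot (high)": 4.00,
    "2023 spot (low)": 3.00,
    "6-month deal now": 2.15,
    "Microsoft-CoreWeave (quoted $1.90-$2)": 1.95,
}

for name, rate in rental_rates.items():
    margin = rate - with_capital_per_hr
    print(f"{name:>38}: ${rate:.2f}/hr -> ${margin:+.2f}/hr over all-in cost "
          f"({margin / with_capital_per_hr:+.0%})")
```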

01:13:07 Speaker_02
like GPU companies, people are deploying them, but it is a buyer's market in a sense that it's gotten a lot cheaper, but cost of compute is going to continue to tank, right?

01:13:15 Speaker_02
Because it's like sort of like, I don't remember the exact name of the law, but it's effectively Moore's law, right? Every two years, the cost of transistors halved, and yet the industry grew, right?

01:13:27 Speaker_02
every six months or three months, the cost of intelligence halves. You know, like OpenAI and GPT-4, what, February 2023, right? $120 per million tokens or something like that was roughly the cost, and now it's like 10, right?

01:13:43 Speaker_02
It's like the cost of intelligence is tanking partially because of compute, partially because the model's compute efficiency wins, right?

01:13:49 Speaker_02
I think that's a trend we'll see, and that's gonna drive adoption as you scale up and make it cheaper, and scale up and make it cheaper. Right, right, right. Anyways, what you were saying, if you're head of compute of SSI.

01:13:57 Speaker_02
Okay, head of compute of SSI. That was very intense. There's obviously no free data center lunch, right? You can just take it from the data we have that there's no free lunch, per se.

01:14:11 Speaker_02
Immediately today, you need the compute for a large cluster size, or even six months out, right? There's some, but not a huge amount, because of what X did, right? xAI was like,

01:14:21 Speaker_02
Oh shit, we're going to go buy a Memphis factory, put a bunch of generators outside, mobile generators usually reserved for natural disasters, a Tesla battery pack, draw as much power as we can from the grid, tap the natural gas line that's going to the natural gas plant two miles away, the gigawatt natural gas plant, just send it and get a cluster built as fast as possible.

01:14:44 Speaker_02
Now you're running 100K GPUs, right?

01:14:46 Speaker_05
I know.

01:14:46 Speaker_02
And that cost about $5 billion, right? $4 billion, right? Not $1 billion. So the scale that SSI has is much smaller, by the way, right? So their size of cluster will be maybe one-third or one-fourth of the size, right?

01:15:02 Speaker_02
So now you're talking about 25K to 32K cluster, right? You still don't have that, right? No one is willing to rent you a 32k cluster today, no matter how much money you have, right? Even if you had more than a billion dollars.
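As a sanity check on those cluster sizes, here is the arithmetic implied by the figures just quoted ($4-5B for a roughly 100K-GPU buildout); the per-GPU number below is just that quote divided through, not a vendor price:

```python
# Implied cluster size for a $1B budget, scaled from the quoted $4-5B cost
# of a ~100K-GPU buildout. The per-GPU capex is implied, not a list price.
buildout_cost_range = (4e9, 5e9)   # quoted above for ~100K GPUs all-in
gpus_in_buildout = 100_000
budget = 1e9

for cost in buildout_cost_range:
    per_gpu = cost / gpus_in_buildout
    print(f"at ${cost / 1e9:.0f}B per 100K GPUs (~${per_gpu:,.0f}/GPU all-in), "
          f"$1B buys ~{budget / per_gpu:,.0f} GPUs")
# Roughly 20K-25K GPUs, the same ballpark as the 25K-32K figure above.
```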

01:15:13 Speaker_02
So now it makes the most sense to build your own cluster, instead of renting it, or to get a very close relationship like OpenAI-Microsoft with CoreWeave, or OpenAI-Microsoft with Oracle slash Crusoe. The next step is Bitcoin sites, right?

01:15:29 Speaker_02
So OpenAI has a data center in Texas, right? Or it's going to be their data center. It's like they've kind of contracted all that.

01:15:37 Speaker_02
CoreWeave, there is a 300 megawatt natural gas plant on site, powering these crypto mining data centers from the company called Core Scientific. And so they're just converting that.

01:15:50 Speaker_02
There's a lot of conversion, but the power's already there, the power infrastructure's already there. So it's really about converting it, getting it ready to be water-cooled, all that sort of stuff, and converting it to a 100,000 GB200 cluster.

01:15:59 Speaker_02
And they have a number of those going up across the country. But that's also like tapped out to some extent because NVIDIA is doing the same thing in Plano, Texas for a 32,000 GPU cluster that they're building. Is NVIDIA doing that?

01:16:11 Speaker_02
Well, they're going through partners, right? Because this is the other interesting thing is the big tech companies can't do crazy shit like Elon did. Why? ESG. Oh, interesting.

01:16:21 Speaker_06
They can't just do crazy shit like, because this- Actually, do you expect Microsoft and Google and whoever to like drop their net zero commitments as the scaling picture intensifies?

01:16:33 Speaker_02
So, like, what XAI is doing, right, is like, it's not that polluting, you know, on the scheme of things, but it's like, you have 14 mobile generators and you're just burning natural gas on site on these, like, mobile generators that sit on trucks, right?

01:16:47 Speaker_02
And then you have power coming directly from two miles down the road. There's no unequivocal way to say any of the power is clean, because two miles down the road is a natural gas plant as well, right? There's no way to say this is green.

01:16:57 Speaker_02
You go to the CoreWeave thing, it's a natural gas plant. It's literally on site, from Core Scientific and all that, right? And then the data centers around it are horrendously inefficient, right?

01:17:06 Speaker_02
There's this metric called PUE, which is basically how much power is brought in versus how much gets delivered to the chips, right? And like the hyperscalers, because they're so efficient or whatever, right? Their PUE is like 1.1 or lower, right?

01:17:19 Speaker_02
I.e., if you bring a gigawatt in, 900 megawatts or more gets delivered to chips, right? Not wasted on cooling and all these other things. This Core Scientific one is going to be like 1.5, 1.6, i.e.

01:17:32 Speaker_02
even though I have 300 megawatts of generation on site, I only deliver like 180, 200 megawatts to the chips.
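
For reference, PUE is just total facility power divided by power delivered to the IT load, so the numbers above work out as follows (a minimal sketch using only the figures quoted):

```python
# PUE = total facility power / power delivered to the chips (IT load).
def it_power(total_mw: float, pue: float) -> float:
    return total_mw / pue

print(f"hyperscaler: 1000 MW at PUE 1.1 -> {it_power(1000, 1.1):.0f} MW to chips")
print(f"retrofitted crypto site: 300 MW at PUE 1.5 -> {it_power(300, 1.5):.0f} MW to chips")
print(f"retrofitted crypto site: 300 MW at PUE 1.6 -> {it_power(300, 1.6):.0f} MW to chips")
```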

01:17:39 Speaker_06
Given how fast solar is getting cheaper, and also the fact that the reason solar is difficult elsewhere is that you've got to power the homes at night.

01:17:49 Speaker_06
Here, I guess it's like theoretically possible to like, figure out, you know, only like, run the clusters in the in the day or something.

01:17:56 Speaker_02
Absolutely not. That is that that's not possible.

01:17:59 Speaker_06
Because because it's so expensive to have these GPUs.

01:18:01 Speaker_02
Yeah, so when you look at the power cost of a large cluster, it's trivial to some extent, right? The meme that, oh, you can't build a data center in Europe or East Asia because the power is expensive, that's not really relevant.

01:18:16 Speaker_02
Or that power is so cheap in China and the US, and that's why those are the only places you can build data centers. That's not really the real reason. The ability to generate new power for these activities is why it's really difficult,

01:18:27 Speaker_02
and the economic regulation around that.

01:18:29 Speaker_02
But the real thing is, like, if you look at the cost of ownership of an H100, let's just say you gave me, you know, a billion dollars, and I already have a data center, I already have all this stuff, I'm paying regular rates for the data centers, I'm not paying through the nose or anything, paying regular rates for power, not paying through the nose, power is sub-15% of the cost.

01:18:47 Speaker_02
And it's sub 10% of the cost, actually, right? The biggest, like 75 to 80% of the cost is just the servers, right? And this is on a multi-year, including debt financing, including cost of operation, all that, right?

01:18:59 Speaker_02
Like when you do a TCO, total cost of ownership, it's like 80% is the GPUs, 10% is the data center, 10% is the power, rough numbers, right? So it's kind of irrelevant how expensive the power is, right?
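
A minimal sketch of that total-cost-of-ownership split, using the rough 80/10/10 numbers just given; the absolute dollar figure per GPU is an illustrative assumption, not a quote:

```python
# Rough multi-year TCO split for a GPU deployment using the 80/10/10
# breakdown quoted above. The $ figure per GPU is an illustrative assumption.
tco_per_gpu = 50_000
split = {"servers/GPUs": 0.80, "data center": 0.10, "power": 0.10}

for item, share in split.items():
    print(f"{item:>12}: {share:.0%} -> ${tco_per_gpu * share:,.0f}")

# Doubling the power price only moves total TCO by ~10%, which is why expensive
# electricity matters far less than whether you can get the power at all.
tco_2x_power = tco_per_gpu * (split["servers/GPUs"] + split["data center"] + 2 * split["power"])
print(f"TCO with 2x power price: ${tco_2x_power:,.0f} ({tco_2x_power / tco_per_gpu - 1:+.0%})")
```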

01:19:13 Speaker_02
You'd rather do what Taiwan does, right? What do they do when there's a drought? They force people to not shower.

01:19:21 Speaker_00
When there was a power shortage in Taiwan, they basically rerouted power from the residential areas.

01:19:27 Speaker_02
And this will happen in a capitalistic society as well, most likely, because, like, fuck you, you're not gonna pay X dollars per kilowatt hour, but I will, because to me the marginal cost of power is irrelevant.

01:19:36 Speaker_02
Really, it's all about the GPU cost, and the ability to get the power. I don't wanna turn it off eight hours a day.

01:19:42 Speaker_06
Maybe let's discuss what would happen if the training regime changes and if it doesn't change.

01:19:46 Speaker_06
So like, you could imagine that the training regime becomes much more parallelizable, where it's like, about like coming up with some sort of like search or synthetic, like most of the compute for training is used to come up with synthetic data, or do some kind of search.

01:19:59 Speaker_06
And that can happen across a wide area. In that world, how fast could we scale? Like, we're just like, let's go through the numbers on like year after year. And then we'll suppose it actually has to be

01:20:12 Speaker_06
you would know more than me, but like, suppose it has to be the current regime and like, just explain what that would mean in terms of like, how distributed that would have to be.

01:20:19 Speaker_06
And then how plausible it is to get clusters of certain sizes over the next few years.

01:20:25 Speaker_02
I think it is not too difficult for Ilya's company to get a cluster of like 32K, and of Blackwell. Okay, fair enough.

01:20:36 Speaker_06
Like 2025, 2026, 2027.

01:20:37 Speaker_02
2025, 2026. Before I talk about the US, I think it's important to note that there's a gigawatt-plus of data center capacity in Malaysia next year.

01:20:47 Speaker_02
Now that's mostly ByteDance, but, you know, power-wise, there's the humongous damming of the Nile in Ethiopia, and the country uses like one third of the power that that dam generates.

01:20:57 Speaker_02
So there's like a ton of power there.

01:20:59 Speaker_06
How much power does that dam generate?

01:21:00 Speaker_02
It's like over a gigawatt. And the country consumes like 400 megawatts or something trivial. And who is, like, are people bidding for that power? I think people just don't think they can build a data center in fucking Ethiopia. Why not?

01:21:12 Speaker_02
The dam isn't filled yet. Is it? No, I mean, the dam could generate that power, they just don't, okay? There's a little bit more equipment required, but that's not too hard. Why don't they?

01:21:24 Speaker_02
I think there's true security risks, right? If you're China or if you're the US lab, to build a fucking data center with all your IP in fucking Ethiopia. You want AGI to be in Ethiopia? You want it to be that accessible?

01:21:38 Speaker_02
People you can't even monitor being the technicians in the fucking data center or whatever, right? Or powering the data center, all these things. There's so many...

01:21:47 Speaker_02
you know, things you could do to like, you could just destroy every GPU in a data center if you want, if you just like fuck with the grid, right? Like pretty, pretty like easily, I think. People talk a lot about it in the Middle East.

01:21:56 Speaker_02
There's a 100K GB 200 cluster going up in the Middle East, right? And the U.S. like, there's like clearly like stuff the U.S. is doing, right? Like the, you know, G42 is the UAE data center company, cloud company.

01:22:09 Speaker_02
Their CEO is a Chinese national or not a Chinese, he's Chinese.

01:22:13 Speaker_02
basically Chinese allegiance. But I think OpenAI wanted to use a data center from them, but instead the US forced Microsoft, I feel like this is what happened, forced Microsoft to do a deal with them, so that G42 has a 100K GPU cluster, but Microsoft is administering and operating it for security reasons, right?

01:22:32 Speaker_02
And there's Omniva in Kuwait, the Kuwait super rich guy spending five plus billion dollars on data centers, right? You just go down the list, all these countries, Malaysia has,

01:22:43 Speaker_02
know, 10 plus billion dollars of like data center, you know, AI data center build outs over the next couple years, right? Like, and, you know, go to every country, it's like this, this stuff is happening.

01:22:52 Speaker_02
But on the grand scheme of things, the vast majority of the compute is being built in the US, and then China, and then like Malaysia, Middle East, and like rest of the world.

01:23:00 Speaker_02
And, going back to your point, you have synthetic data, you have the search stuff, you have all these post-training techniques, you have all these ways to soak up flops, or you just figure out how to train across multiple data centers, which I think they have, at least Microsoft and OpenAI, OpenAI has figured it out.

01:23:21 Speaker_06
What makes you think they figured it out?

01:23:23 Speaker_02
Their actions. Microsoft has signed deals north of $10 billion with fiber companies to connect their data centers together. There are some permits already filed to show people are digging between certain data centers.

01:23:37 Speaker_02
So we think with fairly high accuracy that there are five data centers, massive ones, not just five data centers, sorry, five regions that they're connecting together, which comprise many data centers, right?

01:23:48 Speaker_02
What will be the total power usage of that? Depends on the time, but easily north of a gigawatt, right?

01:23:53 Speaker_06
Which is like close to a million GPUs.

01:23:56 Speaker_02
Well, each GPU is getting higher power consumption too, right? The rule of thumb is an H100 GPU is like 700 watts, but then total power per GPU all-in is like 1,200, 1,300, 1,400 watts. But for next-generation NVIDIA GPUs, it's 1,200 watts for the GPU, and then it actually ends up being like 2,000 watts all-in, right? So there's a little bit of scaling of power per GPU.
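
Using those all-in wattage rules of thumb, a gigawatt of delivered IT power maps to roughly the following chip counts (a quick sketch; the watt figures are the ones just quoted):

```python
# GPUs per gigawatt of delivered IT power, using the all-in watts quoted above.
gigawatt = 1_000_000_000  # watts

all_in_watts = {
    "H100 (~1,300 W all-in)": 1_300,
    "next-gen (~2,000 W all-in)": 2_000,
}
for name, watts in all_in_watts.items():
    print(f"{name}: ~{gigawatt // watts:,} GPUs per gigawatt")
```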

01:24:19 Speaker_02
But you already have 100K clusters, right? OpenAI in Arizona, xAI in Memphis, and many others already building 100K clusters of H100s.

01:24:29 Speaker_02
You have multiple, at least five, I believe, 100K GB200 clusters being built by Microsoft slash OpenAI slash their partners for them. And then potentially even more, 500K GB200s, right? A gigawatt, right? And that's online next year, right?

01:24:47 Speaker_02
And the year after that, if you aggregate all the data center sites and how much power they have, and you only look at net adds since 2022 instead of the total capacity at each data center, you're still north of multiple gigawatts, right?

01:24:59 Speaker_02
And so they're spending 10 plus billion dollars on these fiber deals with a few fiber companies, Lumen, Zayo, a couple other companies. And then they've got all these data centers that they're clearly building 100K clusters on, right?

01:25:11 Speaker_02
Like the old crypto mining site with CoreWeave in Texas, or this Oracle-Crusoe one in Texas, and then in Wisconsin and Arizona and a couple other places. There's a lot of data centers being built by providers, right, QTS and

01:25:26 Speaker_02
Cooper, and, you know, you go down the list. There are so many different providers, and self-built data centers, where I'm building it myself.

01:25:31 Speaker_06
So let's just give the number. Okay, 2025, Elon's cluster is gonna be the biggest, like, it doesn't matter who it is. So then there's a definition game, right?

01:25:44 Speaker_02
Like, Elon claims he has the largest cluster at 100K GPUs because they're all fully connected. And whoever it is, I just want to know how many. I don't know if it's better to denominate in H100s this year.

01:25:57 Speaker_06
For the biggest cluster.

01:25:58 Speaker_02
For the biggest cluster. Next year. Next year, 300 to 500,000, depending on whether it's one site or many, right? 300 to 700,000 I think is the upper bound of that.

01:26:07 Speaker_02
But anyways, it's about when they turn it on, when they can connect them, when the fiber's connected together. Anyways, 300 to 500,000, let's say, but those GPUs are 2 to 3x faster, right? Versus the 100K cluster.

01:26:21 Speaker_02
So on an H100 equivalent basis, you're at a million chips next year. In one cluster? By the end of the year, yes. No, no, no. So one cluster is like the wishy-washy definition, right? Multi-site, right? Can you do multi-site?

01:26:34 Speaker_02
What's the efficiency loss when you go multi-site? Is it possible at all? I truly believe so. What's the efficiency loss is the question, right? Okay, would it be like 20% loss, 50% loss? Great question.

01:26:46 Speaker_02
This is where like, you know, this is where you need like the secrets, right? Of like, and Anthropic's got similar plans with Amazon and you go down the list, right? Like people- And then the year after that. The year after that is where- This is 2026.

01:26:58 Speaker_02
2026, there is a single gigawatt site, and that's just part of the multiple sites, right? For Microsoft. The Microsoft 5 gigawatt thing happens in 2026. One gigawatt one site in 2026, but then you have a number of others.

01:27:13 Speaker_02
You have five different locations, some with multiple sites, some with single site. You're easily north of two, three gigawatts. And then the question is, can you start using the old chips with the new chips?

01:27:24 Speaker_02
And the scaling, I think, is you're going to continue to see flop scaling much faster than people expect, I think, as long as the money pours in, right?

01:27:32 Speaker_02
That's the other thing, is there's no fucking way you can pay for the scale of clusters that are being planned to be built next year for open AI unless they raise $50 to $100 billion.

01:27:43 Speaker_02
Which I think they will raise that, like end of this year, early next year. 50 to 100 billion? Yes. Are you kidding me? No. Oh my God. This is like, you know, like Sam has a superpower, no? Like, it's like, it's like recruiting and like raising money.

01:27:56 Speaker_02
That's what he's a god at. Will chips themselves be a bottleneck to the scaling? Not in the near term. It's more, getting back to the concentration versus decentralization point, the largest cluster is 100,000 GPUs.

01:28:09 Speaker_02
NVIDIA's manufactured close to 6 million hoppers, right? Across last year and this year, right? So like, what? That's fucking tiny, right?

01:28:16 Speaker_06
So then why is Sam talking about a 7 trillion to build foundries and whatever?

01:28:20 Speaker_02
Well, this is, you know, draw the line, right? Log-log lines. Number goes up, right? You know, if you do that, right?

01:28:27 Speaker_02
You're going from 100K to 300 to 500K, where the equivalent is a million, so you just 10x'd year on year. Do that again, do that again or more, right? If you increase the pacing, what is "do that again"?

01:28:38 Speaker_02
So like 2026, the number of H100 equivalents, you know, if you increase the globally produced flops by like 30x

01:28:46 Speaker_02
year-on-year, or 10X year-on-year, and the cluster size grows by three to five to 7X, and then you get multi-site going better and better and better, you can get to the point where multi-million chip clusters, even if they're regionally not connected right next to each other, are right there.

01:29:06 Speaker_02
And in terms of flops, it would be 1E what?

01:29:10 Speaker_06
1E28, 29?

01:29:10 Speaker_02
I think 1e30 is very possible, like in '28, '29.

01:29:14 Speaker_06
Wow. Yeah. And 1e30 you said by '28, '29? Yeah. And so that is literally five orders of magnitude. That's like 100,000 times more compute than GPT-4.

01:29:26 Speaker_02
Yes. The other thing to say is like the way you count flops on a training run is really stupid. You can't just do active parameters times tokens times six, right?

01:29:36 Speaker_02
That's really dumb because the paradigm, as you mentioned, and you've had many great podcasts on this, synthetic data and RL stuff, post-training, verifying data, and all these things generating and throwing it away, all sorts of stuff.

01:29:48 Speaker_02
Search, inference-time compute, all these things aren't counted in the training flops.

01:29:53 Speaker_02
So 1e30 is kind of a stupid number to say, because by then the actual flops of the pre-training may be X, but the compute to generate the data for the pre-training may be way bigger, or the search and inference-time compute may be way, way bigger, right?
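
The naive counting rule being pushed back on, plus the broader accounting being pointed toward, in a quick sketch; all of the example numbers below are illustrative placeholders, not estimates of any real model:

```python
# Naive training-FLOPs rule of thumb vs. a broader accounting that also counts
# synthetic-data generation and post-training. All numbers are illustrative.
def naive_training_flops(active_params: float, tokens: float) -> float:
    """The 'active parameters x tokens x 6' rule: ~6 FLOPs per active param per token."""
    return 6 * active_params * tokens

pretrain = naive_training_flops(active_params=1e12, tokens=5e13)   # assumed values
synthetic_data_gen = 3.0 * pretrain   # compute spent generating/verifying data (assumed multiple)
post_training = 0.5 * pretrain        # RL / post-training compute (assumed multiple)

total = pretrain + synthetic_data_gen + post_training
print(f"naive pre-training FLOPs: {pretrain:.1e}")
print(f"all-in FLOPs delivered to the model: {total:.1e}")
```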

01:30:07 Speaker_06
Right. But also, because you're doing the sort of adversarial synthetic data, where for the thing you're weakest at you can make synthetic data, it might be way more sample efficient.

01:30:17 Speaker_02
So the pre-training flops will kind of be irrelevant, right? I actually don't think pre-training flops will be 1e30. I think more reasonably, it'll be the total summation of the flops that you deliver to the model, right?

01:30:28 Speaker_02
Across pre-training, post-training, synthetic data for that pre-training data and post-training data, as well as some of the inference time compute efficiencies. It's more like 1E30, right? Interesting.

01:30:41 Speaker_06
So suppose you really do get to the world where it's worth investing. Okay, actually, if you're doing 1E30, is that like a trillion dollar cluster, a hundred billion dollar cluster?

01:30:51 Speaker_02
I think it'll be like multi-hundred billion dollars, but then like, it'll be like, I like truly believe people are gonna be able to use their prior generation clusters alongside their new generation clusters.

01:31:05 Speaker_02
And obviously like, you know, smaller batch sizes or whatever, right? Like, or use that to generate and verify data, all these sorts of things.

01:31:10 Speaker_06
And then for 1e30, right now, I think 5% of TSMC's N5 is NVIDIA or like whatever percent it is. By 2028, what percentage will it be?

01:31:23 Speaker_02
Um, again, this is like a question of like how skillful you are and how much money will flow into this and how you think progress works. Like, will models continue to get better or does the line like not? Does the line slope over?

01:31:34 Speaker_02
I believe it'll like continue to like skyrocket in terms of capability in that world.

01:31:39 Speaker_02
Why wouldn't, not of five nanometer, but of two nanometer, A16, A14, the nodes that'll be in that 2028 timeframe, used for AI, I could see like 60, 70, 80% of it.

01:31:50 Speaker_06
Like, yeah, no problem. Given the fabs that are currently planned and are currently being built, is that enough for the 1E30 or will more be needed? I think so, yeah. So then, like, the chip talk doesn't make any sense. Sorry.

01:32:02 Speaker_06
Like the chip talk about how we don't have enough compute doesn't make any sense.

01:32:05 Speaker_02
So no, I think, like, the plans of TSMC on two nanometer and such are quite aggressive for a reason, right? Like, to be clear: Apple, which has been TSMC's largest customer, does not need the amount of 2 nanometer capacity they're building.

01:32:21 Speaker_02
They will not need A16. They will not need A14, right? Like, you go down the list, it's like, Apple doesn't need this shit, right?

01:32:28 Speaker_02
Although they did just hire one of Google's heads of system design for TPU, so they are going to make an accelerator. But, you know, that's beside the point, an AI accelerator, but that's beside the point.

01:32:38 Speaker_02
Apple doesn't need this for their business, which they have been 25% or so of TSMC's business for a long time.

01:32:43 Speaker_02
And when you just zone in on just the leading edge, they've been like more than half of the newest node or 100% of the newest node almost constantly. That paradigm goes away. Right?

01:32:54 Speaker_02
If you believe in scaling and you believe the models get better, the new models will generate, you know, infinite, not infinite, but amazing productivity gains for the world, and so on and so forth.

01:33:04 Speaker_02
And if you believe in that world, then TSMC needs to act accordingly. And the amount of silicon that gets delivered needs to be there. So '25, '26, TSMC is definitely there. And then on a longer timescale, the industry can be ready for it.

01:33:19 Speaker_02
But it's going to be a constant game where you must convince them constantly that they must do this. It's not a simple game; if people just work silently, it's not going to happen, right?

01:33:30 Speaker_02
Like they have to see the demonstrated growth over and over and over and over again across the industry. And do you see it in investors or companies or who? More so like TSMC needs to see NVIDIA volumes continue to grow straight up, right?

01:33:44 Speaker_02
And oh, and Google's volumes continue to grow straight up and go down the list. Chips in the near term, right, next year, for example, are less of a constraint than data centers, right? And likewise for 2026.

01:33:56 Speaker_02
The question for '27, '28 is like, you know, always when you grow super rapidly, people want to say, that's the one bottleneck, because that's the convenient thing to say. And in 2023, there was a convenient bottleneck, CoWoS, right?

01:34:13 Speaker_02
The picture's gotten much, much cloudier, not cloudier, but we can see that, you know, HBM is a limiter too, CoWoS is as well, CoWoS-L especially, right? Data centers, transformers, substations, power generation, batteries, UPSs,

01:34:28 Speaker_02
CRAHs, water cooling stuff, all of this stuff is now a limitation next year and the year after. Fabs are in '26, '27, right?

01:34:34 Speaker_02
Like, you know, things will get cloudy, because the moment you unlock one, oh, it's only 10% higher, the next one is the thing. And only 20% higher, the next one is the thing. So today, data centers are like four to 5% of total US power.

01:34:48 Speaker_02
When you think about like as a percentage of U.S. power, that's not that much, but when you think U.S.

01:34:51 Speaker_02
power has been flat and now you're ramping, but then on the flip side, you're like, oh, all this coal has been curtailed, there are so many different things.

01:34:58 Speaker_02
So like power is not that crazy on a like, on a national basis, on a localized basis it is because it's about the delivery of it. Same with the substation transformer supply chains, right?

01:35:08 Speaker_02
It's like, these companies have operated in an environment where US power demand has been flat or even slightly down, right?

01:35:14 Speaker_02
And it's kind of been like that because of efficiency gains. So anyway, there has been a humongous weakening of that industry. But now, all of a sudden, if you tell that industry, your business will triple next year,

01:35:28 Speaker_02
If you can produce more, oh, but I can only produce 50% more. Okay, fine. A year after that, now we can produce 3X as much, right?

01:35:34 Speaker_02
You do that to the industry, and the US industrial base, as well as the Japanese, as well as all across the world, can get revitalized much faster than people realize, right? I truly believe that people can innovate when given the need to.

01:35:49 Speaker_02
It's one thing if it's like, this is a shitty industry where my margins are low and we're not really growing and, you know, blah, blah, blah, to all of a sudden, oh, this is the sexiest...

01:36:00 Speaker_02
I'm in power and I'm like, this is the sexiest time to be alive.

01:36:03 Speaker_02
And like, we're, we're going to do all these different plans and projects and people have all this demand and they're like begging me for another percent of efficiency advantage because that gives them another percent to deliver to the chips.

01:36:12 Speaker_02
Like all these things, where it's 10% or whatever it is, you see all these things happen, and innovation is unlocked.

01:36:18 Speaker_02
And, you know, you also bring in like AI tools, you bring in like all these things, innovation will be unlocked, production capacity can grow, not overnight, but it will on six months, 18 months, three-year time scales, it will grow rapidly.

01:36:32 Speaker_02
and you see the revitalization of these industries.

01:36:34 Speaker_02
So, but I think like getting people to understand that, getting people to believe, because, you know, if we pivot to like, yeah, I'm telling you that Sam's going to raise 50 to a hundred billion dollars because he's telling people he's going to raise this much, right?

01:36:45 Speaker_02
Like literally having discussions with sovereigns and like Saudi Arabia and like the Canadian pension fund and like, not these specific people, but like the biggest investors in the world.

01:36:56 Speaker_02
And of course Microsoft as well, but he's literally having these discussions because they're going to drop their next model, or they're going to show it off to people, and raise that money. Because this is their plan.

01:37:05 Speaker_06
If these sites are already planned and the money is not there, right? So how do you plan? How do you plan a site without the money today?

01:37:11 Speaker_02
Microsoft is taking on immense credit risk, right? Like they've signed these deals with all these companies to do this stuff. But Microsoft doesn't have, I mean, they could pay for it, right? Microsoft could pay for it on the current timescale, right?

01:37:25 Speaker_02
Their CapEx going from $50 billion to $80 billion direct CapEx, and then another 20 billion across Oracle, CoreWeave, and then another 10 billion across their data center partners. They can afford that, right, for next year, right?

01:37:41 Speaker_02
That doesn't, you know, like, this is because Microsoft truly believes in OpenAI. They may have doubts like, holy shit, we're taking a lot of credit risk.

01:37:47 Speaker_02
You know, obviously, they have to message Wall Street, all these things, but they are not like, that's like affordable for them because they believe they're a great partner to OpenAI, that they'll take on all this credit risk.

01:37:57 Speaker_02
Now, obviously OpenAI has to deliver. They have to make the next model, right? That's way better. And they also have to raise the money. And I think they will, right?

01:38:03 Speaker_02
I truly believe, from how amazing 4o is, how small it is relative to 4, the cost of it is so insanely cheap, it's much cheaper than the API prices lead you to believe. And you're like, oh, what if you just make a big one?

01:38:15 Speaker_02
It's very clear to me what's going to happen on the next jump, that they can then raise this money, and they can raise this capital from the world. This is intense, don't worry. It's very intense.

01:38:26 Speaker_06
John, actually, if he's right, or I don't know, not him, but in general, if the capabilities are there, the revenue is there. Revenue doesn't matter. Revenue matters.

01:38:37 Speaker_06
Is there any part of that picture that still seems wrong to you in terms of like displacing so much of TSMC production wafers and like power and so forth? Does any part of that seem wrong to you?

01:38:48 Speaker_00
I can only speak to the semiconductor part, even though I'm not an expert, but I think the thing is, TSMC can do it. Like, they'll do it. I just wonder. He's right in the sense that '24, '25, that's covered.

01:39:00 Speaker_00
But '26, '27, that's the point where you have to say, can the semiconductor industry and the rest of the industry be convinced that this is where the money is? And that means: is there money? Is there money by '24, '25?

01:39:14 Speaker_06
How much revenue do you think the AI industry as a whole needs by '25 in order to keep scaling? Doesn't matter. Compared to smartphones. Compared to smartphones. But I know he says it doesn't matter.

01:39:23 Speaker_00
I'll get to it.

01:39:24 Speaker_06
You keep, I know. Hey, what are smartphones? Like, Apple's revenue is like 200-something billion dollars, so, like.

01:39:28 Speaker_00
Yeah, it needs to be another smartphone size opportunity, right? Like even the smartphone industry didn't drive this sort of growth. Like it's kind of crazy, don't you think? So today, so far, right?

01:39:37 Speaker_00
The only thing I can really perceive, yeah, girlfriend. But like, but you know what I mean.

01:39:43 Speaker_02
It's not there. No, I want a real one, Devin. So, like, a few things, right? The return on invested capital for all of the big tech firms is up since 2022. And therefore, it's clear as day that them investing in AI has been fruitful so far, right?

01:40:02 Speaker_02
For the big tech firms. Return on invested capital. Like financially, you look at Meta's, you look at Microsoft's, you look at Amazon's, you look at Google's. The return on invested capital is up since 2022, so it's- On AI in particular?

01:40:16 Speaker_02
No, just generally as a company. Now obviously there's other factors here, like what is Meta's ad efficiency? How much of that is AI, right? Super messy. That's a super messy- Super messy thing. But here's the other thing. This is Pascal's wager, right?

01:40:25 Speaker_02
This is a matrix of like, do you believe in God? Yes or no? If you believe in God, yes or no, like hell or heaven, right? So if you believe in God and God's real and you go to heaven, that's great, that's fine, whatever.

01:40:37 Speaker_02
If you don't believe in God and God is real, then you're going to hell.

01:40:41 Speaker_04
This is the deep technical analysis you'll subscribe to Semianalysis for.

01:40:47 Speaker_00
Can you imagine what happens to the stock if Satya starts talking about Pascal's wager?

01:40:52 Speaker_02
No, no, but this is psychologically what's happening, right? This is a, if I don't, and Satya said it on his earnings call, the risk of under-investing is worse than the risk of over-investing. He has said this word for word, this is Pascal's wager.

01:41:04 Speaker_02
This is, I must believe I am AGI-pilled, because if I'm not and my competitor does it, I'm absolutely fucked. Okay, other than Zuck. No, no, no. Sundar said this on the earnings call.

01:41:15 Speaker_02
So Zuck said it, Sundar said it, Satya's actions on credit risk for Microsoft do it. He's very good at PR and messaging, so he hasn't said it so openly, right? Sam believes it, Dario believes it. You look across these tech titans, they believe it.

01:41:29 Speaker_02
And then you look at the capital holders. The UAE believes it. Saudi believes it. How do you know the UAE and Saudi believe it? Blackstone believes it.

01:41:36 Speaker_02
Like, all these major companies and capital holders also believe it because they're putting their money here. But, but that's like, how can, like, it won't last. It can't last unless there's money coming in somewhere. Correct, correct.

01:41:47 Speaker_02
But then the question is, The simple truth is, like, GPT-4 costs like $500 million to train. I agree. And it has generated billions in reoccurring revenue.

01:41:57 Speaker_02
But in that meantime, OpenAI raised $10 billion or $13 billion and is building a model that costs that much, effectively, right? Right. And so then, obviously, they're not making money. So what happens when they do it again? They release and show GPT-5.

01:42:13 Speaker_02
with whatever capabilities that make everyone in the world like, holy fuck, obviously the revenue takes time after you release the model to show up.

01:42:20 Speaker_02
You still have only a few billion dollars or, you know, five billion dollars of revenue run rate, you just raise 50 to 100 billion dollars because everyone sees this like, holy fuck, this is gonna generate tens of billions of revenue.

01:42:30 Speaker_02
But that tens of billions takes time to flow in, right? It's not an immediate click. But the time where Sam can convince, and not just Sam, but like people's decisions to spend the money are being made, are then, right?

01:42:41 Speaker_02
So therefore, you look at the data centers people are building, you don't have to spend most of the money to build the data center.

01:42:45 Speaker_02
Most of the money is the chips, but you're already committed to like, oh, I'm just gonna have so much data center capacity by 2027 or 2026 that it's, I'm never gonna need to build a data center again for like three, four, five years if AI is not real, right?

01:42:57 Speaker_02
That's basically what all their actions are. Or, I can spend over $100 billion on chips in '26. And I can spend over $100 billion on chips in '27. Right.

01:43:06 Speaker_02
So these are the actions people are taking, and the lag on revenue versus when you spend the money or raise the money, raise the money, spend the money, build, you know, there's a lag on this.

01:43:16 Speaker_02
So this is like, you don't necessarily need the revenue in 2025 to support this. You don't need the revenue in 2026 to support this.

01:43:24 Speaker_02
You need the revenue in '25/'26 to support the $10 billion that OpenAI spent in '23, or Microsoft spent in '23 slash early '24, to build the cluster, on which they then trained the model in early '24, mid-'24, which they then released at the end of '24, which then started generating revenue in '25/'26.
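
A toy timeline makes that spend-to-revenue lag concrete; the years loosely mirror the ones in the conversation, but the dollar figures are placeholders, not real financials for any company:

```python
# Toy illustration of the spend-to-revenue lag being described.
# Dollar amounts are placeholder assumptions, not actual OpenAI or Microsoft figures.

timeline = [
    ("2023",      "raise + build cluster",  -10e9),   # capex committed up front
    ("2024 (H1)", "train the model",         -2e9),   # incremental training cost
    ("2024 (H2)", "release the model",        0.0),
    ("2025",      "revenue begins to ramp",  +5e9),
    ("2026",      "revenue at scale",       +15e9),
]

cumulative = 0.0
for year, event, cash in timeline:
    cumulative += cash
    print(f"{year:>10} | {event:<24} | cash {cash / 1e9:+6.1f}B | cumulative {cumulative / 1e9:+6.1f}B")
# The money is judged against revenue that only shows up two-plus years after the spend.
```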

01:43:40 Speaker_00
I mean, like, the only thing I can say is that you look at a chart with three points on a graph, GPT-1, 2, 3, and then you're like,

01:43:48 Speaker_06
And even that graph is like, the investment you have to make in GPT-4 over GPT-3 is 100x; the investment you have to make in GPT-5 over GPT-4 is 100x. So, like, currently the ROI could be positive, and this very well could be true, I think it will be true.

01:44:03 Speaker_06
But like, the revenue has to like increase exponentially, not just like,

01:44:09 Speaker_02
Yeah, of course, of course.

01:44:10 Speaker_00
I agree with you, but I also agree with Dylan that it can be achieved. ROI, like, TSMC does this: invest $16 billion and expect the ROI later, right? I understand that, that's fine. Lag, all that.

01:44:23 Speaker_00
The thing is, though, that GPT-5 is not here. It's all dependent on GPT-5 being good. If GPT-5 sucks, if GPT-5 looks like it doesn't blow people's socks off, this is all void.

01:44:38 Speaker_02
What kind of socks are you wearing, bro? Show them.

01:44:41 Speaker_00
AWS. Show them. AWS. Show them. AWS. GPT-5 is not here. It's late. We don't know. I don't think it's late.

01:44:49 Speaker_06
I think it's late. I want to zoom out and go back to the end-of-the-decade picture again. So if this picture... we've already lost John.

01:44:59 Speaker_00
We've already accepted GPT-5 would be good, hello?

01:45:02 Speaker_05
But yeah, you got it, you know?

01:45:03 Speaker_02
Yeah, you got it. Bro, like, life is so much more fun when you just, like, are delusionally, like, you know?

01:45:10 Speaker_00
We're just ripping bong hits, hey, aren't we?

01:45:14 Speaker_02
You feel the AGI, you feel your soul. This is why I don't live in San Francisco. I have tremendous belief in, like, GPT-5. Why? Because of, like, what we've seen already. I think the public signs all show that this is very much the case, right?

01:45:29 Speaker_02
What we see with... And beyond that is more questionable, and I'm not sure, because I don't know, right? I don't know. We'll see how much they progress.

01:45:39 Speaker_02
But if things continue to improve, life continues to radically get reshaped for many people, it's also like every time you increment up the intelligence,

01:45:50 Speaker_02
The amount of usage of it grows hugely every time you increment the cost down of that amount of intelligence. The amount of usage increases massively. As you continue to push that curve out, that's what really matters, right?

01:46:04 Speaker_02
And it doesn't need to be today. It doesn't need to be a revenue-versus-CapEx comparison at any time in the next few years; it just needs to be, did that last humongous chunk of capex make sense for OpenAI or whoever the leader was?

01:46:17 Speaker_02
And then how does that flow through, right? Or, were they able to convince enough people that they can raise this much money? Right? Like, you think Elon's tapped out of his network with raising $6 billion? No.

01:46:27 Speaker_02
XAI is going to be able to raise 30 plus, right? Easily, right? I think so. You think Sam's tapped out? You think Anthropic's tapped out? Anthropic's barely even diluted the company relatively, right?

01:46:37 Speaker_02
Like, you know, there's a lot of capital to be raised just from, call it FOMO if you want, but during the dot-com bubble, private industry blew through like $150 billion a year. We're nowhere close to that yet.

01:46:52 Speaker_02
We're not even close to the dot-com bubble, right? Why would this bubble not be bigger, right?

01:46:56 Speaker_02
And if you go back to the prior bubbles, PC bubble, semiconductor bubble, mechatronics bubble throughout the U.S., each bubble was smaller, you know, you call it a bubble or not, why wouldn't this one be bigger?

01:47:05 Speaker_02
How many billions of dollars a year is this bubble right now? For private capital? Yeah. It's like 55, 60 billion so far for this year. It can go much higher, right? And I think it will next year. Okay, so let me think. Need another bong rip.

01:47:26 Speaker_02
You know, at least like finishing up and looping into the next question was like, you know, prior bubbles also didn't have the most profitable companies that humanity has ever created investing, and they were debt financed.

01:47:35 Speaker_02
This is not debt financed yet, right? So that's the last like little point on that one. Whereas the 90s bubble was like very debt financed.

01:47:41 Speaker_00
This is like cash flow. It was disastrous for those companies.

01:47:43 Speaker_02
Yeah, sure, but so much was built, right? You got to blow a bubble to get real stuff to be built.

01:47:49 Speaker_06
It is an interesting analogy where, even though the dotcom bubble obviously burst and a lot of companies went bankrupt, they in fact did lay out the infrastructure that enabled the web and everything.

01:47:58 Speaker_06
So you could imagine in AI, a lot of the foundation model companies, or whatever, a bunch of companies will go bankrupt, but they will... You couldn't argue with the singularity.

01:48:09 Speaker_00
During the 1990s, at the turn of the 1990s, there was an immense amount of money invested in MEMS and optical technologies because everyone expected the fiber bubble to continue, right? That all ended in 2003, right? And that started in '94?

01:48:23 Speaker_00
Hasn't been a revitalization since, right? Like that's...

01:48:26 Speaker_02
You could risk the possibility of a- Lumen, one of the companies that's doing the fiber build-out for Microsoft, the stock fucking 4x'd last month, or this month. And then how's it done from 2002 to 2024? Oh no, horrible, horrible.

01:48:37 Speaker_02
But, like, we're going to rip, baby. You could- Rip that bong, baby. You could see AI do that for another two decades.

01:48:44 Speaker_02
Sure, sure, possible. Or people can see a badass demo from a GPT-5 release and raise a fuckload of money. It could even be like a Devin-like demo, right, where it's like complete bullshit, but like it's fine, right? Like, should I care?

01:49:04 Speaker_02
You know, the capital is going to flow in, right? Now, whether it deflates or not is an irrelevant concern in the near term, because you operate in a world where it is happening.

01:49:15 Speaker_02
And being, you know, what is the Warren Buffett quote, which is like, you can be, I don't even know if it's Warren Buffett.

01:49:21 Speaker_00
You don't know who's going to be naked until the tide goes out?

01:49:24 Speaker_02
No, no, no. The one about, like, the market can stay delusional far longer than you can remain solvent, or something like that.

01:49:30 Speaker_00
That's not Buffett.

01:49:30 Speaker_02
That's not Buffett?

01:49:31 Speaker_00
Yeah, yeah.

01:49:31 Speaker_02
That's John Maynard Keynes. Oh shit, that's that old? Yeah. Okay. Um, okay. So Keynes said it, right? It's like you can be, yeah. So this is the world you're operating in. Like, it doesn't matter, right?

01:49:43 Speaker_02
Like what, what exactly happens or will be ebbs and flows, but like that's the world you're operating in.

01:49:47 Speaker_00
Um, I reckon that if an AI bubble, if the AI bubble pops, each one of these CEOs loses their job.

01:49:54 Speaker_02
Sure. Or if you don't invest and you lose, it's a Pascalian wager, and, uh, that's much worse. Across decades, the largest company at the end of each decade, like the largest companies, that list changes a lot.

01:50:06 Speaker_02
And these companies are the most profitable companies ever. Are they going to let that list, are they going to let themselves like lose it or are they going to go for it?

01:50:13 Speaker_02
They have one shot, one opportunity, you know, to make themselves into, you know, the whole Eminem song, right?

01:50:19 Speaker_06
I want to hear the story of how both of you started your businesses, or the thing you're doing now. John, like, how did it begin? What were you doing? But when you started the textile company?

01:50:35 Speaker_01
No way, please, please. Okay, I guess if he doesn't want to, I'll talk about it later. Okay, sure.

01:50:41 Speaker_00
I think like I used to, I mean, the story's famous. I've told it a million times. It's like Asianometry started off as a tourist channel.

01:50:47 Speaker_04
Yeah.

01:50:47 Speaker_00
So I would go around kind of like, I was, I moved to Taiwan for work and then- Doing what? I was, I was working in cameras.

01:50:56 Speaker_03
And then like I told- What was the other company you started?

01:51:02 Speaker_00
It tells too much about me. Oh, come on. I worked in cameras and then basically I went to Japan with my mom and mom was like, hey, you know, what are you doing in Taiwan? I don't know what you're doing.

01:51:14 Speaker_00
I was like, all right, mom, I will go back to Taiwan and I'll make stuff for you. And I made videos. I would like go to the Chiang Kai-shek Park and be like, hi mom, this park was this, this.

01:51:24 Speaker_00
Eventually at some point you run out of stuff, but then it's a pretty smooth transition from that into, you know, Chinese history, Taiwanese history. And then people started calling me Chinanometry.

01:51:35 Speaker_00
I didn't like that, so I moved to other parts of Asia. And now, and then- So what year did you like start, like what year was like people started watching your videos? Let's say like a thousand views per video or something.

01:51:46 Speaker_00
Oh my gosh, that was not, I started the channel in 2017 and it wasn't until like 2018 that, 2019 that it actually, I labored on for like three years, first three years with like no one watching.

01:51:57 Speaker_00
Like I had got like 200 views and I'd be like, oh, this is great.

01:52:00 Speaker_06
And then, were the videos basically like the ones you have now? But, sorry, backing up for the audience who might not know.

01:52:05 Speaker_06
I imagine basically everybody knows Asianometry, but if you don't: it's the most popular channel about semiconductors, Asian business history, business history in general, even geopolitics, history, and so forth. And

01:52:20 Speaker_06
Yeah, I mean, it's like, honestly, I've done like research for like different AI guests and different, like whatever thing I'm trying to be. I'm trying to understand like, how does hardware work? How does AI work?

01:52:30 Speaker_06
It's like, this is like my- How does a zipper work?

01:52:32 Speaker_02
Did you watch that video? No, I haven't watched that one. It was like, I think it was a span of three videos. It was like, Russian oil industry in the 1980s and how it like funded everything. And then when it collapsed, they were absolutely fucked. Yeah.

01:52:42 Speaker_02
And then it was like, the next video was like, the zipper monopoly in Japan. Not a monopoly. Not a monopoly.

01:52:48 Speaker_00
Yeah. Strong, strong holding in the mid-tier size. There's like the luxury zipper makers. Asianometry is always just kind of stuff I'm interested in. And I'm interested in a whole bunch of different stuff.

01:52:59 Speaker_00
And I like, like, and then the channel, for some reason, people started watching the stuff I do. And I still have no idea why. To be honest, I still feel like it's, I still feel like a fraud. I sit in front of like Dylan and he's, I feel like a fraud.

01:53:13 Speaker_00
legit fraud, especially when you start talking about 60,000 wafers and all that.

01:53:16 Speaker_00
I'm just like, I feel like I should, you know, I should know this. But, you know, in the end, I just try my best to kind of bring interesting stories out.

01:53:26 Speaker_06
How do you make a video every single week? Cause these are like two a week. You know how long he had a full-time job?

01:53:32 Speaker_02
Five years, six years. Or sorry, a textile business.

01:53:35 Speaker_00
And a, yes.

01:53:36 Speaker_02
And a full-time job. Wait, no, full-time job, textile business, and Asianometry until like for a long, long time. Yeah.

01:53:41 Speaker_00
I literally just gave up the textile business this year.

01:53:44 Speaker_06
And like, how are you doing research and doing like making a video and like twice a week? I don't know. I like do these fucking, I'm like fucking talking. This is all I do. And I like do these like once every two weeks.

01:53:54 Speaker_02
See, the difference is Dwarkesh, you go to SF Bay Area parties constantly, and Dwarkesh is, I mean, John is like locked in. He's like locked in 24-7.

01:54:03 Speaker_05
He's got like the TSMC work ethic, and I've got like the Intel work ethic. If I don't, I got the Huawei ethic.

01:54:10 Speaker_00
If I do not finish this video, my family will be pillaged.

01:54:15 Speaker_02
He actually gets really stressed about it, I think, like not doing something like on his schedule, yeah.

01:54:21 Speaker_00
It's very much like, I do two videos per week. I write them both simultaneously.

01:54:26 Speaker_06
And how are you scouting out future topics you want to do? You just pick up random articles, books, whatever, and then you just, if you find it interesting, you make a video about it?

01:54:34 Speaker_00
Sometimes what I'll do is I'll Google a country, I'll Google an industry, and I'll Google what a country is exporting now and what it used to export, and I compare that and I say, that's my video.

01:54:44 Speaker_00
Or I'll be like, but then sometimes it's also just as simple as, I should do a video about YKK. The zipper is nice. I should do a video about it. I do, I do.

01:54:55 Speaker_06
It literally is. Do you like keep a list of like, here's the next one. Here's the one after that.

01:55:00 Speaker_00
I have a long list of like ideas. Sometimes it's as vague as like Japanese whiskey. No idea what Japanese whiskey is about. I heard about it before I watched that movie. And then so I was just like, okay, I should do a video about that.

01:55:15 Speaker_06
And then eventually, you know, you get to, you get- How many research topics do you have on the back burner, basically? Like, you're like, I'm just kind of reading about it constantly, and then in a month or so, I'll make a video about it.

01:55:24 Speaker_00
I just finished a video about how IBM lost the PC. So right now, I'm unstressing about that. But then I'll kind of move right on to, like the videos do kind of lead into others. Like right now, this one is about IBM PC, how IBM lost the PC.

01:55:39 Speaker_00
Now the next one is how Compaq collapsed, how the wave destroyed Compaq. So technically, I'll do that. At the same time, I'm dual-lining a video about

01:55:48 Speaker_00
qubits, I'm dual-lining a video about directed self-assembly for semiconductor manufacturing, which I'll read a lot of Dylan's work for. But then, like, a lot of that is kind of, it's just in the back of my head.

01:56:03 Speaker_00
And I'm like, producing it as I as I go.

01:56:06 Speaker_06
Dylan, how do you work? How does one go from Reddit shitposter to running a semiconductor research and consulting firm? Let's start with the shitposting.

01:56:16 Speaker_02
It's a long line, right? So immigrant parents grew up in rural Georgia. So when I was seven, I begged for an Xbox. And when I was eight, I got it. 360, right? They had a manufacturing defect called the Red Ring of Death. There were a variety of fixes.

01:56:30 Speaker_02
I tried them, like putting a wet towel around the Xbox, something called the Penny Trick. Those all didn't work. My Xbox still didn't work. My cousin was coming the next weekend, and he's two years older than me. I look up to him.

01:56:41 Speaker_02
He's in between my brother and I, but I'm like, oh, no, no. We're friends. You don't like my brother as much as you like me. My brother's more of a jock type, so it didn't matter. Like, he didn't really care that the Xbox was broken.

01:56:53 Speaker_02
He's like you better fix it though, right?

01:56:55 Speaker_02
Otherwise the parents would be pissed. So I figured out how to fix it online. I tried a variety of fixes and ended up shorting the temperature sensor, and that worked for long enough, until Microsoft did the recall, right?

01:57:05 Speaker_02
But in that, you know, I learned how to do it out of necessity on the forums. I was a nerdy kid, so I liked games, but whatever. But then there was no other outlet once I was like, holy shit, this is Pandora's box, what just got opened up?

01:57:17 Speaker_02
So then I just shitposted on the forums constantly, right? And for many, many years. And then I ended up moderating all sorts of subreddits when I was a tween and teenager.

01:57:29 Speaker_02
And then as soon as I started making money, grew up in a family business but didn't get paid for working, of course, like yourself, right? But as soon as I started making money, and I got my internship, and I was like 18, 19, right?

01:57:41 Speaker_02
I started making money. I started investing in semiconductors, right? I was like, of course, this is shit I like, right? Everything from, and by the way, the whole way through, as technology progressed, especially mobile, right?

01:57:53 Speaker_02
It goes from very shitty chips in phones to very advanced. Every generation, they'd add something, and I'd read every comment. I'd read every technical post about it.

01:58:03 Speaker_02
And also all the history around that technology and then like, you know, who's in the supply chain and just kept building and building and building. Went to college, did data science-y type stuff.

01:58:12 Speaker_02
Went to work on like hurricane, earthquake, wildfire simulation and stuff for a financial company. But before that, like, but during college, I was still like, I wasn't shit posting on the internet as much.

01:58:21 Speaker_02
I was still posting some, but I was like following the stocks and all these sorts of things, the supply chain, all the way from like the tool equipment companies. And the reason I like those is because like, oh, this technology, oh, it's made by them.

01:58:31 Speaker_02
You know, you kind of- Did you have like friends in person who were into this shit or was it just online? I made friends on the internet, right? Oh, that's dangerous.

01:58:40 Speaker_02
No, I've only ever had like literally one bad experience and that was just because he's drugged out, right? Like... One bad experience online or... Like meeting someone from the internet in person. Everyone else has been genuine.

01:58:52 Speaker_02
Like you have enough filtering before that point. You're like, you know, even if they're like hyper mega like autistic, it's cool, right? Like I am too, right? You know? No, I'm just kidding. But like, you know, you go through like the...

01:59:04 Speaker_02
you know, the layers and you look at the economic angle, you look at the technical angle, you read a bunch of books just out of like, you know, you can just buy engineering textbooks, right? And read them, right?

01:59:13 Speaker_02
Like, what's, what's, what's stopping you, right? And if you bang your head against the wall, you learn it, right?

01:59:18 Speaker_06
And then while you were doing this, was there like, did you expect to work on this at some point? Or was it just like, pure interest?

01:59:23 Speaker_02
No, it was like, it was like, obsessive hobby of many years, and it pivoted all around, right?

01:59:28 Speaker_02
At some point, I really liked gaming, and then I moved into, I really liked phones, and rooting them, and underclocking them, and the chips there, and screens, and cameras, and then back to gaming, and then to data center stuff, because that was where the most advanced stuff was happening.

01:59:44 Speaker_02
I liked all sorts of telecom stuff for a little bit. It bounced all around, but generally in computing hardware, right? I did data science. Said I did AI when I interviewed, but it was bullshit, multivariable regression, whatever, right?

02:00:01 Speaker_02
It was simulations of hurricanes, earthquakes, wildfire, for financial reasons, right? Anyways, I moved up to, I had a job for three years after college, and I was posting, and whatever. I had a blog, anonymous blog for a long time.

02:00:15 Speaker_02
I'd even made some YouTube videos and stuff. Most of that stuff is scrubbed off the Internet, including the Internet Archive, because I asked them to remove it. In 2020, I quit my job and started posting more seriously on the internet.

02:00:31 Speaker_02
I moved out of my apartment and started traveling through the U.S. and I went to all the national parks in my truck slash tent slash, also stayed in hotels and motels three or four days a week. But I started posting more frequently on the internet.

02:00:44 Speaker_02
And I'd already had some small consulting arrangements in the past, but it really started to pick up in mid-2020, consulting arrangements from the internet, from my persona. Like what kinds of people? Investors? Hardware companies?

02:00:57 Speaker_02
There were like, it was like people who weren't in hardware that wanted to know about hardware. It would be like some investors, right? Some couple VCs did it, but some public market folks.

02:01:07 Speaker_02
You know, there was times where like companies would ask about like three layers up in the stack, like me, because they saw me write some random posts and like, hey, like, can we blah, blah, blah.

02:01:15 Speaker_02
There's all sorts of like random, it was really small money. And then in 2020, like it really picked up and I just like, I was like, why don't I just arbitrarily make the price way higher? And it worked.

02:01:25 Speaker_02
And then I started posting more; I made a newsletter as well. And I kept posting, and the quality kept getting better, right?

02:01:34 Speaker_02
Because people read it and they're like, this is fucking retarded, like, you know, here's what's actually right, or, you know, over more than a decade, right? And then in 2021, towards the end,

02:01:44 Speaker_02
I made a paid post because someone didn't pay for a report or whatever. I went to sleep that night. It was about photoresist and the developments in that industry, which is the stuff you put on top of the wafer before you put it in the lithography tool.

02:01:59 Speaker_02
Did great. I woke up the next day and I had 40 paid subscriptions. I was like, what? okay, let's keep going, right?

02:02:05 Speaker_02
Let's post more paid sort of like partially free, partially paid, did like all sorts of stuff on like advanced packaging and chips and data center stuff and like AI chips, like all sorts of stuff, right?

02:02:15 Speaker_02
That I like was interested in and thought was interesting. And like, I always bridged economically because I read all the company's earnings for like, you know, since I was 18, I'm 28 now, right?

02:02:25 Speaker_02
You know, all the way through to like, you know, all the technical stuff that I could. 2022 I also started to just go to every conference I could, right?

02:02:33 Speaker_02
So I go to like 40 conferences a year, not trade-show-type conferences, but technical conferences, like a chip architecture one, photoresist, you know, AI, NeurIPS, right? Like, you know, ICML. You go to, like, 40 a year.

02:02:49 Speaker_02
So you like live at conferences. Yes. Yeah. I mean, I've been a digital nomad since 2020 and I've basically stopped and I moved to SF now. Right. But like kind of kind of not really. You can't say that the government, the California government.

02:03:01 Speaker_01
I don't live at SF, come on. But I basically do now. California Internal Revenue Service. Do not joke about this, guys.

02:03:09 Speaker_02
They're gonna send you a clip of this podcast, like, "I am in San Francisco, like, sub four months a year contiguously over the full course of the year." But no, like, you know, go to every conference, make connections at all these very technical things, like the International Electron Devices Meeting, or lithography and advanced patterning,

02:03:38 Speaker_02
Or, like, the very-large-scale integration, you know, the old circuits conference. You just go to every single layer of the stack. It's so siloed. There are tens of millions of people that work in this industry.

02:03:50 Speaker_02
But if you go to every single one, you try and understand the presentations, you do the required reading, you look at the economics of it, you like are just curious and want to learn.

02:03:59 Speaker_02
you can start to build up more and more, and the content got better. And, like, you know, what I followed got better, and then I started hiring people in 2020 and early 2022 as well.

02:04:08 Speaker_02
Or it might have been, yeah, like mid-2022 that I started hiring; got people in different layers of the stack. But now, today, you fast forward to now, today, right?

02:04:17 Speaker_02
Like, almost every hyperscaler is a customer not for the newsletter, but for like data we sell, right? You know, most many major semiconductor companies, many investors, right? Like all these people are like customers of the data and stuff we sell.

02:04:31 Speaker_02
And the company has people all the way from, like, ex-Cymer, ex-ASML, all the way to, like, ex-Microsoft and an AI company, right?

02:04:38 Speaker_02
Like, you know, and then through the stratification, you know, now there are 14 people here at the company, all across the US, Japan, Taiwan, Singapore, France. Yes, of course. Right.

02:04:50 Speaker_02
Like, you know, all over the world, and across many ranges, and hedge funds as well. Right, ex-hedge funds as well, right. So you kind of have this amalgamation of, you know, tech and finance expertise.

02:05:01 Speaker_02
And we just do the best work there, I think.

02:05:03 Speaker_00
Are you still talking about a monstrosity?

02:05:05 Speaker_02
An unholy concoction of it. So we have data analysis consulting, etc., for anyone who really wants to get deeper into this.

02:05:20 Speaker_02
We can talk about, oh, people are building big data centers, but how many chips are being made every quarter, of what kind, for each company? What are the subcomponents of these chips? What are the subcomponents of the servers?

02:05:33 Speaker_02
We try and track all of that.

02:05:34 Speaker_02
follow every server manufacturer, every component manufacturer, every cable manufacturer, just all the way down the stack, tool manufacturer, and know how much is being sold where, and how, and where things are, and project out, all the way out to, hey, where is every single data center?

02:05:49 Speaker_02
What is the pace that it's being built out? This is the sort of data we want to have and sell, and the validation is that hyperscalers purchase it, and they like it a lot. Right. And like AI companies do and like semiconductor companies do.

02:06:03 Speaker_02
So I think that's the sort of like how it got there to where it is, is just like try and do the best. Right. And try and be the best.

02:06:10 Speaker_06
If you were an entrepreneur who's like, I want to get involved in the hardware chain somewhere. Like what is like, what is, if you, if you could start a business today, somewhere in the stack, what would you pick?

02:06:22 Speaker_06
John, tell them about your textile business.

02:06:25 Speaker_00
I think I'd work in memory. Something in memory. Because I think if this concept is there, you have to hold immense amounts of memory. Immense amounts of memory. And I think memory is already tapped out technologically.

02:06:41 Speaker_00
HBM exists because of limitations in DRAM, I said it correctly, I think. Fundamentally, we've forgotten about it because it is a commodity, but we shouldn't. I think breaking memory open could change the world in that scenario.

02:06:58 Speaker_02
I think the context here is that Moore's Law was predicted in 1965. Intel was founded in 68 and released their first memory chips in 69 and 70. And so Moore's Law was, a lot of it was about memory.

02:07:11 Speaker_02
And the memory industry followed Moore's Law up until 2012, where it stopped, right? And it's been very incremental gains since then, whereas logic has continued, and people are like, oh, it's dying, it's slowing down.

02:07:21 Speaker_02
At least there's still a little bit coming, right? Still more than a 10%, 15% a year CAGR, right, of density slash cost improvement. Memory has literally been, since 2012, really bad.
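
To see why that gap matters, a quick compounding (CAGR) comparison helps; the rates below are assumptions in the spirit of the discussion, not measured industry data:

```python
# Quick illustration of how different annual improvement rates (CAGR) compound over
# roughly a decade. The rates here are placeholder assumptions, not actual figures.

def compound(cagr, years):
    return (1 + cagr) ** years

years = 12  # roughly 2012 -> 2024
logic_density_gain  = compound(0.15, years)   # assumed ~15%/yr for logic
memory_density_gain = compound(0.03, years)   # assumed low single digits for DRAM

print(f"Logic density/cost improvement over {years} years: ~{logic_density_gain:.1f}x")
print(f"DRAM density/cost improvement over {years} years:  ~{memory_density_gain:.1f}x")
# ~5.4x vs ~1.4x: why memory increasingly looks like the binding constraint.
```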

02:07:33 Speaker_02
And when you think about the cost of memory, it's been considered a commodity, but memory integration with accelerators, this is something that I don't know if you can be an entrepreneur here though.

02:07:44 Speaker_02
That's the real challenge, because you have to manufacture at some really absurdly large scale, or design something in an industry that does not allow you to make custom memory devices, or use materials that don't work that way.

02:07:56 Speaker_02
So there's a lot of like work there that I don't, so I don't necessarily agree with you, but I do agree it's like one of the most important things for people to invest in.

02:08:02 Speaker_02
You know, I think it's really about where you're good at, where you can vibe, where you can enjoy your work and be productive in society, right?

02:08:10 Speaker_02
Because there are a thousand different layers of the abstraction stack. Where can you make it more efficient? Where can you utilize AI to build better and make everything more efficient in the world and produce more bounty and, like,

02:08:24 Speaker_02
iterate the feedback loop, right? And there's more opportunity today than at any other time in human history, in my view, right? And so just go out there and try, right? What engages you? Because if you're interested in it, you'll work harder, right?

02:08:37 Speaker_02
If you have a passion for copper wires, I promise to God, if you make the best copper wires, you'll make a shitload of money. And if you have a passion for B2B SaaS, I promise to God you'll make fuckloads of money, right?

02:08:51 Speaker_02
I don't like B2B SaaS, but whatever, right? It's like, whatever.

02:08:54 Speaker_02
You know, whatever you have a passion for, like just work your ass off, try and innovate, bring AI into it and let it, you try and use AI yourself to like make yourself more efficient and make everything more efficient.

02:09:07 Speaker_02
And I promise you will like be successful, right?

02:09:10 Speaker_02
I think that's really the view: it's not necessarily that there's one specific spot, because every layer of the supply chain has it. You go to the conference or you go talk to the experts there, and it's like, dude, this is the stuff that's breaking and we could innovate in this way.

02:09:22 Speaker_02
Or, like, these five abstraction layers, we could innovate this way. Yeah, do it. There are so many layers where we're not at the Pareto optimum, right? There's so much more to go in terms of innovation and efficiency.

02:09:32 Speaker_06
All right. I think that's a great place to close. Dylan, John, thank you so much for coming on the podcast. I'll just give people the reminder: Dylan Patel, semi-analysis.com.

02:09:44 Speaker_06
That's where you can find the technical breakdowns that we've been discussing today. Asianometry YouTube channel. Everybody will already be aware of Asianometry, but anyways. Thanks so much for doing this. This was a lot of fun.

02:09:55 Speaker_01
Thank you. Yeah.

02:09:56 Speaker_06
Thank you.