How Diamond Cooling Could Power the Future of AI, with Akash Systems - transcript and summary - No Priors: Artificial Intelligence | Technology | Startups
Episode: How Diamond Cooling Could Power the Future of AI, with Akash Systems
Author: Conviction
Duration: 00:42:21
Episode Shownotes
In this episode of No Priors, Sarah sits down with Felix Ejeckam and Ty Mitchell, founders of Akash Systems, a company pioneering diamond-based cooling technology for semiconductors used in space applications and large-scale AI data centers. Felix and Ty discuss how their backgrounds in materials science led them to tackle one of the most pressing challenges in tech today: thermal efficiency and heat management at scale. They explore how Akash is overcoming the limitations of traditional semiconductors and how their innovations could significantly boost AI performance. Felix and Ty also talk about their collaboration with India's sovereign cloud provider, the importance of strengthening U.S. manufacturing in the AI chip market, and the role Akash Systems could play in advancing satellite technologies.
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AkashSystems | @FelixEjeckam
Show Notes:
0:00 Introduction
0:30 What is Akash Systems?
2:12 Felix's personal path to building Akash Systems
4:45 Ty's approach to acquiring customers
6:40 Challenges of operating in space
7:54 Live demo on diamond's conductivity
9:50 Heat issues in data centers
15:38 Heat as a fundamental limit to technological progress
20:44 Akash's role in the semiconductor market
22:54 Growing diamonds
25:10 Collaborating with India's sovereign cloud provider
28:15 Importance of American manufacturing for AI chips and outlook on current data capacity
29:45 The Chips Act
31:22 Future of national security lies in satellite and radar tech
32:46 Critical issues in the U.S. AI supply chain
36:34 Deep learning's role in material science discovery
40:16 The future: AI expanding our possibilities
Full Transcript
00:00:05 Speaker_02
Welcome to No Priors. Today, I'm chatting with Felix Ejeckam and Ty Mitchell, the founders of Akash Systems, which makes diamond-based cooling technology for computing platforms, from space satellites to AI data centers.
00:00:20 Speaker_02
Their innovation uses highly conductive diamond to help computers run cooler and faster while using less energy. Felix and Ty, welcome to No Priors.
00:00:28 Speaker_01
Good to be here. Thank you, Sarah.
00:00:30 Speaker_02
I think we should start with just a quick introduction as to what Akash is.
00:00:35 Speaker_01
Sure. Again, very good to be here with you, Sarah. Akash Systems, we are a venture-backed company based in the Bay Area that is starting from the ground up at the material science level.
00:00:47 Speaker_01
And we are using proprietary materials, specifically diamond that we grow in the lab, to make electronic systems that are disruptive in the world by an order of magnitude.
00:01:02 Speaker_01
That's in contrast to how, oftentimes, when we start companies, even in the hardware space, we tend to start by injecting ourselves in the middle of a supply chain.
00:01:12 Speaker_01
At Akash, as material scientists, we come in at the periodic table level and we start there to build up chips, boards, systems that ultimately change the lives of our society, whether you're in business or as a consumer. We do that in several ways.
00:01:32 Speaker_01
We change the structure of a basic material in the systems that we've chosen to affect. We started off in the space world, where we make some of the fastest satellite radios ever made by humans. Then we go over, as we are doing now, to AI,
00:01:50 Speaker_01
where we are able to make compute, a GPU, go faster than has ever been done before in this new space, or to reduce energy consumption in the data center by a significant amount, all because of innovative material science that we've pioneered at the ground floor.
00:02:13 Speaker_02
So maybe that's a good segue into how you got started working on this, because you've had this idea for a long time. As you said, you started on space applications earlier.
00:02:24 Speaker_02
Can you talk a little bit about your background and, you know, the original scientific idea and how you thought it would be applied?
00:02:29 Speaker_01
Sure. So my background is in material science and electrical engineering. I obtained a PhD in electrical engineering with a minor in material science and device physics from Cornell.
00:02:40 Speaker_01
And in my PhD, I focused on bringing together very dissimilar materials in such a way that 1 plus 1 equals 10. And, you know, for example, silicon, very well known ubiquitous material that's ushered in the current modern era that we have today.
00:03:02 Speaker_01
But then there are other materials, plastics, other types of semiconductors that don't actually do as well as silicon, but they have their own strengths.
00:03:11 Speaker_01
And so for my PhD, I looked at ways of trying to bring together, say, the optics world with electronics, silicon, and merging them together such that the overall system is incredibly powerful.
00:03:26 Speaker_01
I brought that philosophy to Akash when I started Akash with Ty in 2017, to try to do the same thing. I often found personally, actually, I think it's a very good metaphor
00:03:40 Speaker_01
for humans, how we interact together, when you bring different people that have different strengths, the combination can be incredibly powerful in ways that excel and exceed the simple summation of the parts.
00:03:55 Speaker_01
And that's exactly what we do at Akash, where we bring artificial diamond, well known as the most thermally conductive material ever to occur in nature.
00:04:07 Speaker_01
And then silicon or even gallium nitride, and quite frankly, any other semiconductor, when brought together, amazing things happen. I'm very happy and excited about doing that in the world of AI.
00:04:20 Speaker_01
We did that in space, where we have now made and launched the fastest radios ever made by man.
00:04:29 Speaker_01
And now with AI, we're able to achieve performance levels, whether in energy efficiency or compute speed, beyond anything obtained before, simply by using these artificial materials that we've created in the lab.
00:04:45 Speaker_02
Ty, are you the silicon or are you the diamond here?
00:04:50 Speaker_00
I'm actually a little bit of both. I'm the silicon carbide guy. My PhD was on silicon carbide.
00:04:55 Speaker_00
What that taught me when I went into the business world, working for a company, Cree Wolfspeed, is that Cree Wolfspeed developed very good silicon carbide materials-level technology.
00:05:07 Speaker_00
And applying this technology to any sort of system, like radar systems or power electronic systems for EVs or light-emitting diodes, one of the things you learn is that when you have a materials-level advancement, as the person who has that materials-level advancement, you really have to make the system
00:05:25 Speaker_00
to convince people that you have the solution. If you just go to someone who is making, let's say, a car, and you say, hey, I've got this great silicon carbide diode, a Schottky diode, or a MOSFET, they'll say, all right, great.
00:05:38 Speaker_00
I'm already using a silicon IGBT. If you meet their price, I'll put you in. And you say, look, if you put my part in, I can increase your range 200 miles. I can increase it 40%. They'll say, OK, yeah, sure.
00:05:53 Speaker_00
However, if you make the car, you find a partner to get your part into the car, or you make the box, you actually make the MOSFET, you make the module, the power module, the farther you go in the system,
00:06:07 Speaker_00
the better chance you have of convincing the customer that you have the solution.
00:06:10 Speaker_00
So that's the approach that we took at Akash, was even though we had this materials level technology, we would make the system and then go directly to the consumer or to the customer and be able to prove our technology out that way.
00:06:25 Speaker_00
And with AI, why did we go into AI? We were in space solving problems that were very difficult, actually more difficult than the AI problem we face today, because in space, you have a limited area.
00:06:39 Speaker_00
You don't have any fluids that you can use to cool because there's no airflow in space. And you have a bunch of other reliability and survivability requirements that are much more difficult to meet than in AI.
00:06:55 Speaker_00
So we thought, all right, in AI, heat is a very difficult problem they have to solve. It's growing, and nobody really has the right approach to solve the problem, and we think we can help.
00:07:09 Speaker_00
So that's what caused us to dive in.
00:07:12 Speaker_01
And just to give a couple of numbers to what Ty just said, comparing space to AI, the power densities that we cool in space are at the level of 10 to the 3 watts per square centimeter. So 4,000 to 5,000 watts per square centimeter.
00:07:28 Speaker_01
The chips that we cool on the ground, in AI, in a typical server, are a full order of magnitude less than that, a couple hundred watts per square centimeter.
00:07:39 Speaker_01
So that's what gave us the confidence that if we could address the problem in space, then absolutely, using the same technology, even backed off a little bit so we can rapidly ramp, if applied to a server, it would be a home run.
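As a rough back-of-the-envelope check on those figures, using only the numbers quoted above (nothing Akash-specific beyond what was said, and taking "a couple hundred" as roughly 200 to 300):

$$\frac{q''_{\text{space}}}{q''_{\text{server}}} \approx \frac{4{,}000\text{ to }5{,}000\ \mathrm{W/cm^2}}{\sim 200\text{ to }300\ \mathrm{W/cm^2}} \approx 15\text{ to }25\times,$$

so the space parts do run at least a full order of magnitude higher in heat flux than a typical AI server chip.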
00:07:53 Speaker_02
Yeah, I think you guys had a demo to just sort of explain the, you know, advantage in conductivity that diamonds have, specifically.
00:08:04 Speaker_01
We do. Thank you for making that very nice segue. What I'm going to show you is how diamonds can very effectively cool down, or rather melt, ice. So this is an ice cube that you're looking at here in my little video.
00:08:21 Speaker_01
This is diamond, a little piece of diamond. This is a diamond wafer that we grow in the lab. You can see the Akash name.
00:08:27 Speaker_01
And what I'm going to do is show you that heat from the ambient environment, and more specifically my fingers, my body temperature, will flow through this diamond rapidly into the ice cube and melt it as rapidly as I touch it. It'll be just like butter.
00:08:46 Speaker_01
I wish you could touch it yourself. It'll feel cold to the fingers. So I'm just going to wedge it here, and you will see. And there you have it going in. I'm feeling very cold right now. I don't know if you can see it, but there you go. Very cool.
00:09:03 Speaker_01
You can see it wedged in. You can cut ice. Yeah, you can cut ice with your fingers. I'm going to rotate it so you can see. And that's the feature. That's the property that we bring to bear with our chips, with the GPU.
00:09:20 Speaker_01
We're looking at reducing temperatures. Initially, we're starting off with 10 degrees, which is already worth millions for any data center that has a small number of servers.
00:09:31 Speaker_01
But we're looking at further reductions, 20, 30, 40 degrees, down the line over the next 12 months. In space, we've already reduced temperatures by 80 to 90 degrees.
00:09:41 Speaker_01
So the effects of this are quite significant, and the economic impact is far-reaching.
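To put a rough number on the conductivity being demonstrated, here is a minimal back-of-the-envelope sketch (not an Akash model) of steady-state one-dimensional conduction through a thin slab, q = k * A * dT / t; the slab geometry and temperature drop are assumed for illustration, and the conductivities are approximate textbook values.

```python
# Minimal sketch: steady-state 1-D conduction through a thin slab,
# q = k * A * dT / t. Geometry and temperature drop are assumed values;
# conductivities are approximate textbook figures, not Akash data.

MATERIALS_W_PER_M_K = {
    "CVD diamond": 2000.0,  # synthetic diamond is roughly 1000-2200 W/(m*K)
    "copper": 400.0,
    "silicon": 150.0,
}

AREA_M2 = 1e-4        # 1 cm^2 contact patch (assumed)
THICKNESS_M = 300e-6  # 300-micron slab (assumed)
DELTA_T_K = 10.0      # 10 C drop across the slab (assumed)

for name, k in MATERIALS_W_PER_M_K.items():
    heat_flow_w = k * AREA_M2 * DELTA_T_K / THICKNESS_M
    print(f"{name:12s}: ~{heat_flow_w:,.0f} W conducted through the slab")
```

The point of the comparison is only that, for the same geometry and temperature drop, diamond moves several times more heat than copper and roughly an order of magnitude more than silicon, which is why a thin diamond layer right at the die can matter.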
00:09:48 Speaker_02
Yeah, maybe this would be a good time to actually just talk a little bit about why heat dissipation is a problem at all for AI chips and AI GPU servers and data centers. Right. So, you know, if you imagine
00:10:03 Speaker_02
these chips in servers, in racks of servers, in big rows of them, in the data center. We have cooled large data centers for a long time with fans. How does this fit into the, I guess, alternative set of fans and liquid cooling?
00:10:17 Speaker_02
And how has large-scale AI training changed the game at all?
00:10:23 Speaker_00
So this technology that we have, this materials level technology using synthetic diamond, it actually fits with any other cooling technology that's used today.
00:10:33 Speaker_00
Today, the cooling techniques that are used are really at the data center level, where they do air containment, keeping the air from mixing. Then it's at the rack level, where you are using liquid cooling,
00:10:49 Speaker_00
CDUs and manifolds and pumping liquids right to the devices to keep them cool, or using fans. And you have it at the chip and package level, where people are using techniques to either speed up the chips,
00:11:07 Speaker_00
make more transactions on the chip so that you get more efficiency, more transactions per watt, or in packaging, doing things like fan-out packaging, where you're spreading out the heat as much as possible for any chip or group of chips.
00:11:24 Speaker_00
Also things like HBM memory, which you can stack up right in the same package with the chip and give it more efficiency, and essentially also more transactions per watt. So these are all techniques that are being used today.
00:11:38 Speaker_00
And the beauty of our approach is that it works with all of these.
00:11:42 Speaker_00
You can use diamond cooling by itself, or you can match it with anything else that you're doing and give yourself additional operating margin, give yourself additional performance margin.
00:11:54 Speaker_00
that you can then use to drop your temperature or run your system hotter, giving yourself the opportunity to perform more transactions.
00:12:02 Speaker_00
And that's something that's critically important, because you see, just in the last year, NVIDIA has introduced a couple of new chips. First, with the Samsung HBM, there were heating issues back then. Then with Blackwell,
00:12:18 Speaker_00
again, heating issues came up, and that rollout was delayed because of them. And for the first time, I think if you listen to the last conference call with NVIDIA, heating issues were mentioned.
00:12:34 Speaker_00
This is something that's going to increasingly be on the radar for companies, for investors. I think that Jensen was able to not directly answer the question.
00:12:44 Speaker_00
People are very skillful in these conference calls, but more and more of these questions are going to be asked and people are going to need approaches to address them.
00:12:56 Speaker_00
We have what we think is the most effective approach because it goes right to the heart of the device.
00:13:02 Speaker_02
Yeah, it's interesting because intuitively not being from the data center management space, I get nervous when anything has a mechanical component, right? But I do have friends running large-scale data centers of this type.
00:13:16 Speaker_02
And I think even though there have been announcements of, for example, the liquid-cooled GB200 NVL72 system, and some interest in adopting liquid cooling, fans and liquid cooling come with their own reliability issues, of course, right?
00:13:33 Speaker_02
It's just complex to go implement that and keep that from, I don't know, leaking and breaking and things that movement requires.
00:13:41 Speaker_01
Yeah, Sarah, actually the servers that we ship today, the H200s from NVIDIA, for example, are both liquid cooled and diamond cooled.
00:13:51 Speaker_01
So just to illustrate Ty's point that our diamond technology layers on top of whatever technology that you use, whether it's liquid, fan, or both. But to push on that point further, we believe at Akash Systems that if
00:14:06 Speaker_01
a material science, and more specifically a physics or chemistry, approach to solving the heat problem is not used today, the needs of AI data centers around the world, based on projections today, will crash the grid as we know it.
00:14:25 Speaker_01
And if it doesn't crash the grid, we believe that the cost of electricity will be exorbitant. It's not sustainable on the path that we're taking today. And that's
00:14:35 Speaker_01
part of our inspiration for attacking these problems, starting with physics and chemistry. So take, as an example, your laptop, okay? Your laptop has a whole bunch of chips inside of it. You put it on your lap and you feel the warmth of it, okay?
00:14:49 Speaker_01
If you bring an ice pack and put your laptop on top of an ice pack, nothing will change. It will not speed up your CPU.
00:14:56 Speaker_01
You're not going to change the heat extraction of your CPU, because there's just such a great distance, a thermal barrier, between your lap
00:15:05 Speaker_01
and the GPU or the CPU. Going straight to the heart of the heat, the heat source, the chip material, with physical and chemical solutions, allows you to make a difference. And that's what we're doing. We don't see a lot of approaches like that out there.
00:15:21 Speaker_01
We think that this is going to be really key to curbing the consumption and actually allowing the AI vision, which I think we all hope and would love to see, to come to fruition.
00:15:34 Speaker_02
If I think about the complaints that people have running large data centers as blockers or issues to deal with, it is chip supply, GPU reliability, power supply, heat. How does heat relate to all of these other issues?
00:15:50 Speaker_01
Everything you mentioned is heat. I mean, we see the same thing, by the way, in space. Every problem that one addresses in space, it's almost like a whack-a-mole.
00:15:59 Speaker_01
Unless you go to the source of the problem, which is the heat producer, you're really just playing whack-a-mole. You knock it down here, it'll show up elsewhere. Everything you've just described is a heat problem.
00:16:11 Speaker_01
And by the way, there are billions of these chips. A single server, the ones that we ship today, has eight GPUs in each blade, and then scores more of other chips, equally heat-producing.
00:16:29 Speaker_01
And so we at Akash are actually just scratching the surface of this problem. It's a pervasive problem up and down the supply chain. You just mentioned the way that it shows up in the world, the fact that we're trying to do more with what we have.
00:16:44 Speaker_01
All it's doing is breaking the bank, it's costing a tremendous amount of money, and it's leading to reliability issues. Servers, oftentimes, it's not uncommon for them to
00:16:54 Speaker_01
suddenly have infant mortality, where a server shows up and has problems right away. And when it gets up to that 80, 90 degrees C temperature, it starts to curb back performance. Thermal throttling.
00:17:05 Speaker_01
This is something a lot of your listeners are going to be familiar with. The fact that the operating system has to back off the workloads on the GPU, which means slowing it down so that they can do the inferences that the customers are asking of it.
00:17:20 Speaker_01
So it is a significant problem. And I think that just attacking it at the network level or sort of putting band-aids at the system level will just kick the can or add cost to the overall ecosystem.
00:17:38 Speaker_01
You have to attack it at the material science, at the level of physics and chemistry.
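As a purely illustrative sketch of the thermal throttling Felix describes (the thresholds, clock range, and policy here are invented for the example and are not how any particular GPU driver or any Akash product actually works):

```python
# Illustrative-only throttling policy: as die temperature approaches a limit,
# back off the clock so heat output drops, at the cost of throughput.
# All thresholds and clock values below are invented for this sketch.

THROTTLE_START_C = 80.0   # begin backing off around 80 C (as mentioned above)
CRITICAL_C = 90.0         # hard limit
MAX_CLOCK_MHZ = 1800.0
MIN_CLOCK_MHZ = 900.0

def next_clock(die_temp_c: float) -> float:
    """Pick the clock for the next control interval from the die temperature."""
    if die_temp_c >= CRITICAL_C:
        return MIN_CLOCK_MHZ  # clamp hard at the limit
    if die_temp_c >= THROTTLE_START_C:
        # Scale clocks down linearly between the throttle and critical points.
        headroom = (CRITICAL_C - die_temp_c) / (CRITICAL_C - THROTTLE_START_C)
        return MIN_CLOCK_MHZ + headroom * (MAX_CLOCK_MHZ - MIN_CLOCK_MHZ)
    return MAX_CLOCK_MHZ      # plenty of thermal headroom, run at full speed

if __name__ == "__main__":
    for temp_c in (70.0, 82.0, 88.0, 92.0):
        print(f"{temp_c:5.1f} C -> {next_clock(temp_c):6.0f} MHz")
```

Every degree of headroom a better die-level thermal path buys keeps a loop like this in the full-speed branch longer, which is the throttling relief being described.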
00:17:44 Speaker_00
And Sarah, you used an interesting word with a blocker. And we are getting to the point where people are starting to get stuck with this issue. We mentioned the issues with the device rollouts and the delays that this has caused.
00:18:00 Speaker_00
You're going to see this happen more and more going forward, where the drive to increase performance, you know, whatever, two to three times every two, three, four years,
00:18:15 Speaker_00
we're not going to be able to do that, just because of what it takes to draw power and water to the site and then get the parts to run with this heat buildup. And you are stuck inside the server.
00:18:27 Speaker_00
Anything you want to do inside the server, because right now you've got layers. You've got, you know, your chassis, which is probably aluminum. You've got like a copper heat sink. Then you have some sort of epoxy bonding material.
00:18:41 Speaker_00
Then you've got your chip material, which is silicon. Then you've got some sort of solder, gold tin. Then you've got FR4 or some other polymer board. Then you've got more gold bumps. Then you've got another board.
00:18:58 Speaker_00
So you've got this sandwich, and I'm just listing off some of the layers. And every time you have an interface between those layers, that's a thermal barrier. OK, so what are you going to do about that? Because you have to attack that.
00:19:12 Speaker_00
And right now, that's not really being attacked, and this is going to have to be attacked. It's not going to block the creation of data centers, but you look at the multiples in the market. You look at earnings.
00:19:28 Speaker_00
This is something that probably the people at AMD and NVIDIA are thinking of. The results are growing now, right? But look what happened at Intel, okay?
00:19:39 Speaker_00
You miss two, three quarters and you go from being at the top of the mountain to, you know, listening to those bells tolling.
00:19:48 Speaker_00
And I think this is something that's gonna be very much top of mind for any executive at any of these companies.
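To put Ty's "sandwich" into numbers, here is a minimal sketch of how layer and interface thermal resistances add in series and set the die temperature rise; the layer list loosely follows his description, but every resistance value and the power figure are illustrative placeholders, not measured data.

```python
# Series thermal-resistance model of the chip-to-ambient "sandwich":
# total R is the sum of each layer and interface, and the die-to-ambient
# rise is delta_T = P * R_total. All values are illustrative placeholders.

LAYERS_K_PER_W = [
    ("silicon die",              0.005),
    ("die attach / solder",      0.010),  # interfaces tend to dominate
    ("package substrate",        0.008),
    ("thermal interface epoxy",  0.015),
    ("copper heat sink",         0.012),
    ("heat sink to air (fans)",  0.020),
]

POWER_W = 700.0  # order of a modern AI accelerator's power draw (assumed)

r_total_k_per_w = sum(r for _, r in LAYERS_K_PER_W)
delta_t_c = POWER_W * r_total_k_per_w

print(f"Total stack resistance: {r_total_k_per_w:.3f} K/W")
print(f"Die runs ~{delta_t_c:.0f} C above ambient at {POWER_W:.0f} W")

# Cutting any one interface resistance (for example, putting a very
# high-conductivity material right at the die) lowers delta_T roughly
# in proportion to that term's share of the total.
```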
00:19:55 Speaker_02
Yes, I understand. You got to cool the sandwich down from the inside as the sandwich gets bigger and hotter.
00:20:00 Speaker_02
I think one of the things that really resonates with me, at least hearing from friends in the industry, is the thermal throttling that you described, the fact that you see and have to manage erratic behavior when these GPUs are at higher temperatures.
00:20:16 Speaker_02
The vast majority of people working in machine learning right now are in a very abstract software field, right?
00:20:24 Speaker_02
And so the idea that you have these challenging, non-deterministic behaviors based on how the materials themselves are interacting, and that you have to account for that and bear the real burden of that, is just a new domain to think about.
00:20:39 Speaker_02
Maybe just because it is your area of expertise, how do you fit into the sort of partner ecosystem of the NVIDIAs of the world, the Supermicros of the world, other SIs, etc.?
00:20:49 Speaker_01
So just to be clear, we are buying chips, we're not making GPUs; we're taking the hottest chips in the world and cooling them down so that we can open the envelope of performance for the system architect.
00:21:05 Speaker_01
And so we fit in, we're coming into the world as a server maker that is opening performance envelopes for folks in inference work, folks training models, data center operators, cloud service providers. That's our entry into the world.
00:21:22 Speaker_01
It makes sense that we would go to the most challenged parts of the market, the folks that are struggling most at that performance edge.
00:21:31 Speaker_01
So I'm going to go out on a limb here and say that we think that with our diamond technology, that we will be able to hyper-accelerate Moore's Law so that in two years, we will be achieving what previously folks had to wait six, seven years in the past to get to in terms of performance.
00:21:52 Speaker_01
Because remember, Moore's Law is about squeezing transistors closer and closer and closer together. You can only go so close before you have thermal crosstalk between those devices.
00:22:04 Speaker_01
Right now, the limits we see in AI, the pace at which we can do inference work, is limited by that thermal crosstalk. If we remove that thermal crosstalk and we're able to allow greater densities, then all of a sudden we can create a feature-length film
00:22:21 Speaker_01
in seconds rather than the timescales of, you know, if you're doing it offline, you know, months, years, if you're doing it right now with the thermal limitations that there are in AI, probably days.
00:22:35 Speaker_01
But, you know, I think we'd like to see seconds in production time to do a full 90 minute feature length film. And that'll happen because of the unblocking of the thermal limitations inside the GPU.
00:22:50 Speaker_02
I'm going to ask a silly question, but you've mentioned it several times. You're growing diamonds.
00:22:56 Speaker_02
How does that process work for the form factor that you want, assuming the vast majority of our audience has only ever heard of the concept of growing diamonds in the realm of jewelry?
00:23:10 Speaker_00
It's really no different than growing other semiconductor materials. If you're growing silicon or silicon carbide or gallium arsenide or indium phosphide, any of these electronic substrates, you start with a seed crystal.
00:23:29 Speaker_00
and then you typically use some sort of process, chemical vapor deposition, to grow perfect single-crystal material out from that seed crystal. And that's the same way you grow diamond. Diamond is just carbon, right?
00:23:46 Speaker_00
So you take a seed crystal of perfect carbon, of diamond, and you use a plasma to grow the diamond in a reactor.
00:23:58 Speaker_00
It takes very high temperatures, very high pressures to do this, but it's essentially a similar process to growing silicon or silicon carbide wafers.
00:24:10 Speaker_01
And Sarah, our specialty, our secret sauce, lies not only in how we grow the diamond, but also in how we intimately couple that diamond with the semiconductor using physics and chemistry. Okay. So it's not a trivial process. It does take some work.
00:24:28 Speaker_01
You were asking about why it takes so long. It does take time to do material science. But that intimate coupling of the atoms of diamond with the atoms of a semiconductor is what we understand.
00:24:43 Speaker_01
And that's what we bring to bear in both space and in AI. And that's what makes this not so easy. But we're very excited about it. We're deploying it in the servers that we ship.
00:24:59 Speaker_01
and there's very strong market pull that we see right now, today, given what's going on in the world.
00:25:07 Speaker_02
You guys announced an exciting customer just this past week, NextGen Data Center and Cloud Technologies. Can you talk about why they're a good early customer? And they're also like a sovereign cloud player. So I want to talk about that as well.
00:25:23 Speaker_01
Sure. So NextGen is the largest sovereign cloud service provider in India. They handle the country's data in a very careful way, making sure that it stays within the sovereign borders of the country.
00:25:40 Speaker_01
We see that requirement coming from countries all over the world. Nobody wants their people's data leaving the boundaries of their country. We see that, by the way, in space.
00:25:51 Speaker_01
When data is coming down from a satellite or being pulled away, that satellite has an obligation to keep the data within the boundaries of that country. So that's an opportunity for us.
00:26:06 Speaker_01
It means that we're going to be able to address this issue with every country individually. So that's number one. Number two, they are the leader in that region, in India.
00:26:18 Speaker_01
And so we thought that that would be a very good test case to show the world what is possible. The fact that we, as a small growing company, can scale to the kinds of volumes that they need. And we can scale rapidly.
00:26:31 Speaker_01
We've got to ship all of this stuff within the next quarter. And a lot of small companies trip at that. NextGen selected us believing that we have that ability to scale. Thirdly is the fundamentals of the technology.
00:26:46 Speaker_01
I think they saw very quickly, and they're led by some very innovative leaders, that this is a problem that will stay with us for a very long time.
00:26:56 Speaker_01
Unless we get to the very heart of it, the material science nature of a solution, you're just going to be tiptoeing around the big elephant in the room.
00:27:07 Speaker_01
And so we were very excited when they saw that opportunity, the fact that, OK, this is a company that's coming at this problem from the material science level. We jumped at it when they saw that we could scale. We were very excited about that.
00:27:21 Speaker_01
We jumped at that. And then, you know, I think NextGen is positioning themselves and using our technology to not only scale within India, but potentially scaling around the world. Okay.
00:27:36 Speaker_01
Again, respecting the sovereignty of country data within that country. So we think that this is a very, very nice match. They opened with this size order. We're excited about the things that are even coming down the pike with them in just 2025.
00:27:53 Speaker_01
This is just the beginning.
00:27:54 Speaker_00
And these problems are also faced by US companies as well. You know, India is not the only country that's dealing with this. And we're talking to them as well. This is definitely a very important topic that you brought up, Sarah.
00:28:10 Speaker_02
Yeah, what do you think is the importance of American manufacturing of AI chips and data center capacity?
00:28:17 Speaker_02
This is especially relevant given you're one of the only small companies that is a CHIPS Act recipient and, you know, Gelsinger just stepped down. What is your current view of American capacity and your outlook for it?
00:28:29 Speaker_00
My view is that the U.S. is not doing enough and needs to do a lot more. This is a technology that the U.S. needs to be the leader in, and it needs to lead all the way up and down the value chain.
00:28:44 Speaker_00
We can't just rely on what NVIDIA or AMD has done to date. We have to continue to invest in not only the larger companies, but also the smaller companies like us, like others.
00:28:57 Speaker_00
who are working on some of the very critical problems because as you know, AI is not only a critical technology for business, it's also a critical technology for national security. And these are things that the U.S.
00:29:13 Speaker_00
cannot rely on other countries to develop for it. And so we have to really drive technology development. We have to drive manufacturing, all the way up and down the supply chain, and put a lot of investment into this technology.
00:29:31 Speaker_00
It's going to be very important for the future of this country. And we're there to support that. And that's what we're focused on.
00:29:39 Speaker_01
Let me add to that by saying that our receiving CHIPS Act funding is a testament to our support of USA, USA, USA. We're all about doing the things that Ty mentioned, strengthening our supply chain. That's one of the key tenets of the CHIPS Act.
00:30:00 Speaker_01
Supporting national security, we supply to defense. It's public that we work with Raytheon, an iconic American defense company. This technology allows Raytheon and U.S.
00:30:15 Speaker_01
Defense to maintain military supremacy around the world in a way that has never been done in the history of mankind. This technology secures our commercial supply chain in an area where I think we had started to slip.
00:30:34 Speaker_01
COVID laid that bare when we saw that, oh wow, we're depending on others to backfill key chips that we used to be able to make ourselves. Now we can make them at home right here in California and in Texas.
00:30:46 Speaker_01
We're going to be creating jobs in both California and Texas. We have support from a broad spectrum of investors, brilliant investors, Vinod Khosla, Peter Thiel among them. So I think that
00:31:03 Speaker_01
this CHIPS Act is something that's going to enable us to fulfill all of the tenets and mandates of the CHIPS Act, but also do things that everyone in the country can be very proud of.
00:31:17 Speaker_02
You said you worked with Raytheon, you'd worked on space applications before. I think it may not be intuitive to every listener, like why, you know, satellites and radio communications are so important from a national security perspective. But I think
00:31:33 Speaker_02
Increasingly, you're going to see conflict and warfare defined by your understanding of the RF spectrum, be it space or other systems. And I think the ability to support that is critically important.
00:31:46 Speaker_02
It's totally separate from any of the AI system work that you're doing.
00:31:50 Speaker_00
100%. When radar was developed during World War II, that was a huge game changer. Without radar, it would have been very difficult for Britain to win the Battle of Britain, because that early warning system was critical for them.
00:32:08 Speaker_00
Just on a personal note, my father-in-law was a radar operator in World War II, and he was one of the first people to get exposed to this technology.
00:32:17 Speaker_00
And he said that they would chase German submarines off Florida, and the submarines would go beneath the surface, and the Germans wouldn't know how they kept finding them.
00:32:29 Speaker_00
So it just goes to show that when you introduce these new technologies, they have outsized impact on the world. And yeah, it's true with RF, and it's also true with AI.
00:32:42 Speaker_02
Maybe because you think broadly about this problem as a participant in the AI supply chain: the US is not doing enough, needs to do more, there's increasing risk. What do you think the other critical problems are that are even feasible to take on?
00:33:00 Speaker_02
Is it credible that the US is gonna have fabs and lithography machines and these other core components in a near term time window?
00:33:10 Speaker_00
Yeah, the U.S. will do it. The only question is whether the U.S. will do it because it has to do it or because it wants to do it. The U.S. can accomplish anything. We have the people, we've got tons of natural resources, we've got the capability.
00:33:27 Speaker_00
And the only question is, and part of it is that corporate culture is driven by earnings, right? And if we only focus on earnings, then if it's cheaper to make something in Asia, make it in Asia, okay?
00:33:42 Speaker_00
But there's a national security component to that where maybe it's not best for the country if you make it in Asia. Maybe you need to make it here.
00:33:51 Speaker_00
So we need to find a way to bridge that gap so that everybody's not just chasing that last penny of earnings and sending manufacturing of these critical technologies overseas, but doing it here instead. This is a sector where we should start it
00:34:08 Speaker_00
correctly and start doing all these things here, all the way up and down the supply chain, everything from the chips and the server, the software, to the frames, the housings, the racks, all the big dumb metal pieces, and then the centers themselves.
00:34:29 Speaker_00
We can do it all here in the United States, and we should be doing that, and focusing on that, and focusing federal programs and funding to make sure all of that happens here.
00:34:43 Speaker_01
I do think that AI will play a role in manufacturing and sort of 10x-ing or even 100x-ing manufacturing so that we can actually outperform humans in other countries. I think that's the way it's gonna look, okay?
00:35:01 Speaker_01
So we will not have to reduce labor costs in order to perform and compete with China. I think that that will come through extraordinary feats in AI-powered manufacturing.
00:35:17 Speaker_01
At universities, when I was in grad school, everyone had to get training in a machine shop. Cornell, I remember, had about four or five machine shops, and in freshman year engineering, you had to get training in how to use these machines, with the operators.
00:35:35 Speaker_01
Today, almost every professor has a 3D printer. So that just adds jet fuel to the capacity of everyone on campus to manufacture whatever they want, whenever they want, and however they want, at a very low cost.
00:35:53 Speaker_01
I think you're going to see the same thing with the use of AI in manufacturing, where we will be able to make per capita
00:36:02 Speaker_01
a thousand times, a hundred thousand times more components, more equipment, more parts, more chips compared to anyone else in the world. And that's what AI makes possible.
00:36:14 Speaker_02
No, I was just going to say, I believe that is possible too. And it's a much more inspiring vision for the future than one where we completely cede the supply chain and are strategically, you know, at the mercy of others. Right. Yeah.
00:36:29 Speaker_02
You both have very esteemed backgrounds in material science.
00:36:35 Speaker_02
One of the most interesting things, coming from the software and computer science world, is the applicability of transformers, diffusion models, and just the effectiveness of deep learning overall
00:36:49 Speaker_02
as it scales. It's very interesting in that it applies to so many different domains, and there's increasing excitement about its applicability to materials science. Do you guys think about that yourselves?
00:37:01 Speaker_00
100 percent, yeah. And it's very applicable to research, and to helping research go a lot faster. Because if you think about how AI can help you, how AI models can help you do work: personally, I've been using AI as an assistant.
00:37:18 Speaker_00
It greatly increases your pace of research because a lot of trying to solve a problem in technology, especially core technologies like material science, is first figuring out what's been done to date.
00:37:30 Speaker_00
because there are a lot of smart people in the world, and there always have been, and the key to your solution might be something that somebody did back in 1963 but that never went anywhere, because they didn't have the ability to do as many iterations as you can do now. So
00:37:47 Speaker_00
your ability to go back, find that information, and then apply it to the problem that you can solve now, this is one of the things that I'm really excited about applying these models to, because I think it can really help drive innovation.
00:38:02 Speaker_00
And we're not all gonna be robots and slaves to AI. AI is going to help us innovate even faster.
00:38:10 Speaker_02
Do you feel optimistic about these ideas around using AI for inverse design or better exploring chemical space or accelerating DFT simulations or more fundamentally in the process of discovery?
00:38:25 Speaker_00
Yeah, so chemical space, density functional theory, any of these topics that you're talking about, they're all just problems to be solved. So no matter what topic or what approach, anything that can be solved through asking questions and iterating,
00:38:53 Speaker_00
it can be done. So any of these approaches can apply to material science, to the medical field, to software, to coding. And we've already seen it at lower levels, but really,
00:39:13 Speaker_00
we are limited by our ability to frame the questions. That's really it. We're limited by our ability to frame the questions, and only by our own imagination, really.
00:39:26 Speaker_00
So I'm very excited to really get into this and to figure out how we can use it. Because there are problems that we want to solve right now that
00:39:36 Speaker_00
we don't know how to approach solving, because we know it's going to take so many iterations, and there's so much information we need to find to take a good run at our hypothesis, that it's difficult to get started.
00:39:49 Speaker_00
But if you've got somebody working for you and working with you who can do these iterations, a billion iterations in a second, I mean, it's really exciting to think about.
00:40:03 Speaker_01
The North Coast, our investor, and I would have these conversations about how, as entrepreneurs, we can sometimes be limited in how we imagine, because you're constantly guided by the boundaries, the constraints, of the world as it is today.
00:40:18 Speaker_01
Okay. And what AI does is open up those boundaries so that we can begin to imagine the things that you're talking about. Inverse design, right?
00:40:29 Speaker_01
What if we could run processors a billion times faster than today, because the thermal envelope is no longer there? Could we accelerate the calculations, the modeling that would have taken 100 years, but now it takes a second?
00:40:52 Speaker_01
What does that even mean? What is not possible? I think the greatest challenge is our own imagination. I think Ty hit the nail on the head.
00:41:02 Speaker_01
I think that the difficulty is trying to ask the right questions because now we almost have infinite processing capacity. And that's the difficulty, is how do we get out of the way so that compute can try and solve these problems?
00:41:20 Speaker_01
I think that in the biotech arena, getting drugs that are dialed in to every form of cancer is now well within reach. I think that being able to have battery capacity that is so optimized because we don't have the thermal constraints that we do today,
00:41:39 Speaker_01
that can take us from SF to New York with one charge. I think that that is well within reach. I think that really the sky's the limit and I'm excited about that future.
00:41:53 Speaker_02
On that note, Felix, Ty, it's been a wonderful conversation. Thanks so much and congrats on the progress.
00:41:58 Speaker_01
Thank you, Sarah. Thank you. Thank you, Sarah.
00:42:01 Speaker_02
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
00:42:13 Speaker_02
And sign up for emails or find transcripts for every episode at no-priors.com.