Scott Wu - Building Cognition - [Invest Like the Best, EP.401] AI transcript and summary - episode of podcast Invest Like the Best with Patrick O'Shaughnessy
Episode: Scott Wu - Building Cognition - [Invest Like the Best, EP.401]
Author: Colossus | Investing & Business Podcasts
Duration: 01:13:04
Episode Shownotes
My guest today is Scott Wu. Scott is the co-founder and CEO of Cognition, which is an applied AI lab that has created the first AI software engineer, which they call Devin. In just a year since founding Cognition, Devin functions at the level of a junior software engineer, capable
of handling complete engineering workflows from bug fixing to submitting pull requests. He is a former competitive programming champion and describes the field as simply “the art of telling the computer what you want it to do.” Scott predicts AI will surpass the world's best competitive programmer within 1-2 years and sees this technology not as replacing programmers, but as democratizing software creation. We discuss the bottleneck in software development, the future of AI in various industries, and the challenges of leveling up Devin. Towards the end, you’ll also hear Scott do an insane card trick on me. You can find the video on our X and YouTube to grasp the madness fully. Please enjoy my conversation with Scott Wu. Watch Scott play the card game. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Imagine completing your research five to ten times faster with search that delivers the most relevant results, helping you make high-conviction decisions with confidence. AlphaSense provides access to over 300 million premium documents, including company filings, earnings reports, press releases, and more from public and private companies. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. — This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. 
I think this platform will become the standard for investment managers, and if you run an investing firm, I highly recommend you find time to speak with them. Head to ridgelineapps.com to learn more about the platform. ----- Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Follow us on Twitter: @patrick_oshag | @JoinColossus Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com
). Show Notes: (00:00:00) Welcome to Invest Like the Best (00:07:00) Discussion of Devin's current capabilities as a junior engineer (00:09:00) Early use cases and customer adoption (00:11:00) Comparison between IDE assistants and Devin's autonomous approach (00:14:00) History of computer programming and its evolution (00:17:00) Scott's background in competitive programming (00:20:00) Future predictions for AI in software engineering (00:26:00) Explanation of AI agents and their significance (00:29:00) Technical details of how Devin is built (00:40:00) Impact on software engineering jobs and industry (00:47:00) Discussion of business model and pricing (00:52:00) Thoughts on AGI and its practical implications (00:59:00) Commentary on the competitive landscape (01:24:00) Future of AI adoption across industries (01:25:00) Card trick demonstration (01:29:00) The Kindest Thing Anyone Has Done For Scott
Full Transcript
00:00:00 Speaker_02
As an investor, I'm always on the lookout for tools that can truly transform the way that we work as a business.
00:00:06 Speaker_02
AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Since I started using it, it's been a game changer for my market research.
00:00:17 Speaker_02
I now rely on AlphaSense daily to uncover insights and make smarter decisions.
00:00:21 Speaker_02
With the recent acquisition of Tegus, AlphaSense continues to be a best-in-class research platform delivering even more powerful tools to help users make informed decisions faster. What truly sets AlphaSense apart is its cutting-edge AI.
00:00:34 Speaker_02
Imagine completing your research 5 to 10 times faster with search that delivers the most relevant results, helping you make high-conviction decisions with confidence.
00:00:42 Speaker_02
AlphaSense provides access to over 300 million premium documents, including company filings, earnings reports, press releases, and more from public and private companies.
00:00:52 Speaker_02
You can even upload and manage your own proprietary documents for seamless integration.
00:00:56 Speaker_02
With over 10,000 premium content sources and top broker research from firms like Goldman Sachs and Morgan Stanley, AlphaSense gives you the tools to make high-conviction decisions with confidence.
00:01:06 Speaker_02
Here's the best part: Invest Like the Best listeners can get a free trial. Just head to alpha-sense.com/invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. Trust me.
00:01:19 Speaker_02
Once you try it, you'll see why it is an essential tool for market research. Every investment professional knows this challenge. You love the core work of investing, but operational complexities eat up valuable time and energy.
00:01:31 Speaker_02
That's where Ridgeline comes in, an all-in-one operating system designed specifically for investment managers.
00:01:37 Speaker_02
Ridgeline has created a comprehensive cloud platform that handles everything in real time, from trading and portfolio management to compliance and client reporting.
00:01:46 Speaker_02
Gone are the days of juggling multiple legacy systems and spending endless quarter ends compiling reports. It's worth reaching out to Ridgeline to see what the experience can be like with a single platform.
00:01:55 Speaker_02
Visit ridgelineapps.com to schedule a demo, and you'll hear directly from someone who's made the switch.
00:02:01 Speaker_02
You'll hear a short clip from my conversation with Katie Ellenberg, who heads investment operations and portfolio administration at Geneva Capital Management.
00:02:08 Speaker_02
Her team implemented Ridgeline in just six months, and after this episode, she'll share her full experience and the key benefits they've seen.
00:02:16 Speaker_01
We were with our previous provider for over 30 years. We had the entire suite of products from the portfolio accounting to trade order management, reporting, the reconciliation features.
00:02:28 Speaker_01
I didn't think that we would ever be able to switch to anything else. Andy, our head trader, suggested that I meet with Ridgeline. And they started off right away, not by introducing their company, but by telling me who they were hiring.
00:02:41 Speaker_01
And that caught my attention. They were pretty much putting in place a dream team of technical experts. They started talking about this single source of data. And I was like, what in the world?
00:02:51 Speaker_01
I couldn't even conceptualize that because I'm so used to all of these different systems and these different modules that sit on top of each other. And so I wanted to hear more about that.
00:03:01 Speaker_01
When I was looking at other companies, they could only solve for part of what we had and part of what we needed. Ridgeline is the entire package and they're experts. We're no longer just a number. When we call service, they know who we are.
00:03:17 Speaker_01
They completely have our backs. I knew that they were not going to let us fail in this transition.
00:03:27 Speaker_02
Hello and welcome everyone. I'm Patrick O'Shaughnessy, and this is Invest Like The Best. This show is an open-ended exploration of markets, ideas, stories, and strategies that will help you better invest both your time and your money.
00:03:39 Speaker_02
Invest Like The Best is part of the Colossus family of podcasts, and you can access all our podcasts, including edited transcripts, show notes, and other resources to keep learning at joincolossus.com.
00:03:52 Speaker_00
Patrick O'Shaughnessy is the CEO of PositiveSum. All opinions expressed by Patrick and podcast guests are solely their own opinions and do not reflect the opinion of PositiveSum.
00:04:03 Speaker_00
This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of PositiveSum may maintain positions in the securities discussed in this podcast. To learn more, visit psum.vc.
00:04:21 Speaker_02
My guest today is Scott Wu. Scott is the co-founder and CEO of Cognition, which is an applied AI lab that has created the first AI software engineer, which they call Devin.
00:04:30 Speaker_02
In just a year since founding Cognition, Devin functions at the level of a junior software engineer, capable of handling complete engineering workflows from bug fixing to submitting pull requests.
00:04:40 Speaker_02
Scott is a former competitive programming champion, and he describes the engineering field as simply the art of telling a computer what you want it to do.
00:04:48 Speaker_02
Scott predicts AI will surpass the world's best competitive programmer within one to two years, and sees this technology not as replacing programmers, but as democratizing software creation.
00:04:58 Speaker_02
We discussed the bottleneck in software development, the future of AI in various industries, and the challenges of leveling up Devin. The implication of Scott's work, and Devin itself, is enormous. Please enjoy my great conversation with Scott.
00:05:13 Speaker_02
Scott, since not everyone listening will know Cognition intimately, can you just start by describing what the business is and does? It's quite young.
00:05:22 Speaker_02
It's made a lot of progress very, very quickly, which is going to be a topic of our conversation in general. But maybe just start by describing for those listening what Cognition does.
00:05:31 Speaker_03
Yeah, absolutely. And thank you so much for having me. We got started actually just around one year ago. And we are focused on building AI to automate software engineering. The product we're building is called Devin.
00:05:42 Speaker_03
And Devin is the first fully autonomous software engineer. And what that means is that Devin is basically able to do all of the parts of the software flow. And so you work with Devin the same way that you would work with a coworker.
00:05:55 Speaker_03
And Devin is in your Slack, Devin is in your GitHub, uses all of your tools and works in your local developer environment.
00:06:01 Speaker_03
And so you can just tag Devin, you can say, hey, Devin, I've got this bug that's going on in the front end, can you figure out what's going on?
00:06:07 Speaker_03
Devin will do not just the parts of writing the code, but actually all of the parts around it as well, which includes getting the local environment running, reproducing the bug itself, taking a look at the logs, figuring out what's going on, testing the bug, adding in some unit tests, and then making a pull request for you as the engineer to go review.
00:06:24 Speaker_02
What's the right way to think about, if I think about Devin as an actual engineer, its level of sophistication.
00:06:30 Speaker_02
Maybe you could put it in like salary terms or something, or years of experience terms, or some other thing, like help manage my expectations for how much I could have this thing do without any human involved.
00:06:44 Speaker_03
It's very much a junior engineer today. That's how we think of it. That's how we describe it. I would say right around when we got started, high school CS student or something like that is probably fair. Maybe six months ago, an intern.
00:06:56 Speaker_03
And I think today it's very much an entry-level junior engineer. There's going to be a lot of tasks.
00:07:02 Speaker_03
I mean, for example, if you wanted to say like, all right, I need you to rearchitect my entire code base, rewrite the whole thing, make it all 10x more efficient, and then I'll check in in like a week. That's probably not the right way to use Devin.
00:07:12 Speaker_03
Whereas actually a lot of it, it really does function like turning every engineer into an engineering manager, where you get to work with your own team of Devins and hand off tasks.
00:07:21 Speaker_03
And so a lot of these little things, which you would want to delegate and spread off, you would say, oh, hey Devin, take a look at this, take a first pass.
00:07:28 Speaker_03
Maybe you have to fix a few things or maybe Devin asks a few questions that you have to answer or things like that.
00:07:33 Speaker_03
But the whole idea is that you're working asynchronously with your team of Devins while you're doing your own stuff or doing things separately.
00:07:40 Speaker_02
Should I think of Devin as a singular or a plural?
00:07:44 Speaker_03
The way that it's all set up is we bill by usage. So we have a plan. The plan is 500 bucks a month for engineering teams by default. And then obviously as you use more, we bill more.
00:07:53 Speaker_03
And the whole idea with it obviously is that you can have your own army of Devins. And each Devin is its own thing. And let's say you have five different bugs or issues or something that you're trying to work on today.
00:08:03 Speaker_03
A lot of our users do this where you just spin up five different Devins. And each one is tackling a different one. And obviously they ask you as they have questions or put up code for you to review or things like that.
00:08:13 Speaker_03
But now you're actually jumping between your different Devins and tackling these tasks in parallel.
00:08:18 Speaker_02
It seems like GPTs are GPTs, general purpose technologies. And one of the cool things about general purpose technologies is it's very hard to predict how the world will use them to build other things.
00:08:30 Speaker_02
Now that you've had Devin live for a while in private and now generally available, what has surprised you most about the ways in which people use it to do things?
00:08:39 Speaker_03
A lot of the emergent use cases are things that we wouldn't have predicted ourselves.
00:08:44 Speaker_03
One of the first things that companies that we work with have been using Devin for actually is a lot of these migrations, replatforms, modernizations, and version upgrades, things like that.
00:08:54 Speaker_03
For example, we work with banks, we work with healthcare companies, and many of these folks have code that is literally decades old.
00:09:02 Speaker_03
And they would love to be on the latest stack, on the latest technologies, make things faster, make things more efficient and make things more secure. It would make things easier for their developers to work on as well.
00:09:12 Speaker_03
But it's just such a pain, as you can imagine, if your code base is 30,000 files to actually go through and do this kind of modernization effort. And so it's the kind of thing that's very tedious.
00:09:22 Speaker_03
And it requires a little bit of active thought and effort to get into, but obviously there's just so much volume of it. And that was one of the first things where we really, really saw folks using Devin.
00:09:30 Speaker_03
A lot of our customers did their own internal studies of the usage and they were finding that every hour of an engineer's time using Devin was equivalent to eight to 12 hours without Devin.
00:09:41 Speaker_03
So what that looks like is you're just asking questions and reviewing the code rather than having to write the code yourself.
00:09:46 Speaker_03
And so some of these usage patterns where a lot of the kind of more routine tasks that needed to get done, Devin was just really able to do in one fell swoop.
00:09:54 Speaker_02
Can you explain the key differences between those innovating in the IDE, like Cursor or something that just makes it much easier for a programmer to program, versus something like Devin, where it feels more holistic, like I'm handing off a whole thing and coming back and getting a complete piece of work?
00:10:12 Speaker_03
We've been doing this, call it this agentic approach, like full automation, ever since we got started. And the way that I describe it, I think the biggest step function change is about asynchronous versus synchronous.
00:10:24 Speaker_03
So a lot of these code assistant tools, to call them that, which you mentioned a lot of great examples, really great tools, honestly.
00:10:30 Speaker_03
What happens obviously is you use language models to do an autocomplete or you have simple things where it writes certain files or functions for you or things like that.
00:10:38 Speaker_03
And if it writes the right things, you hit tab and confirm the suggestion, which saves you that time of typing.
00:10:46 Speaker_03
I think in general, what folks have seen with those kinds of technologies is they see something like a 10 to 20% improvement across the board. So
00:10:54 Speaker_03
all of the different software work that they're doing, it makes them a 10 to 20% faster engineer, basically. By the way, engineering is huge. That's not a small number at all.
00:11:03 Speaker_03
But with Devin, what we kind of see is there is a certain level of task, obviously, that Devin is capable of taking on. You're not going to ask it to do the entire re-architecting or something like that.
00:11:13 Speaker_03
But for the tasks that Devin can do, it's more like a 10x rather than a 10% because it really is you just handing off the task to Devin and you go ahead and do your own stuff. You work on other things. Maybe you're running other Devins as well.
00:11:25 Speaker_03
And in the meantime, Devin is really just a co-worker that you've given the task and works on that and gives you a full review and a full piece of code. There's a lot of meaningful differences in the product experience that all stem from that.
00:11:37 Speaker_03
So Devin has to have the ability to test its own code, to run things live, to go and look up documentation, to be able to pull up websites and stuff by itself.
00:11:46 Speaker_03
You have to be able to talk to it in your GitHub or your Slack or your Linear or whatever technologies you're using as a team to communicate with each other. It's quite a different product experience.
00:11:54 Speaker_03
And honestly, we've seen from folks that it's complementary. We use the assistant suite of tools as well. And then we also use Devin. A lot of folks use both.
00:12:02 Speaker_02
Because of where you sit and your own personal experience, I'd love to use the opportunity of chatting with you today to really talk about the past, present and future of software engineering. I would love to start with the past.
00:12:13 Speaker_02
Maybe, you know, in my mind, it's this growing abstraction layer where it gets simpler and simpler and you have to know less and less things down close to the bare metal itself of semiconductors to do what you want to do in a computer system.
00:12:25 Speaker_02
Maybe you can describe your perspective on the history so far to this point of computer programming and like the major milestones that we've reached before we talk about today and then the future.
00:12:36 Speaker_03
Well, I love the way that you actually just described it, which is: I think software engineering, or programming, or whatever we want to call this, has this whole time always been basically just the art of telling the computer what you want it to do.
00:12:46 Speaker_03
That has been the goal. It is really that simple. And it's kind of funny, obviously, because the form factor of that has changed so much over time. And obviously, like a long, long time ago, it was all about punch cards.
00:12:55 Speaker_03
But even before that it was about these vacuum tubes or things like that. That was programming. And then there was assembly, and assembly, that was programming. There was BASIC and there was C and these other languages and frameworks and technologies.
00:13:06 Speaker_03
We had cloud, we had mobile. A lot of big broad shifts in the technology. I think the undercurrent that's going through this all is over the last 50 to 100 years, we have been able to automate so much of the processes that we need in our economy.
00:13:24 Speaker_03
And automating that is essentially, it boils down to being able to explain to the computer exactly what you want the computer to do for you.
00:13:31 Speaker_03
Really, really fascinating to study a lot of the history of it, because I think often it actually was the case that many of these technologies, at the time they came out, they were very interesting technology.
00:13:40 Speaker_03
And then there's a question of how do you make it useful? What is the use case? Sometimes it was kind of surprising.
00:13:45 Speaker_03
I mean, probably the most famous example that folks know about is the ENIAC back in the 40s, which was the first real automated computer.
00:13:52 Speaker_03
And the initial use case, obviously, that really got it there was missile trajectories, ballistics, and all the physics that you want to calculate with that.
00:14:00 Speaker_03
If you want to be very, very accurate, doing 10 digits of precision or something, you don't want to be doing that by hand. With punch cards, for example, the first real big thing that really caused that to be valuable was the U.S. Census.
00:14:10 Speaker_03
And that was the thing that needed to be automated. The first Macintosh, for example, a lot of the initial use case was for graphic designers.
00:14:18 Speaker_03
And then number two, the spreadsheet was actually the killer app that made it useful. With each of these, there was this technology that came about and then it took some time to really make it useful.
00:14:26 Speaker_03
We're seeing a very similar paradigm here where language models came out and obviously they took the world by storm. ChatGPT in November of 2022, almost exactly two years ago, it was the moment when everyone realized what was possible.
00:14:38 Speaker_03
And now we're seeing what are all the natural applications of that. And I think one of those obviously is software itself. Accelerating software itself is just this recursive thing where you get such massive continued gains.
00:14:49 Speaker_02
You, I don't know what age you were, I think you were 17 or something, won the competitive programming challenge. So literally you were the best programmer alive, or at least in some major way.
00:15:00 Speaker_02
At that time, and maybe today, what distinguishes, you hear about this idea of the 10x or 100x engineer, what are the qualities of those people that make them better than others? In those competitions, what is the competitive frontier?
00:15:14 Speaker_02
What are you trying to best your opponents at in that field?
00:15:19 Speaker_03
Well, thank you so much for the kind words. There's these competitions, which are kind of more like algorithmic problems. There's software.
00:15:24 Speaker_03
But in all of these things, I honestly think the soul of programming the whole way through, even as we're saying it's changed in form factor so much, I think the soul of it is really just being able to understand, deeply understand these abstractions and just work at this abstract level.
00:15:38 Speaker_03
I think that's true in software today where the soul of software engineering is the part of taking a customer's problem or your own product's problem or something like that and really just figuring out, okay, here is exactly the solution that I need to build.
00:15:51 Speaker_03
Here's how I'm going to represent it and here's how I'm going to architect it. Here's all the cases and the details that I need to be thinking about. Here's the flows maybe that I'm going to be setting up in my website.
00:15:59 Speaker_03
Here's the database schema that's going to make this all work and make this efficient or something like that. And the thing that's kind of funny about it, obviously, is that is the part that every engineer loves about software.
00:16:09 Speaker_03
And it is also, I think, as you're saying, it's the thing that really distinguishes somebody from being a 1x versus a 10x versus a 100x. At the same time, it's probably only about 10% of what the average engineer does.
00:16:21 Speaker_03
Because the other 90% of their time obviously is debugging, implementing, setting up testing, dealing with their DevOps, figuring out what went down with their database, whether it just crashed, or all of these kind of various things, which look a lot more like implementation than this creativity.
00:16:36 Speaker_03
But I would say even despite that, the creativity and the problem solving is actually really what defines programming.
00:16:42 Speaker_02
in those competitions, how did you get better? Maybe just walk me through the competitions themselves, how they were structured, what you did to try to win, how you won. I'm really interested in that chapter of your life.
00:16:53 Speaker_03
The competition was called the IOI, the International Olympiad in Informatics. It was very much the Olympics of programming. So every country sends their top contestants. There's gold medals, silver medals, bronze medals.
00:17:04 Speaker_03
You qualify at your local level, and then you make it to the team selection for your country and qualify there. And yeah, I mean, that was my entire life growing up, actually. I'm from Baton Rouge, Louisiana.
00:17:14 Speaker_03
There are not that many people who are into competitive programming in Baton Rouge, Louisiana, but I just really, really loved it. My older brother actually did it first, and I got super into it through him.
00:17:24 Speaker_03
And it was the number one focus of my life for the first 17 years of my life, honestly. At the core of it, it's really just problem solving.
00:17:30 Speaker_03
And you have code as the medium by which you build the solutions and all the things that you're designing around and so on. But the core of it is really whether you can figure out these algorithmic problems and understand the right approaches.
00:17:42 Speaker_02
I wonder if you could sketch for us what your best guess is as to what will happen in this world in the next five to 10 years.
00:17:49 Speaker_02
Computer programming writ large, obviously you're directly trying to affect that trajectory and what the answer is, but you also occupy this cool seat with maybe five or 10 others at the very bleeding edge frontier.
00:18:00 Speaker_02
So not only are you pushing that frontier, but it probably gives you the chance to see a little bit further. As you look out, what do you see? What do you think?
00:18:08 Speaker_02
Maybe you could draw confidence intervals or something for us, best case, worst case or something. But if you think forward five years, where might we get in terms of what's possible with engineering?
00:18:18 Speaker_03
I mean, for one, you're asking about competitive programming. The best programmer in history is actually a guy named Gennady Korotkevich. He's from Belarus. And I would imagine that we have AI that can beat Gennady in the next one or two years.
00:18:31 Speaker_03
Beating the best programmer in the world at this programming competition, I think that is going to happen. And I think it's going to be like an AlphaGo-style moment where it's obviously like, holy shit, this is truly there.
00:18:40 Speaker_03
The tech today, honestly, is already there. Folks have been putting out things. Google put out something on the IMO, the Math Olympiad, and OpenAI put something out on the IOI, and they're already at a level where they're competitive
00:18:51 Speaker_03
with some of the best competitors in the world. Getting to number one is honestly not too far from that.
00:18:57 Speaker_03
I think what happens to software engineering as a whole is I really come back to this point of what we're saying that all of programming, it really is just about telling your computer what to do.
00:19:06 Speaker_03
I see a lot of this work that's happening with AI and AI agents as truly just the next generation of human-computer interface. And so what I mean by that, obviously, is if you think about a programming language, let's say Python, for example,
00:19:17 Speaker_03
perhaps the most popular language out there. Python is great. It's beautiful and everything. But if you really just look at it, it is such a stark compromise that you have to be making from both sides.
00:19:27 Speaker_03
From the human side, obviously, you have to learn how to code. 99% of people in the world don't know how to code. And that takes years to learn all the little details. And here's how you do a lambda function.
00:19:37 Speaker_03
And here's how you do this little particular thing. And these are all things you have to learn just to be able to talk to your computer.
00:19:43 Speaker_03
And from the computer side, Python itself is notoriously actually slow and inefficient and so on, even compared to other languages.
00:19:49 Speaker_03
But obviously compared to how fast you could truly run things on bare metal assembly, if you knew this is exactly what I want to do, you can make all of the systems, honestly, 10 or 100x faster.
00:20:01 Speaker_03
And the way I think about it is Python is maybe the best language out there by a lot of people's definitions. And really it's this awkward compromise that we have to make because this is the only way that humans and computers are really communicating.
00:20:12 Speaker_03
today.
00:20:13 Speaker_03
One of the great things I think that AI will really solve is essentially this translation problem where a human, any human, doesn't have to be somebody who is trained in code or something, can describe precisely what it is that they want to build.
00:20:26 Speaker_03
And the computer will build the efficient and feature complete version of that, that actually just works and does all these things. It's going to take some real time to get there, obviously, but I think that's the future that we're headed towards.
00:20:37 Speaker_02
Meaning, just to use a simple example, I, using natural language, want a very, very specific application to stick on my phone or something that maybe within five years will be at a point that me with no particular CS experience can get exactly what I want in a way that works with high fidelity without having to rely on human programmers.
00:20:58 Speaker_02
Do you think that's feasible?
00:20:59 Speaker_03
I think that's right. If I had to put down a number right now, it's obviously very hard to predict. I think five to 10 years is like the reasonable realm for that.
00:21:07 Speaker_03
One point I would make on that, even the way that we talk about applications, we kind of assume obviously that there's some application that has to generalize. One way to put it maybe is software engineering is so expensive today.
00:21:18 Speaker_03
It is so, so expensive that the only software that you can actually build that's worth building is basically applications that reach millions of people.
00:21:27 Speaker_03
It's like, okay, it was worth it to code Facebook because of how many people use the exact same software every day. YouTube or something or DoorDash or whatever.
00:21:35 Speaker_03
And obviously there are somewhat more niche things, but there's honestly lots and lots of things out there where it would be better served with code. There's communities of, call it a couple thousand people.
00:21:45 Speaker_03
There's little groups or little niches or things like that. There's personalized products even for one person. And so, whatever your day-to-day chores are, they really are better served with code.
00:21:55 Speaker_03
That code can do all of the things that you might need to do and just make all the execution of that much cleaner.
00:22:00 Speaker_03
The reason we don't have that, obviously, is because there's not really anyone in the world for whom it makes sense to write custom software just for their own personal use case every time they want to do something new.
00:22:11 Speaker_03
But I think we will get to that point where you are just able to execute these things and maybe it's code on the backend, but from the perspective of us, it's really, yeah, just explaining to the computer exactly what we want and seeing that happen.
00:22:22 Speaker_03
There's going to be a lot of middle steps, but yeah.
00:22:23 Speaker_02
I'm curious what you think about one layer down or up, depending on how you think about it, which is the same question for agents specifically and whether or not all of these systems are going to cause applications themselves to be less common and agents we interact with through voice or through text or through other means to do more and more of the things that we currently use applications to do.
00:22:46 Speaker_02
Since you're building an agentic system, I'd love you to just teach us about agents. Everyone's saying that now. You hear the word agent in every single call. I feel like that scene in Seinfeld talking about write-offs.
00:22:57 Speaker_02
You're like, you don't really know what that means, do you? Tell us what agents means to you and why it's such an important construct in this coming age of technology.
00:23:05 Speaker_03
Like we were saying, GPT came out two years ago. It was a huge surprise for everyone. And I think for a while, all of the products that came out of that were basically what I would call these text completion products that work off of that.
00:23:16 Speaker_03
And by the way, it makes sense, obviously, because the native form of a language model is put in this input text, tell me what the output text is. But that's kind of where you saw every application at first. GPT itself is obviously like that.
00:23:27 Speaker_03
You had a lot of Q&A products, marketing, copywriting: hey, I need you to write this paragraph for me that explains X, Y, and Z. Can you just put together a nicely written thing? Boom, here's the output. Customer support.
00:23:38 Speaker_03
Customer's asking something and then it's like, what would you say here? Here's all the information you need to know. Boom. Code, that was the same thing too, right?
00:23:44 Speaker_03
Where the first big code products, I mean, GitHub Copilot is probably the biggest of these. It's a text autocomplete where it's like, here's the file so far.
00:23:52 Speaker_03
Predict me the next line, and if it is correct, I'll just hit tab, and it obviously saves me the time of typing the line.
00:23:57 Speaker_03
It doesn't save me the time of thinking of the line because obviously it's still me that's deciding what's correct and what's not correct, but it does save the time of typing the line.
00:24:04 Speaker_03
And I think what we're now seeing and what everybody is talking about now with agents, although of course it's very early, is this new paradigm where models can actually act and make decisions in the real world.
00:24:14 Speaker_03
There's a lot to it and it's how humans do it too. It's not like a human just produces this perfect paragraph. Here's everything that I needed to give you and then you're all done.
00:24:23 Speaker_03
A lot of it is going and interacting with the real world, looking up websites or something, trying out things, seeing what works or what doesn't work.
00:24:31 Speaker_03
In the case of code, obviously running the code and looking at the output or looking at the logs and things like that.
00:24:37 Speaker_03
And I think maybe the simplest way to put it is in all of these verticals, I think we're going to see this where basically, hey, having a legal Q&A bot is great, but the next step is to have a lawyer.
00:24:47 Speaker_03
Having something that can answer questions about math problems or something that you have is great. The next step is to have like a full teacher.
00:24:54 Speaker_03
And obviously in software, it's the same thing where it's like having the text autocomplete for coding does make you faster yourself as an engineer. But what if you had a whole engineer that you're working with?
00:25:03 Speaker_03
That's kind of the core of this agentic work.
00:25:06 Speaker_02
Maybe you can describe then how literally Devin is built.
00:25:09 Speaker_02
Behind this question, I'm also curious about taking your experience as advice for other builders who are using core foundation models as a key part of the thing that they're building and building in such a way that is complementary to those underlying foundation models and won't just get subsumed by them.
00:25:27 Speaker_02
Everyone asks about this. I'm sure every VC asks every founder, like, how will OpenAI not just do this? So tell us about how Devin is built and how you think about building with and alongside these evolving foundation models.
00:25:40 Speaker_03
We are a hundred percent laser focused on solving all the problems along the way of software engineering. And the truth is software engineering is just very messy.
00:25:48 Speaker_03
My point with that is just that there's this idea of solving these math problems in a vacuum. For example, there's a lot of research that's going towards that, but there's also this entire thread in between of how do you take that
00:25:59 Speaker_03
IQ and turn that into something that is actually meaningfully useful for software engineers in their day-to-day work. So there's a ton of stuff that needs to be done on adapting the capabilities for a particular tooling and particular stack.
00:26:12 Speaker_03
And then there's also a ton of stuff that has to be done on the product side as well. I'll just give several examples of that. A simple one is, obviously, Devin has to be plugged into all your tools.
00:26:20 Speaker_03
Solving problems in a vacuum is great, but Devin has to work in your GitHub. It has to be working in your code base. It has to use your Confluence docs or whatever it is. That's where the information is. It has to read the bugs on JIRA.
00:26:31 Speaker_03
And on top of that, obviously, Devin needs to learn with the code base over time. And so even a really, really, really sharp software engineer, they join your company on day one.
00:26:40 Speaker_03
They're probably not going to be all that productive, at least compared to somebody who's been there for five years and knows the entire code base in and out. Devin needs to learn the same way of understanding, hey, here's what this file does.
00:26:51 Speaker_03
Here's what these functions are for. If I want to do the schema migration, here's step one, two, three, to get that done and all that.
00:26:58 Speaker_03
And then there's obviously all of the actual step-by-step decision-making that goes into it, which is somewhat different. We were talking about fixing a bug. I think a very natural flow of fixing a bug is: okay, I got a report for a bug.
00:27:10 Speaker_03
First thing I'm going to do is I'm going to try and reproduce the bug myself and see what's going on. I got to go run all the code locally and try to make the bug happen.
00:27:17 Speaker_03
If the bug does happen, I'm going to go and take a look at the logs and see what the error was or what happened exactly, what the variables were. And then from there, I'm going to go read the relevant files.
00:27:29 Speaker_03
Maybe make some edits, test things out and whatever. And then I'm going to go test it again and see if it works. And if that all works, then I'm going to add a unit test to make sure it doesn't fail again and whatever.
00:27:37 Speaker_03
And all this step-by-step decision making, it's a very different problem, obviously, from text completion. What would somebody really smart on the internet say?
00:27:45 Speaker_03
How do you use the tools available to you and make a sequence of decisions that maximizes your chance of success?
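The bug-fixing flow described above can be sketched as a simple agent loop: reproduce, read the logs, edit, re-test, then lock in the fix with a regression test. This is an illustrative sketch only, not Devin's actual implementation; the tool functions `run_tests` and `apply_edit` are hypothetical stand-ins.

```python
# A minimal sketch of the agentic bug-fix loop described above. The tool
# functions are hypothetical stand-ins; this is not Devin's real tooling.

def agent_fix_bug(run_tests, apply_edit, max_steps=5):
    """run_tests() -> (passed, logs); apply_edit(logs) applies a candidate fix."""
    trace = []
    for _ in range(max_steps):
        passed, logs = run_tests()      # step 1: run the code, try to reproduce
        trace.append(logs)              # step 2: look at the logs
        if passed:
            trace.append("added regression test")  # final step: lock in the fix
            return True, trace
        apply_edit(logs)                # step 3: edit the relevant files, retry
    return False, trace                 # gave up after max_steps attempts

# Toy environment: the "bug" disappears after a single edit.
state = {"fixed": False}
ok, trace = agent_fix_bug(
    run_tests=lambda: (state["fixed"], "ok" if state["fixed"] else "TypeError in handler"),
    apply_edit=lambda logs: state.update(fixed=True),
)
```

The point of the loop structure is the "sequence of decisions" framing: each iteration consumes the environment's feedback (the logs) rather than predicting text in a vacuum.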
00:27:51 Speaker_03
I think the other question you asked was how, as a builder, to think about this in relation to obviously this steady stream of improving models.
00:27:58 Speaker_03
It's a really interesting one because a lot of other technologies that we've had have been a lot more step function. Mobile phones came out and the first question was, what are you going to do when everyone has a smartphone in their pocket?
00:28:08 Speaker_03
That's in some ways the core of Uber or Airbnb or DoorDash or all these things. And you can kind of think about: okay, this is the technology shift, and let's think about what happens over the next coming years now that we have this shift.
00:28:20 Speaker_03
AI is a lot more of a very incremental thing. Here's GPT-3 and 3.5 and 4 and each little technology gets a little bit better over time.
00:28:28 Speaker_03
And part of what that implies, I think, is, yeah, just really thinking about what are the models not going to do by themselves over time.
00:28:37 Speaker_03
And a lot of the stack and a lot of the things that we care about obviously are around building capabilities and the product experience specifically for software.
00:28:44 Speaker_03
I have no doubt that the foundation models will continue to increase in IQ and increase in IQ and so on over time. But a lot of the detail of all of the tools that you work with, how you make decisions with all of that,
00:28:55 Speaker_03
even how you communicate with the user. What is the user interface where you're talking with Devin and giving feedback and seeing what Devin's up to and checking in and making sure nothing's wrong and all these things.
00:29:04 Speaker_03
There's a lot, a lot of detail obviously in that flow that really brings out the inherent IQ, to put it that way. We've always thought about it in that way of how do we make sure everything that we're doing is complementary with the model increases?
00:29:16 Speaker_03
And by the way, complementary with everything else, the hardware is getting faster all the time. The whole stack is getting more efficient everywhere.
00:29:22 Speaker_03
And we have our part of the stack and we have always thought about it as how do we do as well as possible in our part of the stack, specifically for software.
00:29:29 Speaker_02
What improvements are you hoping for from the foundation model companies that would make Devin more effective in the future than it is today?
00:29:38 Speaker_03
I should mention, by the way, the foundation model labs are all super, super excited about agents. And it makes sense. I think one thing which is kind of interesting, just practically is let's say you asked GPT a question.
00:29:49 Speaker_03
Who's the fifth president or something? You ask one question, you get one answer, and that's a single model query.
00:29:53 Speaker_03
And obviously there's a consumer pricing and everything, but if you're going by the API pricing, that costs one millionth of a dollar or something like that, something ridiculously low.
00:30:02 Speaker_03
Whereas if you're asking Devin, hey, I've got this new feature that I need you to build out. Can you build it and test it and make sure it looks good?
00:30:08 Speaker_03
Over the course of that, Devin's probably going to be making hundreds, if not thousands actually of decisions over the next half an hour or something while it's going and building that out.
00:30:17 Speaker_03
And it also just practically means it's actually just a lot more inference on GPUs. And so for that reason, what we'll see over the next while, honestly, even the next one year, I think agents are going to become a lot more widespread.
00:30:28 Speaker_03
And what we'll see actually is I think the dominant usage of these GPUs, of these language models and everything will be for agents.
00:30:34 Speaker_03
Largely because every single agent query is just orders of magnitude more underlying language model calls than a single Q&A query.
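The magnitude gap being described can be put in rough numbers. The per-call cost and the call counts below are illustrative assumptions for the arithmetic, not actual API pricing:

```python
# Back-of-envelope comparison of a single Q&A turn versus an agent session.
# COST_PER_CALL and the call counts are assumed for illustration only.
COST_PER_CALL = 1e-4       # assumed average cost of one model call, in dollars

qa_calls = 1               # one question, one answer: a single model query
agent_calls = 1000         # an agent making ~1000 decisions over half an hour

qa_cost = qa_calls * COST_PER_CALL        # one Q&A turn
agent_cost = agent_calls * COST_PER_CALL  # one agent task
ratio = agent_cost / qa_cost              # the agent session uses 1000x the inference
```

Under these assumptions a single agent task consumes a thousand times the inference of a Q&A turn, which is why agent workloads would come to dominate GPU usage.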
00:30:44 Speaker_03
I think what happens next with that, which I think these foundation model labs are all thinking about is, yeah, how do we build our models to be as optimized as possible for agents?
00:30:52 Speaker_03
And so I think a lot of these things of working with multi-turn traces, being able to handle long context outputs, for example, like context is a big thing that people are spending a lot of time thinking about.
00:31:03 Speaker_03
And with an agent, again, it's like, do you have half an hour of decisions? Obviously it helps a lot to know what's been going on in the last half hour. So those are some of the particular technologies that will really help with that.
00:31:13 Speaker_02
How do you use Devin in your own building of Devin?
00:31:18 Speaker_03
We use a lot of Devin, actually, when we're building Devin. It really is this new workflow, a very asynchronous workflow. And so I'll just give an example. We do all of our communications in Slack. We have a few channels.
00:31:28 Speaker_03
We have a crashes channel where every production crash gets reported. We have a front-end channel where we talk about any like front-end feature requests, or maybe like, Hey, let's switch the flow around this way.
00:31:38 Speaker_03
Or, hey, this design is a little bit off, or things like that: front-end things that come up. We have a bugs channel where users are filing bug reports of, hey, this thing didn't work or something, or this thing broke or whatever.
00:31:49 Speaker_03
In all of these channels, one thing we do every time something gets reported is we tag Devin. You just go through the channel: hey, this thing's broken, @Devin. Hey, somebody requested this, @Devin. Hey, we just had a crash, @Devin.
00:32:01 Speaker_03
Devin's not perfect. Devin's a junior engineer and often does need a lot of guidance, and I would say probably about 50% of the time, Devin's output on one of these is just a pull request that's straight out of the box, ready to merge.
00:32:13 Speaker_03
And the other 50% of the time, obviously you can make the call of, okay, do I want to talk with Devin and coach Devin through this? Do I want to go and make some changes myself and just touch up whatever needs to be fixed?
00:32:23 Speaker_03
Or do I want to just go start from scratch if it's especially far off or whatever, but having this be part of our workflow, you had to report the bug anyway. And so you might as well have just
00:32:32 Speaker_03
tacked on an @Devin at the end, and then you get a first pass for free. And so a lot of workflows like this, a lot of it is just thinking about how to put it into the asynchronous workflows.
00:32:41 Speaker_03
A similar one, honestly, is if one of our engineers is building some big new project and they're going and implementing things or building the IDE, for example. This happens all the time as an engineer.
00:32:51 Speaker_03
You're going to go through the code and you're going to see all these little things that are like, man, why did we write it this way? This is so messy.
00:32:56 Speaker_03
Or like this thing is technically correct, but it is not the most stable way to do this or lots of little things. You get caught up in a lot of these little things as you're building your bigger project. It does distract and it takes away your focus.
00:33:07 Speaker_03
And what we do now, obviously, is you just kick off Devin from the IDE for each one of these. And it's like, hey, can you go refactor this one real quick? Hey, can you change this to be type safe?
00:33:15 Speaker_03
Hey, order this in a different way so that there's not going to be a race condition or something. And you're doing all these little fixes and you're kicking them off async. And then you're doing your own project in the meantime.
00:33:24 Speaker_03
In both of these cases, a lot of it is really just figuring out these asynchronous flows.
00:33:28 Speaker_02
It seems like part of the answer in the near term. You said earlier that maybe 10% of the engineer's time is spent on what a friend of mine calls positive engineering, building things, the creative part, and 90% might be on making sure things work, more almost like a mechanic than a creative, productive engineer.
00:33:44 Speaker_02
So does that mean that in the near term, what Devin and other systems like it are going to help us do is eat that 90% and keep the programmer focused on the 10%? And then what happens beyond that?
00:33:55 Speaker_02
If we beat the world's best programmer in two years, why do we need programmers? Will there be programmers? What will they do?
00:34:03 Speaker_03
So I think it's exactly right, eating the 90% of all the miscellaneous implementation and debugging and iteration and whatever. I think what happens then is honestly, you just get to do 10X more of the 10%.
00:34:14 Speaker_03
And I think the cool thing about it, we should say it outright, obviously, there's a lot of fear in AI about what's going to happen and what happens next.
00:34:19 Speaker_03
But I think one of the really beautiful things about software is there really is 10 times more software to go write if we just had 10 times the capacity. Over the last however many years, we've always been bottlenecked by supply.
00:34:31 Speaker_03
If you talk to any engineering team, I haven't met a single team who said, yeah, we built everything that we wanted to build. There's actually no software left to do. Every team out there has 20 projects that they want to do.
00:34:40 Speaker_03
They have to pick three to take on this week because there's just so few software engineers and software just takes so long to build.
00:34:46 Speaker_03
I honestly think we're going to see a lot of this Jevons paradox where we're just going to be building more and more and more software as it gets easier. It's what we've seen already, but maybe to a somewhat lesser scale.
00:34:55 Speaker_03
Building software now versus building it in the 80s probably is about 10X easier or so. And this shift will happen a lot quicker, obviously, but over that 40 years, we've gotten a lot more programmers and we've built a lot more software.
00:35:07 Speaker_03
And I think we're going to see a shift that looks like that as well.
00:35:09 Speaker_03
I think in the long term, I think the soul of software engineering, again, is each person getting to decide what are the problems that I want to solve and what is exactly the solution that I want to build. AI is not magic.
00:35:19 Speaker_03
Obviously, it's not going to go and solve all your problems for you, but it will give you the power to specify exactly what the solution is that you want to have and just be able to have that happen and build that out.
00:35:30 Speaker_02
Do you think in general that all this means that business people need to start thinking harder about demand and distribution? If we've been supply constrained, and that's about to go away.
00:35:40 Speaker_02
Does that mean that the people that are going to win in the future are those with better demand and distribution?
00:35:45 Speaker_03
I think in software, especially, we have always been capped by supply and not demand. Obviously, there are a lot of things that are more fixed in terms of their demand.
00:35:52 Speaker_03
I think there's a lot of interesting questions of what happens to the economy with this and a lot of interesting second order effects. The simplest way to put it, where does all the money in tech go?
00:36:00 Speaker_03
Where does the money in venture capital go? What does it go into? And it is primarily software engineers.
00:36:06 Speaker_03
So when you're getting 10X more bang for the buck for every software engineer that you're bringing on, it does change the landscape a lot of how much people can build with products and how much people can do.
00:36:15 Speaker_03
And then I think down the line, you see a lot of pretty interesting effects with businesses. I see a few different categories of businesses broadly.
00:36:22 Speaker_03
If we're being blunt, you know, I think the toughest category to be in is a business that's heavily reliant on switching costs and lock-in.
00:36:28 Speaker_03
A lot of businesses out there, software businesses, that the reason they make all the revenue that they make is because it is so painful to switch to the other products or the other technology or something like that.
00:36:38 Speaker_03
If Devin is just able to do all that implementation and migration and those costs go to zero, obviously it's going to be a lot more about having the better product. That's going to be a real shift, I think, that companies are going to deal with.
00:36:48 Speaker_03
At the same time, it's not a nihilist view. You know, there are a lot of things that will be just as strong, if not stronger. Network effects, I think, will always be golden. I think data and personalization is going to be even more powerful.
00:36:59 Speaker_03
So a business that owns the infrastructure, for example, or a business that has all the personal relationships and the data and so on, being able to serve each person an individualized product that's truly optimized just for them.
00:37:10 Speaker_03
Instead of this clunky thing that kind of works for everybody and touches all the millions of people, but it's not the perfect product for anybody. I think a lot of those businesses will be able to do very well.
00:37:20 Speaker_02
What kinds of questions are becoming most common when you spend your time talking to the senior most people at the big established technology companies?
00:37:29 Speaker_03
There's definitely a lot of folks asking, how do I think about my engineering headcount and how do I work with this?
00:37:34 Speaker_03
I think we're very much in the paradigm where people like to say this, AI won't replace you, but someone using AI better than you might.
00:37:41 Speaker_03
And so a lot of it is how do you get smart and how do you make sure you're on top of the latest technologies and you're ready for
00:37:47 Speaker_03
the shift, which is honestly going to come quicker than most people imagine, and getting your whole team set for that and getting things built for that.
00:37:53 Speaker_03
I think there's also a special case actually with these technology businesses themselves, folks who are building SaaS products or things like that, where we talk so much about agents.
00:38:01 Speaker_03
I actually think one of the kind of interesting product changes is that a lot of these companies are actually going to be serving agents in addition to humans.
00:38:09 Speaker_03
There are a lot of great businesses out there that are basically meant for this collaboration, coordination, a lot of interfacing.
00:38:15 Speaker_03
We talked about companies like Slack or GitHub or Datadog or Atlassian or things like that, where there's a lot of work and compiling that they do. And also a lot of it is about the interface with how humans interact.
00:38:27 Speaker_03
I think the way I think about it is right now, it's those are really valuable tools for agents to be using as well. Sure, you can imagine this nihilist view where an agent is just building everything on its own from scratch.
00:38:37 Speaker_03
But the truth is there's a reason that we look at logs the way that we look at logs, a reason that we set up our documents the way that we set up our documents, and it does just make it easier.
00:38:46 Speaker_03
It makes the problem easier of getting the context that you need. So what will happen actually is we're going to have a lot of agents who are using these products as well.
00:38:53 Speaker_03
So businesses that are in that space, a lot of folks are actually thinking about how do I make my product ready for my new wave of agent customers in addition to all the human customers.
00:39:03 Speaker_02
I was talking to Martin Casado about these declining cost curves that technology creates.
00:39:09 Speaker_02
And the first one that's clearly happened already in AI is like the cost of marginal content creation has started its march towards zero in all different forms of media.
00:39:18 Speaker_02
It seems like with you and what you're building, there's that same curve story happening in the cost of software itself. How do you build a software business in that reality?
00:39:27 Speaker_02
If I fast forward 10 years or something sufficiently long, why should Devin cost anything? Why won't there just be like an open-source Devin that does everything for me?
00:39:37 Speaker_02
And what do I need to pay for software if we're entering an era of marginal cost of creating software being very close to zero?
00:39:45 Speaker_03
I mean, I think right now, Devin is really the only one out there in the market that's doing that. But as you're saying, I'm sure there will be lots and lots of competitors pretty soon, even, and folks that want to do similar things.
00:39:55 Speaker_03
And competition is great for the consumer. We were just talking about the 10X engineers and things like that. The truth is that every level of decision-making quality and every level of skill that you can have in your software engineer actually
00:40:05 Speaker_03
really does have this exponential reward curve, and it makes a big difference. And one of the things that we're getting obviously is this ability to really, really personalize Devin to each of our customers.
00:40:14 Speaker_03
And so with a lot of our customers today, for example, Devin knows the entire code base in and out. Devin wrote half the files in the code base. And so there's all these details in it.
00:40:24 Speaker_03
Obviously it's not just the lines of code, it's the decision-making that went into it and the thought process and so on that Devin has internalized while working with the entire engineering team.
00:40:33 Speaker_03
And it really is that trade-off where it is about, here's an engineer who's maybe smart and sharp and stuff, but is starting day one on your code base today versus here's an engineer who's been with your code base for years and years and understands every file and has built all of these things and knows exactly what your engineers mean when they ask certain things.
00:40:50 Speaker_03
And there's a real trade-off there. It comes down to these things where it's kind of like pure technology switching costs, where it's migrating all your files to some new thing or moving to some different platform.
00:41:00 Speaker_03
Yeah, I think that is going to get cheaper and cheaper over time. At the same time, obviously, if you have the level of personalization, if you have the network effects or whatever, it's not about the cost to switch.
00:41:10 Speaker_03
It's really just about you're able to provide a better product.
00:41:13 Speaker_02
Maybe say a bit about pricing. This is such an interesting question. It seems like someone at OpenAI put their finger in the air and decided 20 bucks was a reasonable price to charge and a whole business ecosystem has spun up.
00:41:25 Speaker_02
And that seems to be the default pricing. You made a very different pricing choice. It's 500 bucks a month. OpenAI just released the $200 a month option.
00:41:32 Speaker_02
So we're marching upward, but talk me through how you thought about and think about pricing in this new world. Yeah, we're going to see more of that, obviously.
00:41:41 Speaker_03
I mean, I think for us, it's funny, we had a lot of questions with our launch, for example, questions of like, how much do we anthropomorphize Devin? How do we explain the structure of it and so on?
00:41:51 Speaker_03
But I really do think it is a pretty different paradigm where with agents, it does come down to usage for better or for worse. There's reasons for both the inputs and the outputs for that. For the inputs, like we're saying, I mean,
00:42:03 Speaker_03
It is just more expensive to run. That is the case. You're doing more queries and more work. There's more GPU compute that goes into each job or each task or something like that.
00:42:12 Speaker_03
But I think with the outputs as well, obviously, when you're starting to take on tasks that are real end-to-end tasks, it's just much cleaner and much clearer to measure the concrete impact that something like this has.
00:42:25 Speaker_03
I think the proxy that we essentially want to have is value-based pricing, where it's basically, we want to make a lot of the projects or the bugs or the features or whatever that you're building 10x cheaper to build.
00:42:34 Speaker_03
And so we want the cost of Devin to run on those to be 10x less. It's obviously hard to have a perfect proxy for that, but the proxy that we have essentially is usage.
00:42:42 Speaker_03
We have a unit called ACUs, Agent Compute Units, and it's basically all the decisions that Devin makes, the box that Devin uses, the code that Devin runs, or things like that.
00:42:50 Speaker_03
And it roughly corresponds to, basically, for every hour of Devin's work that you're doing, again, it depends on different machines, but you're paying something like 8 to 12 bucks or something like that. Obviously, Devin can do a lot in an hour.
00:43:02 Speaker_03
The setup is such that we want it to be 10x cheaper than having to do those yourself. But I think that is going to be broadly the new paradigm when we start to have AIs that are doing tasks instead of answering questions.
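The ACU pricing sketched above reduces to simple arithmetic. The $10/hour midpoint is taken from the 8-to-12-dollar range mentioned in the conversation; the human engineer rate below is an illustrative assumption:

```python
# Sketch of the usage-based pricing described above: Devin's work is metered
# in Agent Compute Units, landing at roughly 8-12 dollars per hour of work.
# The hourly midpoint and the human rate are assumptions for illustration.

def devin_cost(hours_of_work, dollars_per_hour=10.0):
    """Approximate cost of a Devin session at ~$8-12 per hour of agent work."""
    return hours_of_work * dollars_per_hour

# Value-based framing: aim for roughly 10x cheaper than doing the task by hand.
human_rate = 100.0                       # assumed human engineer cost per hour
task_hours = 2.0
agent_price = devin_cost(task_hours)     # 20.0 dollars of agent time
human_price = task_hours * human_rate    # 200.0 dollars of human time
multiple = human_price / agent_price     # ~10x cheaper, matching the stated goal
```

Usage here stands in as the measurable proxy for value delivered, which is the design choice being described: meter decisions and compute rather than seats.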
00:43:13 Speaker_02
Any thoughts on this big fun, almost has become like a cocktail party style question around whether or not we've achieved AGI or not, and whether or not that's a question that you care about? I'm curious what you think about this one.
00:43:26 Speaker_03
I'm in camp number two on that. We already have AGI. Someone said this in our chat earlier, in 2017, if you ask if we have AGI, the answer is no. In 2024, you ask if we have AGI, the answer is, well, you got to define AGI. It depends on this and that.
00:43:38 Speaker_03
I think there is a somewhat recursive definition of AGI, obviously, that people like to say, oh, it's doing 80% of human labor. It's like, well, if you could do that, then humans would be doing the 20% so it wouldn't be the 80% anymore.
00:43:50 Speaker_03
There's some recursiveness, but the way I would say it practically is: I think there's going to be a long tail of things where we will push the capabilities, and for a while there are going to be lots of little things where it's like, well, actually, humans are still better at this one thing. And then we'll solve that thing, and then people will be like, oh, well, humans actually can still do this one thing. And we obviously have a lot of pride as humans in our own species of
00:44:11 Speaker_03
wanting to find the things that make us unique and so on. But I think that, like, practically, what matters for the world, obviously, is, is the AI good enough?
00:44:21 Speaker_03
And is it built into products and distributed well enough that it's actually affecting the world and affecting the economy and affecting people's day-to-day lives?
00:44:30 Speaker_03
In some sense, that question matters more than, is it doing 95% or 96% or 97% of, you know, the human's workflow. It's more a question of, are humans actually out there doing 25 times more because of the AI technology?
00:44:43 Speaker_03
And I think that actually is much more of a practical question. And in some sense, it's like we were saying about the IOIs and things like that. The math problems that these base AIs can solve are already, honestly, shockingly good.
00:44:55 Speaker_03
And among humans, they would be in the upper echelons, if not quite number one in the world yet. And I think the biggest distinction is actually not whether it's number one in the world or only top 0.1% of humans or something like that.
00:45:07 Speaker_03
The distinction is a bit more of, is that level of intelligence actually used by every human out there in the world and done in a way that's actually giving them a lot of value?
00:45:16 Speaker_03
That's going to be the big shift that actually gets us to the massive GDP growth, massive efficiency gains, quality of life, whatever you want to call it.
00:45:24 Speaker_02
Investors mostly are obsessing over scaling laws, because it just has such big implications for public equities and for companies building new products in the private markets, and so that's where they tend to focus.
00:45:35 Speaker_02
If you are training your attention on what's going on in the foundation model world, what do you spend your time wondering about, asking your friends about, learning about? Where is your eye trained in that world?
00:45:46 Speaker_03
Yeah, my hot take from this is I think scaling laws are a little bit of a myth. I think we have made continued progress on these models and push the capabilities and things like that.
00:45:54 Speaker_03
And there's been a lot of new technologies, obviously, that we figured out. There's way better post-training, and there's some of this stuff, obviously, that's going on with reinforcement learning and things like that.
00:46:03 Speaker_03
Sure, there is some return to scale, but I think a lot of what we're seeing is also the introduction of new technologies that are making this more efficient or allowing it to keep improving with scale beyond that.
00:46:14 Speaker_03
And so people sometimes like to talk about scaling laws as if you just sit back and as long as you got the hardware for it, it's just going to get better and better and better.
00:46:20 Speaker_03
And I think really, obviously we have needed more and more hardware, but it's also been innovations that have actually gotten us there.
00:46:26 Speaker_03
I think practically with foundation models, we always obviously want to have a pulse on what's going on and what things are going on next. We work very closely with all the foundation model labs. We do evaluation of early models with them.
00:46:39 Speaker_03
We do custom training of models with them, things like that. It's like we're saying, it is an incremental thing. These things will get better. And a lot of the question as a decision maker is to basically just have a very clear sense on
00:46:52 Speaker_03
What are the capabilities that will get better? And what are either the capabilities, or the product, or the human experience of it, that will always need to complement these for you to deliver something of value to your customers?
00:47:03 Speaker_02
I'm really curious to learn a bit about the formative experiences for you as a business person.
00:47:08 Speaker_02
Obviously, we've talked a lot about your formative experiences as a technologist: building Cognition, and also the IOI and these other competitions in the first part of your life.
00:47:18 Speaker_02
I'm sure everyone's seen the video of you when you were 12 or something winning math competitions. Obviously, you've had some pretty cool formative experiences. What have those been on the business side?
00:47:26 Speaker_02
For example, I'm interested to hear about your experience with Lunch Club and what you were building, why you were building it, and what that experience or that chapter of your business career taught you.
00:47:37 Speaker_03
Yeah, Lunch Club was my first company. I dropped out of school actually to start the company. It was a great experience, first of all. It was AI for professional networking.
00:47:45 Speaker_03
We made millions of meetings happen, and it was a super fun team and product to build. I would say one of the big things, honestly, that I learned is, it sounds simple, but it really does come down to the fundamentals.
00:47:55 Speaker_03
I feel like the common pieces of startup wisdom that everybody talks about are, in some sense, so obvious that people feel they're not even worth mentioning, like: go as fast as you can. Everybody says that.
00:48:06 Speaker_03
Everybody knows that. Never compromise on recruiting. Stay as close to your customers as possible. Focus on building a product that people love and want. These are almost tautologically true.
00:48:18 Speaker_03
But the thing is, the reality is there's always another gear that you can kick into with your fundamentals. And that's how I've really thought about it, now having gone and seen it. Simple examples: everybody says you should go fast.
00:48:30 Speaker_03
In our case, we incorporated the company in January. We were building the first versions of the products. We got it out to the first group of initial users in February. We did a launch in March. We raised a big round in April.
00:48:41 Speaker_03
We did a big partnership with Microsoft in May. While we were doing that, we got on a lot of enterprise customers. We grew the team.
00:48:47 Speaker_03
No matter how fast you're going, and every startup in the valley obviously thinks they're going quite fast, it's still worth it to really push for even faster.
00:48:55 Speaker_03
And no matter how close you are to your customers and how deeply you understand their problems, you're probably not going to regret spending even more time with them and understanding that deeper.
00:49:03 Speaker_03
And no matter how high your bar is for hiring, you're probably not going to regret trying to go for folks that are even higher still. And I think that is the biggest lesson I've learned with startups.
00:49:13 Speaker_02
On the competitive landscape, does it feel like this big, exciting greenfield, or does it feel like battle lines are being drawn and it's insanely competitive for every available opening, with, you know, sharp elbows?
00:49:28 Speaker_02
Characterize what the playing field feels like right now. You all work out of this amazing house that I've been to. It's such a cool environment, which feels so startup-y, and yet the next day you're with Microsoft. The contrast here of
00:49:41 Speaker_02
real, true startups in startup houses with raw, emerging cultures, interacting, like you are, with the biggest players in the world of technology inside of a few months. It's just a bizarre set of circumstances.
00:49:55 Speaker_02
How would you describe the playing field to an outsider?
00:49:59 Speaker_03
I think it's still very greenfield. I mean, the technology itself is so early. We're like one or two years in, in some sense, to all of this stuff really being valuable. One thing that I think about a lot is that software is just so big.
00:50:11 Speaker_03
We obviously know that, but there are so many facets of software, so many details, and so many different things that can be done and built. And so one way that I think about it is to look at each of the use cases that we're talking about.
00:50:22 Speaker_03
There are enormous and really, really meaningful businesses that are built around just that one use case. A lot of folks are talking about AI for data observability, for example. Datadog is a great business. It's making billions in revenue.
00:50:34 Speaker_03
It's something like a $50 or $60 billion market cap. It's doing very, very well and growing quickly. And as you can imagine, there's so much to do with AI for data observability in some form.
00:50:43 Speaker_03
Incident response is a huge thing that's only going to get bigger and bigger. And there are a lot of great businesses in the space; PagerDuty is a great business. But AI is going to make these things, again, orders of magnitude bigger.
00:50:52 Speaker_03
It's true with testing. It's true with modernization projects. It's true with migration.
00:50:56 Speaker_03
A lot of these big guys like Microsoft, Amazon, Google, or anyone, the amount of effort they have to go through to migrate customers onto their platform, even today is enormous. It's billions and billions of dollars every year.
00:51:08 Speaker_03
So all of these are just such enormous spaces, and there's still so many questions even about the product form factor.
00:51:14 Speaker_03
And as the technology gets better and better and better, the kinds of use cases that we'll be able to support as a space are only going to get bigger.
00:51:20 Speaker_02
What do you think the world is overestimating and underestimating right now about your whole world, about the automatability of software engineering?
00:51:31 Speaker_03
The speed of it is probably a big one. One thing I always think about if we're just looking all the way across AI, for example, I think agents in general are going to really work everywhere.
00:51:40 Speaker_03
Code is one of the first for a lot of reasons, but I would almost argue that one of the first agents, if you want to call it that, actually was self-driving cars.
00:51:47 Speaker_03
It's an actual AI that's making decisions in the real world, taking in feedback, and iterating with that. People are familiar with the story of self-driving cars. I actually lived in Mountain View in 2014.
00:51:56 Speaker_03
I would see the Waymos driving around all the time. I never got a driver's license. I had just started working at the time and figured I wasn't going to need one, that they were basically there and I'd be able to use them soon. We did finally get there.
00:52:09 Speaker_03
I think there's still a lot of room to grow, obviously, but it did take a lot of time.
00:52:12 Speaker_03
I think the difference between self-driving and code, and honestly a lot of other applications I think we'll see over the next year, is that with self-driving, you really, really need it to be at 99.99999% reliability.
00:52:25 Speaker_03
And there's not really a world, or there is, but it makes much less of a difference, where human and machine combined can do a lot more together. And I think that's people's typical example.
00:52:34 Speaker_03
It's like, okay, you know, there's going to be all the edge cases to figure out and all of that, which is true by the way.
00:52:39 Speaker_03
But the nice thing with software, for example, is there actually is just a really great way to do human plus machine and do a lot more. That's the case with Devin. Devin is not the 99.9999% solution.
00:52:49 Speaker_03
It looks a lot more like the 2014 Waymo, or honestly probably even a little earlier than that. But the thing is,
00:52:55 Speaker_03
Having Devin take a first pass or having Devin send you the code and you take a look and give a review or whatever it is like that, where Devin is doing the 90%, if you're doing one nine instead of six nines, it's still super, super useful.
00:53:07 Speaker_03
It actually does save you 90% of the time, whereas you're not going to get into a car that is only 90% consistent.
00:53:14 Speaker_03
And so I think there's going to be a real form factor difference in how soon we're going to see it actually really transform practical software, not just because of the capabilities, but also just because the product space is better set up for that.
00:53:26 Speaker_02
What was the lesson that Lunch Club taught you about the world of entrepreneurship? Was it something around market? Was it something around product?
00:53:35 Speaker_02
If you think back on that experience, what's the big takeaway and how are you applying that learning to what you're doing at Cognition?
00:53:43 Speaker_03
Probably the biggest one that stands out, honestly, is that sometimes it's easier to solve a bigger problem than it is to solve a smaller problem, which is why folks talk about this.
00:53:54 Speaker_03
If you're going after something that's truly enormous, and it's the thing, the most exciting thing, you'll be able to bring together a group of people who are all really excited to make it happen and really passionate and dedicated to pushing the frontier, in a way that actually sometimes makes it easier.
00:54:09 Speaker_03
Our team, for example, is about 20 people right now. Of those 20, I think 14 of them actually have been founders themselves before. And it's an interesting one, where we've all done companies, and honestly, it's a really, really talented team.
00:54:23 Speaker_03
A lot of these folks would get a blank check to raise money if that's what they wanted to do and they wanted to do their own thing. But a lot of it is just how do we come together and build something that's really great?
00:54:32 Speaker_03
I think that's one of the big lessons for me.
00:54:34 Speaker_02
What has been the most difficult or stressful moment so far in the one year Cognition journey?
00:54:41 Speaker_03
Oh, there's been a lot of it.
00:54:43 Speaker_02
Let's do a few then. I love these.
00:54:45 Speaker_03
One of my favorites, all of these launches, you know how it is with startups, obviously things are so last minute. So we had the GA launch this week, and then we obviously had our initial product launch back in March.
00:54:56 Speaker_03
And for both of those, the few days leading up to the launch were some of the most stressful but most memorable experiences. And our launch in March was huge. There's so much you have to do leading up to it.
00:55:08 Speaker_03
And obviously there's the product itself, and testing with early users, and making sure everything is truly working the way that it's supposed to, getting the infra ready for it, setting everything up.
00:55:18 Speaker_03
And then there's obviously all of these other things: filming the video, thinking about blog posts, thinking about the content, getting things out there, working with customers to get testimonials, and things like that.
00:55:26 Speaker_03
And for our March launch, actually, we were doing all of that in addition to a fundraise right before the launch as well. And so we were doing that.
00:55:34 Speaker_03
We had candidates who were in the pipeline, pretty meaningful candidates that we were working on and figuring out and everything during that as well. And so that weekend was a very fun time.
00:55:42 Speaker_03
That launch was actually marked by an article that went out by Ashlee Vance, a Bloomberg article about our company. It was supposed to go out at 6 a.m. Tuesday.
00:55:52 Speaker_03
We asked them to push it just a little bit later, and we got it to be 9 a.m. Tuesday instead of 6 a.m. We needed those extra few hours. We weren't done filming the video for the launch as of that night at 1 or 2 a.m.
00:56:05 Speaker_03
We were still recording lines and then there was all the blog posts and putting that all together. It was a crazy weekend. Yeah. Did not get very much sleep that night basically.
00:56:14 Speaker_03
It's just a really fun moment to get together and really do everything together. It's one of the reasons, actually, I think that us all just living and working in a house together is great.
00:56:20 Speaker_03
It's because there's something about the shared experience, yeah, sharing the late night energy and everything. I feel like it wouldn't be quite the same at an office. And yeah, this launch as well, you know, actually we were all in Utah.
00:56:31 Speaker_03
Which was kind of hilarious actually. Last minute decision, it's like, okay, we're going to use that. And we got on a plane that day. We found a cabin that would fit us all. Got into that and we were all just working on the launch.
00:56:41 Speaker_03
And it was the same story as always. Even having done this before, we got no wiser. And so we were still filming at 2 a.m. the night before, putting things together for the launch, getting all the content ready, and getting the infra ready.
00:56:51 Speaker_03
And obviously in this case, it's a full GA launch too, so we've got to be ready to handle 10x the load or even more than that. So there were all these kinds of things we had to do to get ready. And those are some of the funnest moments, honestly.
00:57:02 Speaker_02
I'm just extremely interested in launching things. I think there's so much rolled up into it that it's like business encompassed or something. Can you tell me the best and worst part about each of those two launches in retrospect?
00:57:14 Speaker_03
Honestly, the experience with the team is the best part. Just getting to build it together and do it together with the team. It really is the soul of startups. The worst, I mean, there's scares and stuff that happen all the time.
00:57:26 Speaker_03
The night before, it's just like, hey, we did our load testing, and this shit didn't work at all. All the infra went down. There's no way we're going to be able to handle the load. We were expecting to handle this many concurrent Devin sessions per minute.
00:57:36 Speaker_03
This did not work at all. And the last minute things we had to do, the onboarding flow, for example, last minute changes that we had to make.
00:57:42 Speaker_03
My co-founder, Steven, actually put out a post talking about how we shipped a whole thing where you could do a Slack connect with us and work directly with our team.
00:57:50 Speaker_03
That got started about 1 a.m. the night before the launch and shipped the morning of the launch. And Devin built a lot of that, because obviously, we needed the extra bandwidth from Devin.
00:57:59 Speaker_03
In the moment, every single thing that you're optimizing for is how do we make this not incredibly embarrassing for us all. And, you know, sometimes you just have to put it out there and do your best.
00:58:09 Speaker_02
How do you think about something like O1 Pro Mode and people using that to do coding versus Devin?
00:58:18 Speaker_03
Yeah, I think these tools are great. And I think a lot of companies are thinking about code and are focused on code. And like we were saying, code is such a big thing. There are a lot of cases where, yeah, sure.
00:58:26 Speaker_03
It's like you want to build something, a single file of code out of the box, and you want to think through particular problems or whatever, and have that be a really great single file of code or something.
00:58:36 Speaker_03
where, yeah, you can just go to o1 pro and consult it as an AI Stack Overflow, if you will, where you can paste in an error and it'll give you its best guess from what it has: here's what I think the error is.
00:58:47 Speaker_03
With Devin, there's a ton of these iterative flows that make a lot of natural sense. And so, for example, if it's not just writing a single file and that's all, but it's like,
00:58:56 Speaker_03
hey, I need you to work in my code base, build a new feature, plug it into all the existing stuff, and test it out to make sure it works. Then obviously all of those iterative steps, those are the things that Devin is doing for you.
00:59:07 Speaker_03
And similarly, if it was a debugging thing, like an AI Stack Overflow thing, for example. Actually, one of our first Devin moments, I want to say: we started building this as
00:59:16 Speaker_03
more of a research-type technology, just trying to understand multi-turn agents, iterative decision-making, and stuff. And eventually it made sense to build it into the Devin product.
00:59:25 Speaker_03
But one of the first things that really convinced us is we were trying to set up our database for our own work. We were setting up MongoDB and anyone who's done it knows what this flow looks like.
00:59:33 Speaker_03
Basically, it just gives you a set of commands, you try it, and then it doesn't work. And then you read the error, you copy-paste it, and you try to figure out what's going on; you Google it.
00:59:42 Speaker_03
You find something and you run that and then you run into a different error. You do that 10 times until the thing works. Everybody's setup is always a little bit different and there's always these little details that will get you.
00:59:51 Speaker_03
In this case, we'd spend a while trying to set up MongoDB and like it wasn't working. And so we just gave it to Devin. We're like, Hey, can you just please get this working locally? And then Devin did it.
01:00:00 Speaker_03
And that was the thing to what we were saying earlier. It's not just a stack overflow where it just gives you the one answer of, oh, you ran into this error. Oh, maybe you should try this.
01:00:08 Speaker_03
Devin sees the error and says, okay, let me see what's going on. Let me check what ports are open. Let me see what the schema of this database is. Let me see if the socket connection is actually set up.
01:00:17 Speaker_03
It's going and running all those commands to look at that. And then it uses that information to decide, okay, what's the next thing I'm going to try? And then it tries that. And if it runs into a new error, then it's able to debug that.
01:00:26 Speaker_03
The integrated autonomous flow is I think the biggest difference.
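The loop described here, run a command, read the error, decide what to try next, repeat until it works, can be sketched as a minimal agent loop. This is only an illustration of that flow under stated assumptions, not Devin's actual implementation; `propose_next` and `toy_model` are hypothetical names standing in for a real language-model call.

```python
import subprocess

def debug_loop(goal, propose_next, max_steps=10):
    """Iteratively run shell commands until one succeeds.

    `propose_next` stands in for the model: given the goal and the
    history of (command, output) pairs so far, it returns the next
    shell command to try.
    """
    history = []
    for _ in range(max_steps):
        command = propose_next(goal, history)
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        history.append((command, result.stdout + result.stderr))
        if result.returncode == 0:
            return command, history   # success: stop iterating
    return None, history              # gave up after max_steps

# A toy "model" that tries a failing command first, then a working one,
# mimicking the try / read-the-error / retry loop from the conversation.
def toy_model(goal, history):
    return "false" if not history else "echo done"
```

The key difference from a single-shot Q&A tool is that each command's real output feeds back into the next decision, rather than the model guessing once from a pasted error.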
01:00:29 Speaker_02
How hard is it to build that?
01:00:32 Speaker_02
I'm not technical enough to ask the best version of this question, but if I think about part of the secret sauce being the iterative agentic nature of the whole thing, what are the hardest parts about building that capability?
01:00:45 Speaker_03
The obvious one is the model capability itself. The models are roughly like, here's what a smart person on the internet might say.
01:00:51 Speaker_03
But if you're now thinking, okay, I'm doing a step-by-step process to make decisions to maximize my chance, not now, but after a hundred steps that I'll have solved the issue, then it's obviously a pretty different thing that you're solving for.
01:01:03 Speaker_03
And so a lot of it goes into the capabilities. But I think beyond that, too, there's so much with the infra and with the whole experience that gets quite involved, as you can imagine. And so, for example, Devin has its own machine that it works in.
01:01:15 Speaker_03
And a lot of the features that you'd want to support, for example: rolling back the state of the machine, hot-loading these quickly, being able to spin up a Devin instance with a new machine immediately, being able to navigate through all the previous decisions and go back and forth between these different things, or the ability to work with different Devins in parallel, things like that.
01:01:34 Speaker_03
Yeah, there are a lot of infrastructure problems that go into that as well. And I think it just comes down to this thing where software engineering is so messy.
01:01:42 Speaker_03
And so it really is just so different from doing problems in a vacuum, building all of these high-touch components where you're actually able to do these things in the real world.
01:01:52 Speaker_03
You're actually able to use all your integrations. You're actually able to maintain your own machine state or work within your Git checkout or all of these things.
01:02:00 Speaker_03
A lot of the mess of that is the part that I think that really comes out from us just really focusing only on software engineering.
01:02:07 Speaker_02
If you had to put your most sci-fi lenses on for the future of technology in the world, what are some things that you maybe talk about with your team just for fun that don't seem feasible yet, but might be feasible in the future that get you the most excited?
01:02:23 Speaker_03
My co-founder Walden has this great line, which I've always really loved, which is, we've been playing in Minecraft survival mode for so long, and now it's gonna be time for Minecraft creative mode.
01:02:32 Speaker_02
Describe what that means for those that don't have 10 year old boys.
01:02:36 Speaker_03
And so basically Minecraft is a sandbox open world game and it's meant to be a simulation of life. You can go and grow food, you can go and forage, you can mine, you can do all these different things, you can build stuff.
01:02:46 Speaker_03
It's kind of funny, because there are two major modes. There's one, survival mode, where you actually just have to deal with the enemies that come at you.
01:02:53 Speaker_03
You have to go build your own shelter and make sure you have enough food to stay alive and all these things.
01:02:57 Speaker_03
And there's obviously creativity within that, things that you get to do, but you are also constrained by how hungry your own character is or whether you're being attacked or something like that.
01:03:06 Speaker_03
Whereas Minecraft creative mode is basically a mode where you're able to put together whatever you want. It's not like I didn't find any iron ore today and so I'm not going to be able to build this thing. It's like, I want to build this thing.
01:03:17 Speaker_03
I want to try this new thing. I want to set up this really cool thing. There's folks who have set up entire computers and computer programs in Minecraft, which is insane.
01:03:25 Speaker_03
It's basically just focusing on the ideas and the imagination rather than the execution. It's not about, okay, now I'm going to have to go spend another two hours going and collecting gold ore because I want to build this thing.
01:03:36 Speaker_03
It's like, I know exactly what I want to build and now I have the freedom to just make that come to life. That's honestly, I think the big dream of AI as a whole is to be able to free up humans.
01:03:46 Speaker_03
The point of AI isn't to go and just solve all of our problems for us and leave us with nothing.
01:03:51 Speaker_03
I think it's to give us the power that, for the kinds of problems that we want to solve or the things that we want to build, to give us the ability to actually just turn that into reality.
01:03:59 Speaker_02
Yeah, it's so funny to imagine. I'll use my own experience. When I visited you, I think the demo was incredibly powerful. You asked me to come up with an application idea. I think it took eight minutes to build, and we watched it do it.
01:04:11 Speaker_02
Watching the iterative nature of it feels like a real aha moment because it really does look like it's in hyperspeed doing something that a person would do as it builds something.
01:04:20 Speaker_02
But maybe the more profound realization I had was that it built the thing that I described in natural language. But it wasn't until I saw the thing, the little stupid application that I made up, that I realized why that idea was bad.
01:04:36 Speaker_02
When I said the idea, it sounded good. And when I saw the thing, I knew why it was bad. And so I would say the thing that excites me is not that I can make my ideas,
01:04:49 Speaker_02
but that I can get better ideas, which I think is a pretty underrated part of this whole thing: just like a computer program is going to be faster now, anyone with ideas is going to be better and faster, because they can see the thing and see why it stinks.
01:05:03 Speaker_02
Does that resonate?
01:05:05 Speaker_03
That's a great point. It's very much the idea of turning the 10% of time that you get to spend on your ideas into 100% of your time. Whereas before, you've got to get a team together, and they're going to go and build the whole thing.
01:05:14 Speaker_03
And you spend a couple of days or something building this whole thing, and then you get to do your next step of idea iteration. And by the way, Devin still honestly has a lot of room to grow, and it's going to take a lot of progress, honestly, to get to
01:05:24 Speaker_03
this future. But to get to a point where you basically just get to do the ideation work, you get to iterate against that, you get to do the next idea, and so on.
01:05:31 Speaker_03
And instead of this 90% implementation with the little 10% chunks in between, you really just get to spend all your focus on the ideas. I'm super excited about that.
01:05:40 Speaker_02
It seems like one of the defining aspects of cognition is the degree of friendship and trust between the leaders is extremely unusual. Maybe you can describe the origin of that. But when you're moving fast, trust is really valuable.
01:05:54 Speaker_02
And I'm just curious to hear about that side of your building experience so far.
01:05:58 Speaker_03
Pretty much all of us on the team have known each other for years and years and years. Steven, Walden, Russell: I've known all of these guys for a long time, and the whole crew as well.
01:06:07 Speaker_03
I had always wondered, okay, is it going to be awkward to work with your friends? There are lots of things that come up. And if anything, it's honestly the other way around, where it's easier to have tough conversations because, you know, obviously.
01:06:18 Speaker_03
We were friends before all of this. All of this that we're doing is coming from a place of we are just trying to take care of each other and make something great together.
01:06:25 Speaker_03
When you both know that's true and when that undertone is always there, it's a lot easier to just call out stuff like, hey, I think this is totally wrong or we need to change this thing right now or things like that.
01:06:35 Speaker_02
What, in your experience talking to investors, present company included, what do people not ask you about the product, the company, or the vision that they should ask you about?
01:06:46 Speaker_03
Yeah, it's a good question. I think one thing that's definitely different talking about it versus feeling it, and that folks don't often ask about, really is just the experience with the team.
01:06:55 Speaker_03
I think the day to day, it's getting the product out there and seeing what folks are saying about it and talking with our users and things like that, obviously. But ourselves, I think just how we do it and how we build it as a team is truly
01:07:09 Speaker_03
special, I would say. I think we have some of the smartest and most ambitious and most capable people and yet somehow they're all such low ego people as well.
01:07:20 Speaker_03
I think that's truly the soul of it for me: the 20 of us in a house together, trying to do something meaningful. There's a big difference, I think, between talking about it and really seeing it live.
01:07:30 Speaker_03
And so one of the things that we often do when we're really pushing on it is having folks come by the house and just see what it's like. It'll be 1 a.m. or 2 a.m.
01:07:40 Speaker_03
and people are still building, and there's a lot of discussion, and stuff is moving quickly. And that's probably the thing about building the company that I'm most grateful for.
01:07:49 Speaker_02
What uncertainty interests you the most that's outside of your control?
01:07:56 Speaker_03
Interesting. Yeah. One of the uncertainties I actually think a lot about is broad adoption. It seems very clear to me now that the technology will get there or maybe already has gotten there in a lot of cases to be extremely impactful.
01:08:09 Speaker_03
And I think the rate at which we see it happening in different industries is going to be pretty fascinating. We can both think of things like medicine or law or things like that, where you can certainly imagine it will take time to do that.
01:08:20 Speaker_03
And then there's obviously just the whole world of distribution. I will call it a win when everyone in Baton Rouge, Louisiana, is actually having a significantly better life because of the work that AI is doing. It's a different curve everywhere.
01:08:33 Speaker_03
Obviously, I think we're lucky in software that developers are just kind of inherently curious and always want to try to be fast.
01:08:39 Speaker_03
But I do think that rates of adoption, it's going to be a real question with this technology, because I think the pace of progress is going to be such that technology in some places is going to be way ahead of reality.
01:08:49 Speaker_02
When we were first together, you did this insane card trick. Are you willing to do it for us? Yeah, let's do it. Tell us the origins of this trick.
01:08:57 Speaker_03
Yeah, so this is a card trick. So I wouldn't even call it a card trick. It's more of a card game. And so growing up, I always used to play this game called 24. It's a little math game.
01:09:06 Speaker_03
And I think it actually helps a lot with just getting the fluency in math. I think it's one thing to learn, okay, here's addition, here's multiplication, whatever, which is obviously the first building block.
01:09:17 Speaker_03
But also a lot of it is just this creativity and this ability to reason with numbers and experiment with numbers. And so I'll just show what the game looks like. And so the idea is you have four cards and there are four numbers.
01:09:27 Speaker_03
By the way, Jack is 11, Queen is 12, King is 13. So it would be 4, 7, 8, and 12 are the four numbers that we're dealing with here. And the idea with this is you can use the four numbers in any order, in any combination.
01:09:39 Speaker_03
You get to add, subtract, multiply, or divide any of these. And your goal is to put together an equation that uses all four of these and makes the number 24. And so here what I would do is: 12 minus 8 is 4, 4 times 7 is 28, and 28 minus 4 is 24.
01:09:56 Speaker_03
And so you can see how it uses a lot of this creativity, where you're messing around with this number 4: 24 is a multiple of 4, and so if I could just make a 6 from the other three, then maybe I could do something.
01:10:06 Speaker_03
It turns out that you can't quite do that, but you can make a 7 and subtract another 4 out, things like that, where you're playing with these ideas. And so at these math camps and programming camps, we used to play this game all the time.
01:10:16 Speaker_03
And eventually we got to the point where you learn all the combinations of 24. There's only about 1,300 total combinations that you have to know. Not all of them are possible, but for the ones that are possible, you know them.
01:10:26 Speaker_03
And so what we did next was a thing where you take six cards and you're making 163. And so I'll just deal another six cards. And so in this case, for example, what I would do is: 11 times 2 is 22, 22 minus 3 is 19, 6 plus 10 is 16, 16 times 9 is 144.
01:10:37 Speaker_03
And 144 plus 19 is 163. And so as you can imagine, it's a lot of fun. We would play this at the math camp during the team selection training for the US team. Basically in between contests, we would just be playing this game all the time and doing this.
01:10:57 Speaker_03
And eventually you get to the point where at 163, you have a lot of the patterns down and you see things. And so then the next thing that we did was we do eight cards and we make the current year. And so now it's 2024.
01:11:06 Speaker_03
And so let's just go ahead and do that. So 13 minus 12 is 1. And this is actually relatively easy. I'm just going to take 1 times 1 times 1, and we still got a 1. And 13 plus 10 is 23. 23 times 11 is 253. 253 times 8 is 2024, which times 1 is still 2024.
01:11:24 Speaker_02
Incredible.
01:11:27 Speaker_03
And eventually, you learn all the patterns with that. And so obviously, the next fun thing that we have to do is we have to do a custom number.
01:11:32 Speaker_02
How many digits? Four digits?
01:11:34 Speaker_03
Yeah, yeah.
01:11:34 Speaker_02
Let's do 7, 5, 3, 2. Seven, five, three, two.
01:11:39 Speaker_03
Okay, so it's a little bigger, so we're gonna do nine cards, okay? Okay, so 11 times 7 is 77, and 77 minus 2 is 75. We got two tens here, which is a nice hundred. And so that makes it 7,500. And then from here, 9 times 4 is 36.
01:11:53 Speaker_03
3 plus 1 is 4, and 36 minus 4 is 32. So 7,500 plus 32 is 7,532. Pretty amazing.
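The search Scott describes, repeatedly combining two of the remaining numbers with +, -, *, or / until a single value equals the target, can be sketched as a small brute-force solver. This is purely an illustrative sketch (the function names are made up, not anything from Cognition), but it captures the structure of the game:

```python
def solve(cards, target, eps=1e-6):
    """Search for an arithmetic expression over all cards equal to target.

    Repeatedly combines two values with +, -, *, or / until one value
    remains. Returns an expression string, or None if nothing works.
    """
    # Each state is a list of (value, expression-string) pairs.
    vals = [(float(c), str(c)) for c in cards]
    return _search(vals, target, eps)


def _search(vals, target, eps):
    if len(vals) == 1:
        value, expr = vals[0]
        return expr if abs(value - target) < eps else None
    for i in range(len(vals)):
        for j in range(len(vals)):
            if i == j:
                continue
            (a, ea), (b, eb) = vals[i], vals[j]
            rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
            # Ordered pairs already cover both a-b and b-a (and a/b, b/a).
            candidates = [(a + b, f"({ea}+{eb})"),
                          (a - b, f"({ea}-{eb})"),
                          (a * b, f"({ea}*{eb})")]
            if abs(b) > eps:  # avoid division by zero
                candidates.append((a / b, f"({ea}/{eb})"))
            for value, expr in candidates:
                found = _search(rest + [(value, expr)], target, eps)
                if found:
                    return found
    return None
```

For the four-card hand above, `solve([4, 7, 8, 12], 24)` finds some valid expression, such as `((12-8)*7-4)`. The search grows quickly with the card count, which is part of why doing the six-, eight-, and nine-card versions in one's head is impressive.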
01:12:01 Speaker_02
I don't know how you do it. What are you doing? What is the method behind the calculation? Or is it multiple methods?
01:12:08 Speaker_03
A lot of it is intuition, I guess, that you build over time. With 163, for example, a lot of these little things where I know, okay, 11 times 15 is 165.
01:12:17 Speaker_03
So if I can make an 11 and a 15, and then I can make a two leftover and I could do 165 minus two, and then I'm searching for that. And so based on the numbers I see, I'm solving for something like that.
01:12:28 Speaker_03
And it depends a lot on the patterns that you get and everything. It's actually, I think, a great game for kids.
01:12:33 Speaker_03
This is my favorite game as a three-year-old kid or whatever, playing 24 with the four cards, because a lot of it really pushes the number sense.
01:12:40 Speaker_03
It's not just a question of can you add two numbers, but can you really reason around that and like, you know what that means and be creative with the numbers that you have, which I think is the core skill that you really want to learn, not just the mechanical part.
01:12:51 Speaker_02
I think what you're building is so cool to me because effectively what you're doing downstream is unlocking the creativity of a lot more people than currently have the ability to build things through computer science and programming.
01:13:04 Speaker_02
And I just think that's an incredibly cool and amazing thing for the world. I hope you and others like you are successful in doing so. When I do these interviews, I ask everyone the same traditional closing question.
01:13:14 Speaker_02
What is the kindest thing that anyone's ever done for you?
01:13:18 Speaker_03
The number one thing that I come to throughout my life is honestly just all of the mentors that I've had and how much effort they truly put in. Even as a kid, as you can imagine, like training for the US team, for the international thing.
01:13:29 Speaker_03
It was funny because I never had a coach. I never had the formal, this is the person, but I did have a couple of people and one in particular who just really invested in me and believed in me. And, you know, he was the one who really just pushed.
01:13:40 Speaker_03
I was a high school kid. I had a lot of other things that I wanted to do besides just doing math all the time. And he would always push me and be like, you got to practice, you got to do this thing.
01:13:48 Speaker_03
Every time I would get something wrong, he'd be like, yeah, you can't make that mistake, here's how we're going to do these things. Honestly, everything comes from all the time that he put in.
01:13:56 Speaker_03
I feel like I'm really only the sum of all the mentorship that I've received. And I think the same is true in startups. I can think of a few people.
01:14:03 Speaker_03
And again, I can think of one in particular who's really just coached me through a lot of this and guided me through a lot of these things and been the one to give me the harsh criticism and the feedback and hit me with the things that matter and push me to do better and to do more.
01:14:16 Speaker_03
And people often think of kindness as sacrifice. And I do think actually that often a lot of the greatest kindness is about building more together and doing more together.
01:14:25 Speaker_02
Scott, thank you so much for your time. Thank you so much, man. If you enjoyed this episode, check out JoinColossus.com. There you'll find every episode of this podcast complete with transcripts, show notes, and resources to keep learning.
01:14:38 Speaker_02
You can also sign up for our newsletter, Colossus Weekly, where we condense episodes to the big ideas, quotations, and more, as well as share the best content we find on the internet every week. We hope you enjoyed the episode.
01:15:03 Speaker_02
Next, stay tuned for my conversation with Katie Ellenberg, Head of Investment Operations and Portfolio Administration at Geneva Capital Management.
01:15:10 Speaker_02
Katie gets into details about her experience with Ridgeline and how she benefits the most from their offering. To learn more about Ridgeline, make sure to click the link in the show notes.
01:15:19 Speaker_02
Katie, begin by just describing what it is that you are focused on at Geneva to make things work as well as they possibly can on the investment side.
01:15:29 Speaker_01
I am the head of investment operations and portfolio administration here at Geneva Capital. And my focus is on providing the best support for the firm, for the investment team.
01:15:40 Speaker_02
Can you just describe what Geneva does?
01:15:42 Speaker_01
We are an independent investment advisor, currently over $6 billion in assets under management. We specialize in U.S. small- and mid-cap growth stocks.
01:15:53 Speaker_02
So you've got some investors at the high end who want to buy and sell stuff, and you've got all sorts of investors whose money you've collected in different ways, and I'm sure everything in between. I'm interested in it all.
01:16:02 Speaker_02
What are the eras of how you solve this challenge of building the infrastructure for the investors?
01:16:08 Speaker_01
We were using our previous provider for over 30 years. They did very well for us. We had the entire suite of products, from the portfolio accounting to trade order management, reporting, the reconciliation features.
01:16:21 Speaker_01
After being on the same system for 30 years, I didn't think that we would ever be able to switch to anything else. So it wasn't even in my mind. Andy, our head trader, suggested that I meet with Ridgeline.
01:16:34 Speaker_01
He got a call from Nick Shea, who works with Ridgeline, and neither Andy nor I had heard of Ridgeline. And I really did it more as a favor to Andy, not because I was really interested in meeting them. We had just moved into our office.
01:16:48 Speaker_01
We didn't have any furniture because we just moved locations. And so I agreed to meet with them in the downstairs cafeteria. And I thought, OK, this will be perfect for a short meeting. Honestly, Patrick, I didn't even dress up. I was in jeans.
01:17:00 Speaker_01
I had my hair thrown up. I completely was doing this as a favor. I go downstairs in the cafeteria and I think I'm meeting with Nick and in walks two other people with him, Jack and Allie, and I'm like, Now there's three of them.
01:17:16 Speaker_01
What am I getting myself into? Really, my intention was to make it quick. And they started off right away by introducing their company, but also who they were hiring. And that caught my attention.
01:17:28 Speaker_01
They were pretty much putting in place a dream team of technical experts to develop this whole software system, bringing in people from Charles River, FactSet, and Bloomberg. And I thought, how brilliant is that, to bring in the best of the best?
01:17:42 Speaker_01
So then they started talking about this single source of data. And I was like, what in the world? I couldn't even conceptualize that because I'm so used to all of these different systems and these different modules that sit on top of each other.
01:17:55 Speaker_01
And so I wanted to hear more about that. As I was meeting with a lot of the other vendors, they always gave me this very high level sales pitch. Oh, transition to our company, it's going to be so easy, et cetera.
01:18:08 Speaker_01
Well, I knew 30 years of data was not going to be an easy transition. And so I like to give them challenging questions right away, which oftentimes, in most cases, the other vendors couldn't even answer those details.
01:18:21 Speaker_01
So I thought, OK, I'm going to try the same approach with Ridgeline. And I asked them a question about our security master file. And it was Allie right away who answered my question with such expertise.
01:18:34 Speaker_01
And she knew right away that I was talking about these dot old securities and told me how they would solve for that.
01:18:40 Speaker_01
So when I met Ridgeline, it was the first company where I walked back to my office and made a note, and I said, now this is a company to watch for.
01:18:49 Speaker_01
So we did go ahead and renewed our contract for a couple of years with our vendor. But when they merged with a larger company, we noticed a decrease in our service, and I knew that we wanted better service.
01:19:02 Speaker_01
The same time, Nick was keeping in touch with me and telling me the updates with Ridgeline. So they invited me to base camp. And I'll tell you that that is where I really made up my mind with which direction I wanted to go.
01:19:15 Speaker_01
And it was then after I left that conference where I felt that comfort in knowing that, okay, I think that these guys really could solve for something for the future.
01:19:26 Speaker_01
They were solving for all of the critical tasks that I needed, and I was completely intrigued and impressed by everything that they had to offer. My three favorite aspects: the first, obviously, is that single source of data.
01:19:38 Speaker_01
I would have to mention the AI capabilities yet to come. Client portal, that's something that we haven't had before. That's going to just further make things efficient for our quarter-end processing.
01:19:49 Speaker_01
But on the other side of it, it's the fact that we've built these relationships with the Ridgeline team. I mean, they're experts. We're no longer just a number. When we call service, they know who we are. They completely have our backs.
01:20:04 Speaker_01
I knew that they were not going to let us fail in this transition. We're able to now push further than what we've ever been able to do before. Now we can really start thinking out of the box with where can we take this? Ridgeline is the entire package.
01:20:18 Speaker_01
So when I was looking at other companies, they could only solve for part of what we had and part of what we needed. Ridgeline is the entire package. And it's more than that, in that, again, it's built for the entire firm and not just operational.
01:20:37 Speaker_01
The Ridgeline team has become family to us.