Powerful A.I. By 2026? + Uber's C.E.O. on the Robotaxi Future + Casey's TikTok Test AI transcript and summary - episode of podcast Hard Fork
Episode: Powerful A.I. By 2026? + Uber's C.E.O. on the Robotaxi Future + Casey's TikTok Test
Author: The New York Times
Duration: 01:55:24
Episode Shownotes
This week, the A.I. company Anthropic has Silicon Valley rethinking the timeline for artificial general intelligence. In addition to releasing a new safety policy, the company’s chief executive, Dario Amodei, laid out a vision of how A.I. could help cure cancer, mental illness and mitigate climate change in the near
future. We consider his most surprising claims and what this means for the acceleration of the technology. Then, the Uber chief executive, Dara Khosrowshahi, joins us in the studio to discuss his company's new partnership with Waymo, the autonomous vehicle company, and the future of that industry. And finally, leaked court documents reveal exactly how many TikTok videos you need to watch to get hooked on the app. So, Casey puts the number to the test.
Guest: Dara Khosrowshahi, chief executive of Uber
Additional Reading:
Dario Amodei's Essay "Machines of Loving Grace"
TikTok Executives Know About App's Effect on Teens, Lawsuit Documents Allege
We want to hear from you. Email us at [email protected]. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Summary
In this episode of "Hard Fork," Kevin Roose and Casey Newton explore Dario Amodei's optimistic vision of powerful A.I., arriving perhaps as soon as 2026, that could help address global challenges like cancer and climate change. The discussion centers on the responsibilities in A.I. development, the need for safety measures, and the importance of a cooperative strategy among democracies. Additionally, Uber's CEO Dara Khosrowshahi shares insights on his company's new partnership with Waymo and the implications for the robotaxi industry, emphasizing a platform strategy over in-house development. Finally, Casey conducts an experiment with TikTok to test its addictive qualities, raising concerns about user engagement and algorithmic targeting.
Full Transcript
00:00:00 Speaker_07
Casey, let's start the episode today with a little housekeeping. So we've talked on the show the last couple of weeks about the new New York Times audio subscription.
00:00:07 Speaker_07
This is how you can connect your New York Times subscription on Apple Podcasts and Spotify to get full access to every episode of Hard Fork and all of the other shows from the New York Times. This subscription is now live.
00:00:21 Speaker_07
You may be running into it if you use those platforms, and you should link your account. I did it this week. Very easy. It took me less than a minute, and now I can get all of my favorite shows and their whole back catalogs.
00:00:31 Speaker_07
You know, we talked about AI slop on the show last week and how hard it is to sort of understand what's true and what's not true out there on the internet.
00:00:40 Speaker_07
And after that show, I did actually subscribe to several different publications and I now pay for several different publications because I was like, if I'm preaching the value of good information out there, I gotta be willing to walk the walk.
00:00:53 Speaker_07
You're being the change you want to see in the world. Yes. So if you want to be the change you wish to see in the podcast world, you can subscribe to the New York Times audio subscription. Do it.
00:01:02 Speaker_07
So that is our third and final pitch for the New York Times audio subscription.
00:01:06 Speaker_07
But there's another piece of housekeeping this week, which is that after this episode is over, you will hear a segment from a different New York Times podcast that'll sort of play just at the tail end of this episode.
00:01:18 Speaker_07
It's an episode of the New York Times newest podcast, The Wirecutter Show.
00:01:22 Speaker_08
And let me tell you, this is an episode about laundry, which might not sound like it's in a traditional wheelhouse, but I'm gonna tell you something.
00:01:27 Speaker_08
I went to a dinner party recently, and a person I'd never met, we got to talking, and he said, have you heard the Wirecutter podcast episode about laundry? Because it changed my life. And so now we're giving that to you for absolutely free.
00:01:39 Speaker_07
So please buy a subscription to New York Times Audio. If we sell the most subscriptions, if we sell more than Ezra and Michael Barbaro, we do get a free concert by the Backstreet Boys.
00:01:49 Speaker_08
And I'll say it: I want it that way. Casey, you are my fire. We just got another email from somebody who said they thought I was bald. Apparently I have crazy bald energy. You have all the bald energy, Casey.
00:02:14 Speaker_07
What do you think is bald-seeming about you?
00:02:17 Speaker_08
I think, for me, they think of me as a wacky sidekick, which is a bald energy.
00:02:23 Speaker_07
Is it? I think so. I don't associate wacky with bald, because I'm thinking Jeff Bezos. I know a lot of very hardcore bald men. Oh, interesting.
00:02:33 Speaker_08
So do you think that maybe people think that I sound like a sort of titan of industry, plutocrat? I would not say that's the energy you're giving, is plutocrat energy, but... Oh, really? Because I just fired 6,000 people to show that I could.
00:02:48 Speaker_07
You did order me to come to the office today.
00:02:51 Speaker_08
I did. I said, there's a return to office in effect immediately. No questions.
00:03:01 Speaker_07
I'm Kevin Roose, a tech columnist from the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork.
00:03:05 Speaker_08
This week, are we reaching the AI endgame? A new essay from the CEO of Anthropic has Silicon Valley talking. Then Uber CEO Dara Khosrowshahi joins us to discuss his company's new partnership with Waymo and the future of autonomous vehicles.
00:03:20 Speaker_08
And finally, internal TikTok documents tell us exactly how many videos you need to watch to get hooked. And so I did. Very brave. God help me. Well, Kevin, the AI race continues to accelerate, and this week, the news is coming from Anthropic.
00:03:44 Speaker_08
Now, last year, you actually spent some time inside this company, and you called it the White Hot Center of AI Doomerism.
00:03:52 Speaker_07
Yes, well, the headline of my piece called it the White Hot Center of AI Doomerism. Just want to clarify. Oh, classic reporter.
00:03:57 Speaker_08
Blame the headline.
00:03:59 Speaker_07
Well, you know, reporters don't often write our own headlines, so I just feel the need to clarify that.
00:04:03 Speaker_08
Fair enough. But the story does talk about how many of the people you met inside this company seemed strangely pessimistic about what they were building.
00:04:11 Speaker_07
Yeah, it was a very interesting reporting experience because I got invited to spend, you know, several weeks just basically embedded at Anthropic as they were gearing up to launch an update of their chatbot, Claude.
00:04:24 Speaker_07
And I sort of expected, you know, they would go in and try to impress me with how great Claude was and talk about all the amazing things it would allow people to do.
00:04:32 Speaker_07
And then I got there and it was like, all they wanted to do was talk about how scared they were of AI and of releasing these systems into the wild.
00:04:39 Speaker_07
I compared it in the piece to like being a restaurant critic who shows up at like a buzzy new restaurant and all anyone wants to talk about is food poisoning.
00:04:47 Speaker_08
Right. And so, for this reason, I was very interested to see, over the past week, the CEO of Anthropic, Dario Amodei, write a 13,000-word essay about his vision of the future.
00:05:01 Speaker_08
And in this essay, he says that he is not an AI doomer, does not think of himself as one, but actually thinks that the future is quite bright and might be arriving very quickly.
00:05:13 Speaker_08
And then shortly after that, Kevin, the company put out a new policy, which they call a Responsible Scaling Policy, that I thought had some interesting things to say about ways to safely build AI systems.
00:05:24 Speaker_08
So, we wanted to talk about this today for a couple reasons. One is that AI CEOs have kept telling us recently that major changes are right around the corner.
00:05:36 Speaker_08
Sam Altman recently had a blog post where he said that an artificial superintelligence could be just a few thousand days away. And now here, Amodei is saying that AGI could arrive in 2026, which, check your calendar, Kevin, that is in 14 months.
00:05:52 Speaker_08
And certainly there is a case that this is just hype. But even so, there are some very wild claims in here that I do think deserve broader attention.
00:06:00 Speaker_08
The second reason that we want to talk about this today is that Anthropic is really the flip side to a story that we've been talking about for the past year here, which is what happened to OpenAI during and after Sam Altman's temporary firing as CEO.
00:06:15 Speaker_08
Anthropic was started by a group of people who left OpenAI primarily over safety concerns. And recently, several more members of OpenAI's founding team and their safety research teams have gone over to Anthropic.
00:06:27 Speaker_08
And so in a way, Kevin, Anthropic is an answer to the question of, what would have happened if OpenAI's executive team hadn't spent the past few years falling apart?
00:06:36 Speaker_08
And while they are still the underdog compared to OpenAI, is there a chance that Anthropic is the team that builds AGI first? So that's what we want to talk about today, but I want to start by just talking about this essay.
00:06:48 Speaker_08
Kevin, what did Dario Amodei have to say in his essay, Machines of Loving Grace?
00:06:54 Speaker_07
Yeah, so the first thing that struck me is he is clearly reacting to this perception, which I may have helped create through my story last year, that sort of he and Anthropic are just doomers, right?
00:07:05 Speaker_07
That they are just a company that goes around warning about how badly AI could go if we're not careful. And what he says in this essay that I thought was really interesting and important is, you know, we're going to keep talking about the risks of AI.
00:07:19 Speaker_07
This is not him saying, I don't think this stuff is risky. I've been, you know, taken out of context and I'm actually an AI optimist. What he says is it's important to have both, right? You can't just be going around warning about the doom all the time.
00:07:32 Speaker_07
You also have to have a positive vision for the future of AI because that's what, not only what inspires and motivates people, but it matters what we do.
00:07:43 Speaker_07
I thought that was actually the most important thing that he did in this essay was he basically said, look, this could go well or it could go badly. And whether it goes well or badly is up to us. This is not some inevitable force.
00:07:56 Speaker_07
You know, sometimes people in the AI industry, they have a habit of talking about AI as if it's just kind of this disembodied force that is just going to, you know, happen to us. Inevitably.
00:08:05 Speaker_07
Yes, and we either have to sort of like get on the train or get run over by the train. And what Dario says is actually different. He says, you know, this is, here's a vision for how this could go well, but it's gonna take some work to get there.
00:08:17 Speaker_08
It also made me realize that for the past couple of years, I have heard much more about how AI could go wrong than how it could go right from the AI CEOs, right?
00:08:27 Speaker_08
As much as these guys get knocked for endlessly hyping up their products, they also have, I think to their credit, spent a lot of time trying to explain to people that this stuff is risky. And so there was something
00:08:38 Speaker_08
almost counterintuitive about Dario coming out and saying, wait, let's get really specific about how this could go well.
00:08:44 Speaker_07
Totally. So I think the first thing that's worth pulling out from this essay is the timelines, right? Because as you said, Dario Amodei is claiming that powerful AI, which is sort of his term. He doesn't like AGI. He thinks it sounds too sci-fi.
00:08:56 Speaker_07
But powerful AI, which he sort of defines as like an AI that would be smarter than a Nobel Prize winner in basically any field and that it could basically control tools, go do a bunch of tasks simultaneously.
00:09:10 Speaker_07
He calls this sort of a country of geniuses in a data center. That's sort of his definition of powerful AI. And he thinks that it could arrive as soon as 2026.
00:09:19 Speaker_07
I think there's a tendency sometimes to be cynical about people with short timelines like these, like, oh, these guys are just saying this stuff is going to arrive so soon because they need to raise a bunch of money for their AI companies.
00:09:31 Speaker_07
And, you know, maybe that is a factor. But I truly believe that at least Dario Amodei is sincere and serious about this. This is not a drill to him.
00:09:42 Speaker_07
And Anthropic is actually making plans, scaling teams, and building products as if we are headed into a radically different world very soon, like within the next presidential term.
00:09:52 Speaker_08
Yeah. And look, Anthropic is raising money right now. And that does give Dario motivation to get out there in the market and start talking about curing cancer and all these amazing things that he thinks AI can do.
00:10:03 Speaker_08
At the same time, you know, I think that we're in a world where the discourse has been a little bit poisoned by folks like Elon Musk, who are
00:10:11 Speaker_08
constantly going out into public, making bold claims about things that they say are going to happen, you know, within six months or a year, and then truly just never happen.
00:10:20 Speaker_08
And our understanding of Dario, based on our own conversations with him and of people who work with him, is like, he is not that kind of person. This is not somebody who lets his mouth run away with him.
00:10:30 Speaker_08
When he says that he thinks this stuff could start to arrive in 14 months, I actually do give it some credibility.
00:10:36 Speaker_07
Yeah, and you know, you can argue with the timescales and plenty of smart people disagree about this, but I think it's worth taking this seriously because this is the head of one of the leading AI labs sort of giving you his thoughts on not just what AI is going to change about the world, but when that's going to happen.
00:10:53 Speaker_07
And what I liked about this essay was that it wasn't trying to sell me a vision of a glorious AI future, right? Dario says, you know, all or some or none of this might come to pass, but it was basically a thought experiment.
00:11:05 Speaker_07
He has this idea in the essay about what he calls the compressed 21st century. He basically says, what if all AI does is allow us to make 100 years' worth of progress in the next 10 years in fields like biology? What would that change about the world?
00:11:25 Speaker_08
And I thought that was a really interesting way to frame it. Give us some examples, Kevin, of what Dario says might happen in this compressed 21st century.
00:11:32 Speaker_07
So what he says in this essay is that if we do get what he calls powerful AI relatively soon, that in the sort of decade that follows that,
00:11:41 Speaker_07
We would expect things like the prevention and treatment of basically all natural infectious disease, the elimination of most types of cancer, very good embryo screening for genetic diseases that would make it so that more people didn't die of these hereditary things.
00:12:00 Speaker_07
He talks about there being improved treatment for mental health and other ailments.
00:12:04 Speaker_08
Yeah, I mean, and a lot of this comes down to just understanding the human brain, which is an area where we still have a lot to learn.
00:12:11 Speaker_08
And the idea is, if you have what he calls this country of geniuses that's just operating on a server somewhere, and they are able to talk to each other, to dream, to suggest ideas, to give guidance to human scientists in labs, to run experiments, then you have this massive compression effect, and all of a sudden you get all of these benefits really soon.
00:12:30 Speaker_08
You know, obviously, the headline-grabbing stuff is like, you know, Dario thinks we're gonna cure all cancer and we're gonna cure Alzheimer's disease.
00:12:36 Speaker_08
Obviously, those are huge, but there's also kind of the more mundane stuff, like, do you struggle with anxiety? Do you have other mental health issues? Like, are you, like, mildly depressed?
00:12:48 Speaker_08
It's possible that we will understand the neural circuitry there and be able to develop treatments that would just lead to a general rise in happiness, and that really struck me.
00:12:59 Speaker_07
Yeah, and it sounds, when you just describe it that way, it sounds sort of utopian and crazy. But what he points out, and what I actually find compelling, is like most scientific progress does not happen in a straight line, right?
00:13:12 Speaker_07
You have these kind of moments where there's a breakthrough that enables a bunch of other breakthroughs.
00:13:17 Speaker_07
And we've seen stuff like this already happen with AI, like with AlphaFold, which won the freaking Nobel Prize this year in chemistry, where you don't just have a cure for one specific disease, but you have a way of potentially discovering cures for many kinds of diseases all at once.
00:13:34 Speaker_08
There's a part in the essay that I really liked where he points out that CRISPR was something that we could have invented long before we actually did, but essentially, no one had noticed the things they needed to notice in order to make it a reality.
00:13:48 Speaker_08
And he posits that there are probably hundreds of other things like this right now that just no one has noticed yet.
00:13:55 Speaker_08
And if you had a bunch of AI agents working together in a room and they were sufficiently intelligent, they would just notice those things and we'd be off to the races.
00:14:03 Speaker_07
Right, and what I liked about this section of the essay was that it didn't try to claim that there was some completely novel thing that would be required to result in the changed world that he envisions, right?
00:14:17 Speaker_07
All that would need to happen for society to look radically different 10 or 15 years from now, in Dario's mind, is for that sort of base rate of discovery to accelerate rapidly due to AI.
00:14:31 Speaker_08
Yeah. Now, let's take a moment to acknowledge folks in the audience who might be saying, oh my gosh, will these guys stop it with the AI hype? They're accepting every premise that these AI CEOs will just shovel it.
00:14:44 Speaker_08
They can't get enough, and it's irresponsible. These are just stochastic parrots, Kevin. They don't know anything. It's not intelligence, and it's never going to get any better than it is today.
00:14:54 Speaker_08
And I just want to say I hear you and I see you and our email address is Ezra Klein Show at Playtime.com. But here's the thing. You can look at the state of the art right now.
00:15:05 Speaker_08
And if you just extrapolate what is happening in 2024 and you assume some rate of progress beyond where we currently are.
00:15:14 Speaker_08
It seems likely to me that we do get into a world where you do have these sort of simulated PhD students or maybe simulated super geniuses, and they are able to realize a lot of these kinds of things. Now, maybe it doesn't happen in 5, 10 years.
00:15:27 Speaker_08
Maybe it takes a lot longer than that. But I just wanted to underline, like, we are not truly living in the realm of fantasy. We are just trying to get a few years and a few levels of advancement beyond where we are right now.
00:15:39 Speaker_07
Yeah, and Dario does, in his essay, make some caveats about things that might constrain the rate of progress in AI, like regulation or clinical trials taking a long time. He also talks about the fact that some people may just
00:15:55 Speaker_07
opt out of this whole thing. Like they just may not want anything to do with AI. There might be some political or cultural backlash that sort of slows down the rate of progress.
00:16:05 Speaker_07
And he says, you know, like that could actually constrain this and we need to think about some ways to address that.
00:16:11 Speaker_08
Yeah. So that is sort of the suite of things that Dario thinks will benefit our lives. You know, there's a bunch more in there, you know, he thinks it will help with climate change, other issues.
00:16:23 Speaker_08
But the essay has five parts, and there was another part of the essay that really caught my attention, Kevin.
00:16:29 Speaker_08
And it is a part that looks a little bit more seriously at the risks of this stuff, because any super genius that was sufficiently intelligent to cure cancer could otherwise wreak havoc in the world.
00:16:40 Speaker_08
So what is his idea for ensuring that AI always remains in good hands?
00:16:45 Speaker_07
So he admits that he's not like a geopolitics expert. This is not his forte. Unlike the two of us. Right. And there have been, look, a lot of people theorizing about what the politics of advanced AI are going to look like.
00:17:00 Speaker_07
Dario says that his best guess currently about how to prevent AI from sort of empowering autocrats and dictators is through what he calls an entente strategy. Basically, you want a bunch of democracies to kind of come together to
00:17:13 Speaker_07
secure their supply chain to sort of block adversaries from getting access to things like GPUs and semiconductors, and that you could basically bring countries into this democratic alliance and sort of ice out the more authoritarian regimes from getting access to this stuff.
00:17:32 Speaker_07
But I think, you know, this was sort of not the most fleshed out part of the argument.
00:17:36 Speaker_08
Yeah, well, and I appreciate that he is at least making an effort to come up with ideas for how would you prevent AI from being misused.
00:17:46 Speaker_08
But as I was reading the discussion around the blog post, I found this interesting response from a guy named Max Tegmark.
00:17:54 Speaker_08
Max is a professor at MIT who studies machine learning, and he's also the president of something called the Future of Life Institute, which is a sort of nonprofit focused on AI safety.
00:18:05 Speaker_08
And he really doesn't like this idea of what Dario calls the entente, the group of these democracies working together. And he says he doesn't like it because it essentially sets up and accelerates a race.
00:18:18 Speaker_08
It says to the world that essentially whoever invents super powerful AI first will win forever, right?
00:18:25 Speaker_08
Because in this view, AI is essentially the final technology that you ever need to invent because after that it'll just, you know, invent anything else it needs. And he calls that a suicide race.
00:18:36 Speaker_08
And the reason is this, and he has a great quote, horny couples know that it is easier to make a human level intelligence than to raise and align it. And it is also easier to make an AGI than to figure out how to align or control it.
00:18:49 Speaker_07
Wow, I never thought about it like that. Yeah, you probably never thought I would say horny couple on the show, but I just did.
00:18:56 Speaker_08
So Kevin, what do you make of this sort of feedback?
00:18:59 Speaker_08
Is there a risk there that this effectively serves as a starter pistol that leads maybe our adversaries to start investing more in AI and sort of racing against us and triggering some sort of doom spiral?
00:19:13 Speaker_07
Yeah, I mean, look, I don't have a problem with China racing us to cure cancer using AI, right? If they get there first, more power to them.
00:19:21 Speaker_07
But I think the more serious risk is that they start building the kind of AI that serves Chinese interests, right? That it becomes a tool for surveillance and control of people rather than some of these more sort of democratic ideals.
00:19:34 Speaker_07
And this is actually something that I asked Dario about back last year when I was spending all that time at Anthropic because this is the most common criticism of Anthropic is like, well, if you're so worried about AI and all the risks that it could pose, like, why are you building it?
00:19:49 Speaker_07
And I asked him about this and his response was he basically said, look, there's this problem of in AI research of kind of intertwining, right?
00:19:57 Speaker_07
Of the same technology that sort of advances the state of the art in AI also allows you to advance the state of the art in AI safety, right?
00:20:05 Speaker_07
The same tools that make the language models more capable also make it possible to control the behavior of the language models. And so these things kind of go hand in hand.
00:20:17 Speaker_07
And if you want to compete on the frontier of AI safety, you also have to compete on the frontier of AI capabilities.
00:20:23 Speaker_08
Yeah, and I think it's an idea worth considering.
00:20:26 Speaker_08
To me, it just sounds like, wow, you are really standing on a knife's edge there if you're saying in order to have any influence over the future, we have to be right at the frontier and maybe even gently advance the frontier and yet somehow not accidentally trigger a race where all of a sudden everything gets out of control.
00:20:46 Speaker_08
But I do accept and respect that that is Dario's viewpoint.
00:20:49 Speaker_07
But isn't that kind of what we observed from the last couple of years of AI progress, right? Like OpenAI, it got out there with ChatGPT before any of the other labs had released anything similar.
00:21:01 Speaker_07
And ChatGPT kind of set the tone for all of the products that followed it. And so I think the argument from Anthropic would be like, Yes, we could sort of be way behind the state-of-the-art.
00:21:13 Speaker_07
That would probably make us safer than someone who was actually advancing the state-of-the-art. But then we missed the chance to kind of set the terms of what future AI products from other companies will look like.
00:21:22 Speaker_07
So it's sort of like using a soft power in an effort to influence others.
00:21:26 Speaker_07
Yeah, and the way they put this to me last year was that they wanted, instead of there to be just a race for raw capabilities of AI systems, they wanted there to be a safety race, right, where companies would start competing about whose models were the safest rather than whose models could, you know, do your math homework better.
00:21:43 Speaker_08
So let's talk about the safety race and the other thing that Anthropic did this week to lay out a future vision for AI. And that was with something that has, I'll say it, kind of a boring name, the Responsible Scaling Policy.
00:21:56 Speaker_08
I understand this, you know, this maybe wasn't going to come up over drinks, you know, at the club this weekend. Yeah, but I think this is something that people should pay attention to because it's an example of what you just said, Kevin.
00:22:08 Speaker_08
It is Anthropic trying to use some soft power in the world to say, hey, if we went a little bit more like this, we might be safer.
00:22:15 Speaker_07
All right. So talk about what's in the responsible scaling policy that Anthropic released this week.
00:22:19 Speaker_08
Well, let's talk about what it is. The basic idea is just that as large language models gain new abilities, they should be subjected to more scrutiny, and they should have more safeguards added to them.
00:22:31 Speaker_08
They put this out a year ago, and it was actually a huge success in this sense, Kevin. OpenAI went on to release its own version of it, and then Google DeepMind released a similar scaling policy as well this spring.
00:22:48 Speaker_08
So now Anthropic is coming back just over a year later, and they say, we're going to make some refinements.
00:22:54 Speaker_08
And the most important thing that they say is, essentially, we have identified two capabilities that we think would be particularly dangerous. And so if anything that we make displays these capabilities, we are going to add a bunch of new safeguards.
00:23:10 Speaker_08
The first one of those is if a model can do its own AI research and development. That is going to start ringing a lot of alarm bells, and they're going to put many more safeguards on it.
00:23:23 Speaker_08
And second, if one of these models can meaningfully assist someone who has a basic technical background in creating a chemical, biological, radiological, or nuclear weapon, then they would add these new safeguards. What are these safeguards?
00:23:38 Speaker_08
Well, they have a super long blog post about it. You can look it up.
00:23:41 Speaker_08
But it includes basic things like taking extra steps to make sure that a foreign adversary can't steal the model weights, for example, or otherwise hack into the systems and run away with it.
00:23:51 Speaker_07
Right. And this is some of it is similar to things that were proposed by the Biden White House in its executive order on AI last year. This is also these are some of the steps that came up in SB 1047, the AI
00:24:05 Speaker_07
regulation that was vetoed by Governor Newsom in California recently. So these are ideas that have been floating out there in the sort of AI safety world for a while.
00:24:14 Speaker_07
But Anthropic is basically saying we are going to proactively commit to doing this stuff even before a government requires us to.
00:24:21 Speaker_08
There's a second thing I like about this, and it relates to SB 1047, which we talked about on the show. Something that a lot of folks in Silicon Valley didn't like about it was the way that it tried to identify danger.
00:24:34 Speaker_08
And it was not because of a specific harm that a model could cause. It was by saying, well, if a model costs a certain amount of money to train, right? Or if it is trained with a certain amount of compute.
00:24:47 Speaker_08
Those were the proxies that the government was trying to use to understand why this would be dangerous. And a lot of folks in Silicon Valley said, we hate that because that has nothing to do with whether these things could cause harm or not.
00:24:58 Speaker_08
So what Anthropic is doing here is saying, well, why don't we try to regulate based on the anticipated harm?
00:25:04 Speaker_08
Obviously, it would be bad if you could log on to Claude, Anthropic's rival to ChatGPT, and say, hey, help me build a radiological weapon, which is something that I might type into Claude, because I don't even know the difference between a radiological weapon and a nuclear weapon, do you?
00:25:17 Speaker_08
I hope you never learn. I hope I don't either, because sometimes I have bad days, Kevin, and I get to scheming.
00:25:25 Speaker_08
So for this reason, I think that governments, regulators around the world might want to look at this approach and say, hey, instead of trying to regulate this based on how much money AI labs are spending or like how much compute is involved, why don't we look at the harms we're trying to address and say, hey, if you build something that could cause this kind of harm, you have to do X, Y, and Z.
00:25:42 Speaker_07
Yeah, that makes sense to me. So I think the biggest impact that both the sort of essay that Dario wrote and this responsible scaling policy had on me was not about any of the actual specifics of the idea.
00:25:53 Speaker_07
It was purely about the timescales and the urgency. It is one thing to hear a bunch of people telling you that AI is coming and that it's going to be more powerful than you can imagine, sooner than you can imagine.
00:26:06 Speaker_07
But if you actually start to internalize that and plan for it, it just feels very different. If we are going to get powerful AI sometime in the next, let's call it two to ten years, you just start making different choices.
00:26:23 Speaker_08
Yeah, I think it becomes part of the calculus. I can imagine it affecting what you might want to study in college if you are going to school right now.
00:26:32 Speaker_08
I have friends who are, you know, thinking about leaving their jobs because they think the place where they're working right now will not be able to compete in a world where AI is very widespread.
00:26:44 Speaker_08
So, yes, you're absolutely starting to see it creep into the calculus. I don't know kind of what else it could do. There's no real call to action here, because you can't really do very much until this world begins to arrive.
00:27:00 Speaker_08
But I do think psychologically, we want people to at least imagine, as you say, what it would be like to live in this world, because I have been surprised at how little discussion this has been getting.
00:27:13 Speaker_07
Yeah, I totally agree. I mean, to me, it feels like we are entering, I wouldn't call it like an AI endgame because I think we're closer to the start than the end of this transformation. But it does feel like something is happening.
00:27:29 Speaker_07
I'm starting to notice AI's effects in my life more. I'm starting to feel more dependent on it. And I'm also like, I'm kind of having an existential crisis. Really? Not a full-blown one, but typically, I'm a guy who likes to plan. I like to strategize.
00:27:44 Speaker_07
I like to have a five-year and a ten-year plan. And I've just found that my own certainty about the future and my ability to plan long-term is just way lower than it has been for any time that I can remember.
00:27:57 Speaker_08
That's interesting. I mean, for myself, I feel like that has always been true. You know, in 1990, I did not know what things were going to look like in 2040, and I would be really surprised by a lot of things that have happened along the way.
00:28:08 Speaker_08
But yeah, there's a lot of uncertainty out there.
00:28:10 Speaker_07
It's scary, but I also like...
00:28:14 Speaker_08
Do you not feel a little bit excited about it? Of course! Look, I love software, I love tools, I wanna live in the future, and it's already happening to me. There is a lot of that uncertainty, and that stuff freaks me out.
00:28:29 Speaker_08
But if we could cure cancer, if we could cure depression, if we could cure anxiety, you'd be talking about the greatest advancement to human well-being, certainly in decades, maybe that we've ever seen.
00:28:40 Speaker_07
Yeah. I mean... I have some priors on this because like my dad died of a very rare form of cancer that was, it was like a sub 1% type of cancer.
00:28:55 Speaker_07
And when he got sick, it was like, you know, I read all the clinical trials and it was just like, there hadn't been enough people thinking about this specific type of cancer.
00:29:05 Speaker_07
and how to cure it because it was not breast cancer, it was not lung cancer, it was not something that millions of Americans get. And so there just wasn't the kind of brain power devoted to trying to solve this.
00:29:16 Speaker_07
Now, subsequently, it hasn't been solved, but there are now treatments in the pipeline that didn't exist when he was sick. I just constantly am wondering if he had gotten sick now instead of when he did, maybe he would have lived.
00:29:34 Speaker_07
And I think that is one of the things that makes me really optimistic about AI is just
00:29:41 Speaker_07
Maybe we just do have the brainpower or we will soon have the brainpower to devote, you know, world-class research teams to these things that might not affect millions of people, but that do affect some number of people.
00:29:54 Speaker_08
Absolutely.
00:29:55 Speaker_07
I just, I don't know. It really, I got kind of emotional.
00:29:59 Speaker_07
reading this essay, because it was just like, you know, obviously it's, you know, I'm not someone who believes all the hype, but I'm like, I assign some non-zero probability to the possibility that he's right, that all this stuff could happen.
00:30:11 Speaker_07
And I just find that so much, more interesting and fun to think about than like a world where everything goes off the rails.
00:30:20 Speaker_08
Well, it's just the first time that we've had a truly positive, transformative vision for the world coming out of Silicon Valley in a really long time.
00:30:30 Speaker_08
In fact, this vision, it's more positive and optimistic than anything that has been like in the presidential campaign.
00:30:37 Speaker_08
You know, it's like when the presidential candidates talk about the future of this country, it's like, well, you know, we'll give you this tax break, right? Or we'll make this other policy change.
00:30:46 Speaker_08
Nobody's talking about how they're going to freaking cure cancer. Right? So I think, of course, we're drawn to this kind of discussion because it feels like, you know, there are some people in the world who are taking really, really big swings.
00:30:58 Speaker_08
And if they connect, then we're all going to benefit.
00:31:00 Speaker_07
Yeah. Yeah.
00:31:08 Speaker_08
When we come back, why Uber has way more autonomous vehicles on the road than it used to.
00:31:40 Speaker_07
Well, Casey, one of the biggest developments over the past few months in tech is that self-driving cars now are actually working. Yeah, but this is no longer in the realm of sci-fi. Yes.
00:31:50 Speaker_07
So we've talked, obviously, about the self-driving cars that you can get in San Francisco now. It used to be two companies, Waymo and Cruise. Now it's just Waymo.
00:31:58 Speaker_07
And there have also been a bunch of different autonomous vehicle updates from other companies that are involved in the space. And the one that I found most interesting recently was about Uber.
00:32:08 Speaker_07
Now, as you will remember, Uber used to try to build its own robotaxis. They gave that up back in 2020.
00:32:16 Speaker_07
That was the year they sort of sold off their autonomous driving division to a startup called Aurora after losing just an absolute ton of money on it. But now they are back in the game and they just recently announced a multi-year partnership with
00:32:31 Speaker_07
Cruise, the self-driving car company. They also announced an expanded partnership with Waymo, which is going to allow Uber riders to get AVs in Austin, Texas, and Atlanta, Georgia.
00:32:43 Speaker_07
They've been operating this service in Phoenix since last year, and that's going to keep expanding. They also announced that self-driving Ubers will be available in Abu Dhabi through a partnership with the Chinese AV company WeRide.
00:32:56 Speaker_07
And they've also made a long-term investment in Wayve, which is a London-based autonomous driving company.
00:33:03 Speaker_07
So they are investing really heavily in this, and they're doing it in a different way than they did back when they were trying to build their own self-driving cars.
00:33:10 Speaker_07
Now they are essentially saying, we want to partner with every company that we can that is making self-driving cars.
00:33:15 Speaker_08
Yeah, so this is a company that many people take several times a week, Uber. And yet I feel like it sometimes is a bit taken for granted.
00:33:25 Speaker_08
And while we might just focus on the cars you can get today, they are thinking very long term about what transportation is going to look like in five or 10 years.
00:33:33 Speaker_08
And increasingly for them, it seems like autonomous vehicles are a big part of that answer.
00:33:37 Speaker_07
Yeah, and what I found really interesting, so Tesla had this robo-taxi event last week where Elon Musk talked about how you'll soon be able to hail a self-driving Tesla.
00:33:46 Speaker_07
And what I found really interesting is that Tesla's share price plummeted after that event, but Uber's stock price rose to an all-time high.
00:33:55 Speaker_07
So clearly people think that, or at least some investors think that Uber's approach is better here than Tesla's. It's the sort of thing, Kevin, that makes me want to talk to the CEO of Uber. And lucky for you, he's here. Oh, thank goodness.
00:34:06 Speaker_07
So today we're going to talk with Uber CEO, Dara Khosrowshahi. He took over at Uber in 2017 after a bunch of scandals led the founder of Uber, Travis Kalanick, to step down. He has made the company profitable for the first time in its history.
00:34:21 Speaker_07
And I think a lot of people think he's been doing a pretty good job over there. And he is leading this charge into autonomous vehicles.
00:34:29 Speaker_07
And I'm really curious to hear what he makes, not just of Uber's partnership with Waymo, but of sort of the whole self-driving car landscape.
00:34:36 Speaker_08
Let's bring him in.
00:34:37 Speaker_07
Let's do it. Dara Khosrowshahi, welcome to Hard Fork. Thank you for having me. So you were previously on the board of the New York Times Company until 2017 when you stepped down right after taking over at Uber.
00:34:56 Speaker_07
I assume you still have some pull with our bosses, though, because of your years of service. So can you get them to build us a nicer studio? I didn't have pull when I was on the board, and I certainly have zero pull now.
00:35:07 Speaker_01
I've got negative pull, I think. They're taking revenge on me.
00:35:12 Speaker_07
Well, since you left the board, they're making all kinds of crazy decisions, like letting us start a podcast.
00:35:17 Speaker_06
Yeah. Oh my God. Yeah.
00:35:18 Speaker_07
But all right. So we are going to talk today about your new partnership with Waymo and the sort of autonomous driving future.
00:35:27 Speaker_07
I would love to hear the story of how this came together, because I think for people who've been following this space for a number of years, this was surprising. Uber and Waymo have not historically had a great relationship. The two companies were
00:35:38 Speaker_01
It was a little rocky at first, yes.
00:35:39 Speaker_07
Embroiled in litigation and lawsuits and trade secret theft and things like that. It was a big deal. And so how did they approach you? Did you approach them? How did this partnership come together?
00:35:51 Speaker_01
I guess it's time healing, right? When I came on board, we thought that we wanted to establish a better relationship with Google generally, and Waymo more specifically, even though we were working on our own self-driving technology.
00:36:04 Speaker_01
It was always within the context of: we were developing our own, but we wanted to work with third parties as well. One of the disadvantages of developing our own technology
00:36:14 Speaker_01
was that some of the other players, the Waymos of the world, et cetera, heard us but didn't necessarily believe us. It's difficult to work with players that you compete with. So one of the first decisions that we made was, we can't be in between here.
00:36:30 Speaker_01
Either you have to go vertical, or you have to go platform strategy. You can't achieve both, and we had to make a bet.
00:36:39 Speaker_08
We either have to do our own thing, or we have to do it with partners.
00:36:41 Speaker_01
Yeah, absolutely. And so that strategic kind of fork became quite apparent to me. And then the second was just what are we good at? Listen, we, I'll be blunt, we sucked at hardware, right? We tried to apply software principles to hardware.
00:36:58 Speaker_01
It doesn't work. Hardware is a different pace, different demand in terms of perfection, et cetera. And ultimately, that fork, do we go vertical? And there are very few companies that can do software and hardware.
00:37:11 Speaker_01
Well, Apple and Tesla are arguably among the few in the world. And we decided to make a bet on the platform. And so once we made that bet, we went out and identified who the leaders were. Waymo was a clear leader.
00:37:25 Speaker_01
First, we had to make peace with them and settle in court, et cetera. We got Google to be a bigger shareholder. And then over a period of time, we built relationships. And, you know, I do think there's a synergy between the two.
00:37:38 Speaker_01
So it just makes sense, the relationship. And we're very, very excited to, on a forward basis, expand it pretty significantly.
00:37:46 Speaker_08
So this was, I feel like, maybe your most consequential decision to date as the CEO of this company.
00:37:53 Speaker_08
If you believe that AVs are going to become the norm for many people hailing a ride in 10 or 15 years, it's conceivable that they might open up the Waymo app, right? And not the Uber app. Waymo has an app to order cars. I use it fairly regularly, right?
00:38:07 Speaker_08
So what gave you the confidence that in that world it will still be Uber that is the app that people are turning to and not Waymo or whatever other apps might have arisen for other AV companies?
00:38:18 Speaker_01
I think first is that it's not a binary outcome, OK? I think that a Waymo app and an Uber app can coexist. We saw it in my old job in the travel business, right? I ran Expedia.
00:38:30 Speaker_01
And there's this dramatic, is Expedia going to put the hotel chains out of business? Are the hotel chains going to put Expedia out of business? The fact is both thrived.
00:38:39 Speaker_01
And there's a set of customers who books through Expedia, there's a set of customers who books Hotel Direct, and both businesses have grown and interactivity in general has grown. Same thing if you look at food, right? McDonald's has its own app.
00:38:52 Speaker_01
It's a really good app. It has a loyalty program. Starbucks has its own app, has a loyalty program, yet both are engaging with us through the Uber Eats marketplace. So my conclusion was that there isn't an either-or.
00:39:05 Speaker_01
I do believe there will be other companies. There'll be Cruises and there'll be WeRides and Wayves, et cetera. There'll be other companies and self-driving choices.
00:39:13 Speaker_01
And the person who wants utility, speed, ease, familiarity will choose Uber and both can coexist and both can thrive. And both are really going to grow because autonomous will be the future eventually.
00:39:24 Speaker_07
So tell us more about the partnership with Waymo that is going to take place in Austin and Atlanta. Who is actually paying for the maintenance of the cars? Does Uber have to sort of make sure that there's no trash left behind in the cars?
00:39:41 Speaker_07
What is Uber actually doing in addition to just making these rides available through the app?
00:39:45 Speaker_01
Sure. So I don't want to talk about the economics because they're confidential.
00:39:49 Speaker_01
in terms of the deal. But in those two cities, Waymo will be available exclusively through the Uber app, and we will also be running the fleet operations as well. So depots, recharging, cleaning, if something gets lost, making sure that it gets back to its owner, et cetera.
00:40:08 Speaker_01
And Waymo will provide the software driver, will obviously provide the hardware, repair the hardware, etc. And then we will be doing the upkeep and operating the networks, so to speak.
00:40:18 Speaker_07
And for riders, if you want to get in a Waymo in one of those cities through Uber, is there an option to specifically request a self-driving Waymo?
00:40:26 Speaker_07
Or is it just kind of chance, like if the car that's closest to you happens to be a Waymo, that's the one you get?
00:40:32 Speaker_01
Right now, the experience, for example, in Phoenix, is that it's by chance. You get one by chance, and you can say, yes, I'll do it or not. And I think that's what we're going to start with.
00:40:41 Speaker_01
But there may be some people who only want Waymos, and there are some people who may not want Waymos. And we'll solve for that over a period of time.
00:40:47 Speaker_01
It could be personalizing preferences, or it could be what you're talking about, which is, I only want a Waymo.
00:40:53 Speaker_07
Do the passengers get rated by the self-driving car the way that they would in a human-driven Uber?
00:40:58 Speaker_01
Not yet, but that's not a bad idea.
00:41:01 Speaker_07
What about tipping? If I get out of a self-driven Uber, is there an option to tip the car if it did a good job?
00:41:06 Speaker_01
I'm sure we could build that. Why not? I don't know. I do wonder if people are going to tip machines. I don't think it's likely, but you never know.
00:41:15 Speaker_08
It sounds crazy, but at some point someone is going to start asking because they're going to realize it's just free margin. You know, it's like even if only 100 customers do it in a whole year, I don't know. You know, it's just free money.
00:41:24 Speaker_01
I mean, the good news is tipping 100% of tips go to drivers now, and we definitely want to keep that. So we like the tipping habit. But whether people tip machines is TBD.
00:41:33 Speaker_07
Yeah. And how big are these fleets? I think I read somewhere recently that Waymo has about 700 self-driving cars operating nationwide. How many AVs are we talking about in these cities?
00:41:43 Speaker_01
We're starting in the hundreds and then we'll expand from there.
00:41:48 Speaker_07
I know you don't want to discuss the economics, even though I would love to learn what the split is there.
00:41:52 Speaker_01
I'm not going to tell you.
00:41:54 Speaker_07
But you did recently talk about the margins on autonomous rides being lower than the margins on regular Uber rides for at least a few more years. That's not intuitive to me because in an autonomous ride, you don't have to pay the driver.
00:42:09 Speaker_07
So you would think the margin would be way higher for Uber. But why would you make less money if you don't have to pay a driver?
00:42:15 Speaker_01
So generally, our design spec in terms of how we build businesses is any newer business, we're going to operate at a lower margin while we're growing that business. You don't want it to be profitable day one.
00:42:25 Speaker_01
And that's my attitude with autonomous, which is, again, get it out there, introduce it to as many people as possible. At a maturity level, generally, if you look at our take rate around the world, it's about 20 percent. We get 20 percent.
00:42:38 Speaker_01
The driver gets 80 percent. We think that's a model that makes sense for any autonomous partner going forward. And that's what we expect. I kind of don't care, honestly, what the margins are for the next five years.
00:42:49 Speaker_01
The question is, can I get lots of supply? Can it be absolutely safe? And, you know, does that 20-80 split look reasonable going forward? And I think it does. Yeah.
00:43:00 Speaker_07
I want to ask about Tesla. You mentioned them a little earlier. They held an event recently where they unveiled their plans for a robo-taxi service. Do you consider Tesla a competitor?
00:43:14 Speaker_01
Well, they certainly could be right if they develop their own AV vehicle and they decide to go direct only through the Tesla app, they would be a competitor. And if they decide to work with us, then we would be a partner as well.
00:43:31 Speaker_01
And to some extent, again, both can be true. So I don't think it's going to be an either or. I think Elon's vision is pretty compelling, especially like you might have these cyber shepherds or these owners of these fleets, et cetera.
00:43:45 Speaker_01
Those owners, if they want to have maximum earnings on those fleets, will want to put those fleets on Uber. But at this point, it's unknown what his intentions are.
00:43:56 Speaker_08
There's this big debate that's playing out right now about who has the better AV strategy between Waymo and Tesla in the sense that the Waymos have many, many sensors on them. The vehicles are much more expensive to produce.
00:44:11 Speaker_08
Tesla is trying to get to full autonomy using only its cameras and software.
00:44:18 Speaker_08
And Andrej Karpathy, the AI researcher, recently said that Tesla was going to be in a better position in the long run because it ultimately just had a software problem, whereas Waymo has a hardware problem, and those are typically harder to solve.
00:44:30 Speaker_08
I'm curious if you have a view on this, whether you think one company is likelier to get to a better scale based on the approach that they're taking with their hardware and software.
00:44:41 Speaker_01
I mean, I think that hardware costs scale down over a period of time. So sure, Waymo has a hardware problem, but they can solve it. I mean, the history of compute and hardware is like the costs come down very, very significantly.
00:44:57 Speaker_01
The Waymo solution is working right now. So it's not theory, right? And I think the differences are bigger, which is Waymo has more sensors, has cameras, has LIDAR. So there's a certain redundancy there. Waymo generally has more compute, so to speak.
00:45:12 Speaker_01
So the inference of that compute is going to be better, right? And Waymo also has high-definition maps that essentially make the problem of recognizing what's happening in the real world a much simpler problem.
00:45:28 Speaker_01
So under Elon's model, the weight that the software has to carry is very, very heavy versus the Waymo and most other player model where
00:45:38 Speaker_01
you don't have to lean as much on training, and you make the problem much simpler as a compute problem to understand. I think eventually both will get there.
00:45:48 Speaker_07
But if you had to guess, who's going to get to sort of a viable scale first?
00:45:52 Speaker_01
Listen, I think Elon eventually will get to a viable scale. But for the next five years, I bet on Waymo. And we are betting on Waymo.
00:46:00 Speaker_08
I'll say this, I don't want to get into an autonomous Tesla in the next five years. Somebody else can test that out. I'm not going to be an early adopter of that one.
00:46:07 Speaker_01
FSD is getting pretty good.
00:46:08 Speaker_08
Have you used it recently? I have not used it recently.
00:46:11 Speaker_01
It's really good. For example, the cost of a solid-state lidar now is $500, $600. So why wouldn't you put that into your sensor stack? It's not that expensive. And for a fully self-driving, specialized auto, I think that makes a lot of sense to me.
00:46:30 Speaker_01
Now, Elon has accomplished the unimaginable many, many, many times. So I wouldn't bet against them.
00:46:37 Speaker_08
Yeah, I don't know. This is my secret dream. Obviously, you should stay at Uber as long as you want.
00:46:42 Speaker_08
When you're done with that, I actually do think you should run Tesla because I think you would be just as you've done Uber, you'd be willing to make some of the sort of easy compromises, like just put a $500 fricking LIDAR on the thing and we'd go much faster.
00:46:54 Speaker_01
So I have a full time job and I'm very happy with it. Thank you.
00:46:57 Speaker_08
Well, the Tesla board is listening.
00:47:00 Speaker_00
I don't know if the Tesla board listens to you, too. Good point. That's true. I made too many Kennedy jokes. We're opening up the board meeting with an episode of Hard Fork, everybody.
00:47:11 Speaker_07
They can learn a lot from this show. What's your best guess for when, say, 50% of Uber rides in the U.S. will be autonomous?
00:47:19 Speaker_01
I'd say close to eight to 10 years is my best guess, but I am sure that'll be wrong. Probably closer to 10. Most people have overestimated. You know, again, it's a wild guess. The probability of your being right is just as good as mine.
00:47:38 Speaker_07
I'm curious if we can sort of get into a future imagining mode here. Like in the year, whether it's 10 years or 15 years or 20 years from now, when maybe a majority of rides in at least big cities in the U.S.
00:47:52 Speaker_07
will be autonomous, do you think that changes the city at all? Like do the roads look different? Are there more cars on the road? Are there fewer cars on the road? What does that even look like?
00:48:02 Speaker_01
So I think that the cities will have much, much more space to use. Parking often takes up 20-30% of the square miles in a city, for example, and that parking space will be open for living, parks, etc. So there's no doubt that it will be a better world.
00:48:23 Speaker_01
You will have greener, cleaner cities, and you'll never have to park again, which I think is pretty cool.
00:48:30 Speaker_07
I'm very curious what you think about the politics of autonomy in transportation. In the early days of Uber, there was a lot of backlash and resistance from taxi drivers. And, you know, they saw Uber as a threat to their livelihoods.
00:48:43 Speaker_07
There were some, you know, well-publicized cases of sort of sabotage and big protests. Do you anticipate there will be a backlash from either drivers or the public to the spread of AVs as they start to appear in more cities?
00:48:57 Speaker_01
I think there could be. And what I'm hoping is that we avoid the backlash by having the proper conversations. Now, historically, society as a whole, we've been able to adjust to job displacement because it does happen gradually.
00:49:11 Speaker_01
And even in a world where there's greater automation now than ever before, employment rates, et cetera, are at historically great levels. But the fact is that AI is going to displace jobs. What does that mean? How quickly should we go?
00:49:26 Speaker_01
How do we think about that? Those are discussions that we're going to have. And if we don't have the discussions, sure, there will be backlash. There's always backlash against societal change that's significant. Now,
00:49:37 Speaker_01
We now work with taxis in San Francisco and taxi drivers who use Uber make more than 20% more than the ones who don't. So there is a kind of solution space where new technology and established players can win.
00:49:52 Speaker_01
I don't know exactly what that looks like.
00:49:54 Speaker_07
But that calculus does not apply to self-driving. You know, it's not like the Uber driver who's been driving an Uber for 10 years and that's their main source of income can just start driving a self-driving Waymo. You don't need a driver.
00:50:05 Speaker_01
No, you don't need a driver.
00:50:05 Speaker_07
It's not just that they have to switch the app they're using. It's that it threatens to put them out of a job.
00:50:10 Speaker_01
Well, listen, could they be part of fleet management, cleaning, charging, et cetera? That's a possibility. We are now working with some of our drivers. They're doing AI map labeling and training of AI models, et cetera.
00:50:24 Speaker_01
So we're expanding the solution set of on-demand work that we're offering our drivers, because part of that work, which is driving, may be going away, or the growth in that work is going to slow down, at least over the next 10 years.
00:50:40 Speaker_01
And then we'll look to adjust. But listen, these are these are issues that are real. And I don't have a clean answer for them at this point. Yeah.
00:50:48 Speaker_08
You brought up shared rides earlier. And you know, back in the day, I think when Uber X first rolled out shared rides, like I did that a couple of times. And then, you know, I don't know, I like got a raise at my job.
00:50:59 Speaker_08
And I thought, you know, from here on out, I think it's just gonna be me in the car. How popular do you think you can make shared rides? And like, is there anything that you can do to make that more appealing?
00:51:09 Speaker_01
Well, I think the way that we have to make it more appealing is to reduce the penalty, so to speak, of the shared rides. I think the number one reason why people use Uber is they want to save time. They want to have their time back.
00:51:20 Speaker_01
And a shared ride would, you know, you would get about a 30 percent decrease in price historically, but there could be a 50 to 100 percent time penalty. Yeah. We're working now.
00:51:30 Speaker_07
You might end up sitting next to Casey Newton. That would be cool.
00:51:34 Speaker_01
That'd be amazing. Although I would feel very short. Otherwise I would have no complaints. People, so far we've heard, don't have a problem with company. It really is time, and they don't mind riding with other people.
00:51:46 Speaker_01
There's a certain sense of satisfaction with riding with other people. But we're now working on it, both algorithmically and, I think, also by fixing the product. Previously, you would choose a shared ride and you'd get an upfront discount.
00:51:59 Speaker_01
So your incentive as a customer is to get the discount, but not to get a shared ride. So we would have customers gaming the system. They get a shared ride at 2 AM when they know they're not going to be matched up, et cetera.
00:52:10 Speaker_01
Now you get a smaller discount, and you get a reward, which is a higher discount if you're matched. So part of it is customers aren't working against us and we're not working against customers. But we're also working on the tech.
00:52:22 Speaker_01
We are reducing the time penalty, which is we avoid these weird routes, et cetera, that are going to cost you 50 percent of your time or 100 percent of your time. And then there's autonomous.
00:52:33 Speaker_01
If we are the only player that then has the liquidity to introduce shared autonomous into cities that lowers congestion, lowers the price, that's another way in which our marketplace can add value to the ecosystem.
00:52:45 Speaker_07
Got it. Speaking of shared rides, Uber just released a new airport shuttle service in New York City. It costs $18 a person. You book a seat. It goes on a designated sort of route on a set schedule. I don't have a question.
00:53:01 Speaker_07
I just wanted to congratulate you on inventing a bus.
00:53:04 Speaker_01
It's a better bus. You know exactly when it's coming to pick you up. Just knowing exactly where your bus is, knowing what your path is, in real time, it gives a sense of comfort. We think this can be a pretty cool product. And again.
00:53:18 Speaker_01
Is bus going to be hugely profitable for us long term? I don't know, but it will introduce us to a bigger audience to come into the Uber ecosystem. And we think it can be good for cities as well.
00:53:30 Speaker_01
If you're in Miami, by the way, over the weekend, we got buses to the Taylor Swift concert as well. So I'm just saying.
00:53:35 Speaker_08
Well, I mean, look, it should not be hard to improve on the experience of a city bus. Yeah. Like, you know what I mean? So I like city buses. When was the last time you were on a city bus?
00:53:44 Speaker_08
Well, I took the train here, so it wasn't a bus, but it was transit.
00:53:47 Speaker_01
He doesn't take shared, he doesn't take bus. I'm a man of the people. I like to ride public transit. You're an elitist.
00:53:53 Speaker_08
No, I would love to see a picture of you on a bus sometimes in the past five years, because I'm pretty sure that's never happened.
00:53:58 Speaker_01
Let me ask you this. I think we can make the experience better.
00:54:01 Speaker_08
So far I've resisted giving you any product feedback, Dara, but I have this one thing that I have always wanted to know the explanation for, and it's this.
00:54:09 Speaker_08
At some point in the past couple years, you all, when I ordered an Uber, started sending me a push notification saying that the driver was nearby.
00:54:16 Speaker_08
And I'm the sort of person, when I've ordered an Uber, Dara, I'm gonna be there when the driver pulls up. I'm not making this person wait, okay? I'm gonna respect their time.
00:54:24 Speaker_08
And what I've learned is when you tell me the driver is nearby, what that means is they're at least three minutes away and they might be two miles away. And what I want to know is why do you send me that notification?
00:54:35 Speaker_01
We want you to be prepared to not keep the driver waiting. Maybe we should personalize it. I think that's a good question, which is depending on whether or not you keep the driver waiting.
00:54:45 Speaker_01
I think that is one of the cool things with AI algos that we can do at this point. You're right. The experience is not quite optimized, but it's for the driver. It's for the driver.
00:54:55 Speaker_08
No, I get it. And if I were a driver, I would be happy that you were sending that. But you also send me this notification that the driver's arriving. And that's when I'm like, OK, it's time to go downstairs.
00:55:04 Speaker_08
But it sounds like we're making progress on this.
00:55:07 Speaker_01
I think the algorithm just likes you.
00:55:08 Speaker_08
It just wants to have a conversation with you. Yeah, they know that I love my rides.
00:55:11 Speaker_07
Yeah. Well, Casey has previously talked about how he doesn't like his Uber drivers to talk to him. And this is a man who doesn't share.
00:55:19 Speaker_08
Listen, this man likes to coast through life in a cosseted bubble. I mean, here's what I'm saying.
00:55:24 Speaker_08
If you're on your way to the airport at 6:30 in the morning, do you truly want a person you've never met before asking you who you're going to vote for in the election? Is that an experience that anyone enjoys?
00:55:32 Speaker_01
By the way, I drove, and reading the rider as to whether they want to have a conversation or not, I was not good at the art of conversation as a driver.
00:55:45 Speaker_00
Were you too talkative?
00:55:46 Speaker_01
No, no. Hey, how's it going? Are you having a good day? Going to work? And then I just shut up and have a nice day.
00:55:52 Speaker_08
To me, that's ideal.
00:55:54 Speaker_01
But I don't know if that's... No, that's perfect.
00:55:57 Speaker_08
That's going to give you all the information that you need.
00:55:59 Speaker_07
I'll be your driver any day. This is Casey's real attraction to self-driving cars is that he never has to talk to another human.
00:56:04 Speaker_08
Look, you can make fun of me all you want. I am not the only person who feels this way.
00:56:08 Speaker_01
Let me tell you. When I check into a hotel, same thing. Like, did you have a nice day? Yeah, but where are you coming in from? Let's not get into it.
00:56:16 Speaker_08
I would love to see you checking into a hotel. So did you have a nice day and you're like, well, let me tell you about this board meeting I just went to because the pressure I'm under, you don't want to hear about it.
00:56:26 Speaker_07
All right. Well, I think we're at time. Dara, thank you so much for coming. We appreciate it.
00:56:29 Speaker_08
It was fun.
00:56:32 Speaker_07
When we come back, well, AI is driving progress and it's driving cars. Now we're gonna find out if it can drive Casey insane. He watched 260 TikTok videos and he'll tell you all about it.
00:56:52 Speaker_07
Well, Casey, aside from all the drama in AI and self-driving cars this week, we also had some news about TikTok.
00:56:59 Speaker_08
One of the other most powerful AI forces on Earth.
00:57:02 Speaker_07
No, truly. Yes. I unironically believe that. Yeah, that was not a joke. Yes. So this week we learned about some documents that came to light as part of a lawsuit that is moving through the courts right now.
00:57:15 Speaker_07
As people will remember, the federal government is still trying to force ByteDance to sell TikTok. But last week, 13 states and the District of Columbia sued TikTok, accusing the company of creating an intentionally addictive app that harmed children.
00:57:29 Speaker_08
And Kevin, and this is my favorite part of this story, is that Kentucky Public Radio got a hold of these court documents and they had many redactions, you know, often in these cases, the most interesting sort of facts and figures will just be redacted for who knows what reason, but the geniuses over at Kentucky Public Radio just copy and pasted everything in the document.
00:57:48 Speaker_08
And when they pasted it, everything was totally visible.
00:57:51 Speaker_07
This keeps happening. I feel like every year or two, we get a story about some failed redaction. Like, is it that hard to redact a document?
00:58:00 Speaker_08
I'll say this, I hope it always remains this hard to redact a document because... I read stuff like this, Kevin, and I'm in heaven.
00:58:06 Speaker_07
Yes. So they got a hold of these documents. They copied and pasted. They figured out what was behind sort of the black boxes in the redacted materials. And it was pretty juicy.
00:58:15 Speaker_07
These documents included details like TikTok's knowledge of a high number of underage kids who were stripping for adults on the platform. The adults were paying them in digital gifts.
00:58:28 Speaker_07
These documents also claim that TikTok had adjusted its algorithm to prioritize people they deemed beautiful.
00:58:35 Speaker_07
And then there was this stat that I know you homed in on, which was that these documents said, based on internal conversations, that TikTok had figured out exactly how many videos it needed to show someone in order to get them hooked on the platform.
00:58:49 Speaker_07
And that number is 260.
00:58:54 Speaker_08
260 is what it takes. You know, it reminds me, this is sort of ancient, but do you remember the commercial in the 80s where they would say, like, how many licks does it take to get to the center of a Tootsie Pop? Yes.
00:59:03 Speaker_08
This, to me, this is the sort of 2020s equivalent. How many TikToks do you have to watch until you can't look away ever again?
00:59:11 Speaker_07
Yes, so this is, according to the company's own research, this is about the tipping point where people start to develop a habit or an addiction of going back to the platform and they sort of become sticky in the parlance of social media apps.
00:59:24 Speaker_08
In the disgusting parlance of social media apps, it becomes sticky.
00:59:28 Speaker_07
Yes. So Casey, when we heard about this magic number of 260 TikTok videos, you had what I thought was an insane idea. Tell us about it.
00:59:38 Speaker_08
Well, Kevin, I thought if 260 videos is all it takes, maybe I should watch 260 TikToks. And here's why. I am an infrequent user of TikTok. I would say once a week, once every two weeks, I'll check in, I'll watch a few videos.
00:59:54 Speaker_08
And I would say generally enjoy my experience, but not to the point that I come back every day. And so I've always wondered what I'm missing.
01:00:03 Speaker_08
because I know so many folks that can't even have TikTok on their phone because it holds such a power over them. And they feel like the algorithm gets to know them so quickly and so intimately that it can only be explained by magic.
01:00:18 Speaker_08
So I thought, if I've not been able to have this experience just sort of normally using TikTok, what if I tried to consume 260 TikToks as quickly as I possibly could and just saw what would happen after that?
01:00:33 Speaker_07
Not all heroes wear capes. Okay, so Casey, you watched 260 TikTok videos last night. Yeah. Tell me about it.
01:00:41 Speaker_08
So I did create a new account. So I started fresh. I didn't just reset my algorithm, although that is something that you can do in TikTok. And I decided a couple of things.
01:00:52 Speaker_08
One is I was not going to follow anyone, like no friends, but also no influencers.
01:00:57 Speaker_07
No enemies.
01:00:58 Speaker_08
No enemies, and I also was not going to do any searches, right? A lot of the ways that TikTok will get to know you is if you do a search.
01:01:05 Speaker_08
And I thought, I want to get the sort of broadest, most mainstreamy experience of TikTok that I can, so that I can develop a better sense of how it sort of walks me down this funnel toward my eventual interests, whereas if I had just followed 10 friends and done like three searches for my favorite subjects, I probably could have gotten there faster.
01:01:28 Speaker_08
And so, do you know the very first thing that TikTok showed me, Kevin?
01:01:31 Speaker_07
What's that?
01:01:31 Speaker_08
It showed me a 19-year-old boy flirting with an 18-year-old girl trying to get her phone number.
01:01:36 Speaker_08
And when I tell you I could not have been any less interested in this content, it was aggressively straight, and it was very young, and it had nothing to do with me, and it was not my business.
01:01:50 Speaker_08
And so, over the next several hours, this total process, I did about two and a half hours last night, and I did another 30 minutes this morning. And I would like to share, you know, maybe the first nine or ten things that TikTok showed me.
01:02:08 Speaker_08
You know, the assumption is it knows basically nothing about me. Yes.
01:02:11 Speaker_08
And I do think there is something quite revealing about an algorithm that knows nothing throwing spaghetti at you, seeing what will stick, and then just picking up the spaghetti afterwards and saying, well, what is it, you know, that I thought was interesting.
01:02:24 Speaker_08
So here's what it showed me. Second video, a disturbing 911 call, like a very upsetting sort of domestic violence situation. Skip. Three, two people doing trivia on a diving board and like the person who loses has to jump off the diving board.
01:02:37 Speaker_08
Okay, fine. Four, just a freebooted clip of an audition for America's Got Talent. Five, vegetable mukbang. So just a guy who had like rows and rows of beautiful multicolored vegetables in front of him who was just eating them.
01:02:54 Speaker_08
Six, a comedy skit, but it was like running on top of a Minecraft video. So one of my key takeaways after my first six or seven TikTok videos was that it does actually assume that you're quite young. That's why it started out by showing me teenagers.
01:03:09 Speaker_08
And as I would go through this process, I found that over and over again, instead of just showing me a video, it would show me a video that had been chopped in half, and on top was whatever the sort of core content was, and below would be someone is playing Subway Surfers, someone is playing Minecraft, or someone is doing those sort of oddly satisfying things.
01:03:30 Speaker_08
This is a growth hack. Someone is combing through a rug or whatever. And it's like, it's literally people trying to hypnotize you, right? It's like, if you just see the, oh, someone is trying to smooth something out or someone is playing with slime.
01:03:43 Speaker_07
They're cutting soap. Have you seen the soap cutting? Yes.
01:03:46 Speaker_08
Soap cutting is huge. And again, there is no content to it. It is just trying to stimulate you on some sort of like lizard brain level.
01:03:54 Speaker_07
It feels vaguely narcotic. Absolutely. It is like, yes.
01:03:57 Speaker_08
It is just purely a drug. Video number seven, an ad. Video number eight, a dad who was speaking in Spanish and dancing. I mean, it was very cute.
01:04:07 Speaker_07
Now, can I ask you a question? Yeah. Are you doing anything other than just swiping from one video to the next? Are you liking anything? Are you saving anything? Are you sharing anything?
01:04:16 Speaker_07
Because all of that gets interpreted by the algorithm as like a signal to keep showing you more of that kind of thing.
01:04:21 Speaker_08
Absolutely. So for the first 25 or so videos, I did not like anything, but because I truly didn't like anything, like nothing was really doing it for me.
01:04:30 Speaker_08
But my intention was always like, yes, when I see something I like, I'm going to try to reward the algorithm, give it a like, and I will maybe get more like that.
01:04:37 Speaker_08
So, the process goes on and on, and I'm just struck by the absolute weirdness and disconnection of everything in the feed.
01:04:49 Speaker_08
At first, truly nothing has any relation to anything else, and it sort of feels like you've put your brain into like a Vitamix, you know? where it's like, swipe, here's a clip from Friends. Swipe, kid's complaining about school.
01:05:03 Speaker_08
Swipe, Mickey Mouse has a gun and he's in a video game, right? Those are three videos that I saw in a row. And the effect of it is just like disorienting, right?
01:05:11 Speaker_07
Yeah, and I've had this experience when you like go onto YouTube but you're not logged in. You know, on like a new account, and it's sort of just, it's just showing you sort of a random assortment of things that are popular on YouTube.
01:05:22 Speaker_07
It does feel very much like they're just firing in a bunch of different directions, hoping that something will stick. And then it can sort of, it can then sort of zoom in on that thing.
01:05:31 Speaker_08
Yes, absolutely. Now, I will add that in the first 30 or so videos, I saw two things that I thought were like, actually disturbing and bad. Like things that have never, should never have been shown to me.
01:05:45 Speaker_07
Was it a clip from the All In podcast?
01:05:47 Speaker_08
Yes, no, fortunately it didn't get that bad. But one, there was a clip of a grate in like a busy city, and there was air blowing up from the grate. And the TikTok was just women walking over the grate and their skirts blowing up.
01:06:01 Speaker_07
That seems bad.
01:06:02 Speaker_08
Horrible, that's horrible. That was in the first 20 videos that I saw.
01:06:05 Speaker_07
Wow.
01:06:05 Speaker_08
It was this video, okay? I guess if you like that video, it says a lot about you, right? But it's not bad. The second one, and I truly, I do not even know if we are,
01:06:15 Speaker_08
We'll want to include this on our podcast, because I can't even believe that I'm saying that I saw this, but it is true.
01:06:21 Speaker_08
It was an AI voice of someone telling an erotic story which involved incest, and it was shown over a video of someone making soap.
01:06:33 Speaker_07
Wow. Like, what? This is dark stuff.
01:06:37 Speaker_08
This is dark stuff.
01:06:38 Speaker_07
Now, at what point did you start to wonder if the algorithm had started to pick up on your clues that you were giving it?
01:06:45 Speaker_08
Well, so I was desperate to find out this question because I am gay and I wondered when I was going to see the first gay content. Like, when it was actually just gonna show me two gay men who were talking about gay concerns. And it did not happen. Ever?
01:07:01 Speaker_08
No. It never quite got there. On this morning... In 260 videos. In over 260 videos. Now, it did show me queer people. Actually, do you know the first queer person, identifiably queer person, that the TikTok algorithm showed me?
01:07:15 Speaker_08
Are you familiar with the very popular TikTok meme from this year, very demure, very mindful? Yes.
01:07:20 Speaker_08
The first queer person I saw on TikTok, thanks to the algorithm, was Jools Lebron in a piece of sponsored content, and she was trying to sell me a Lenovo laptop. And that was the queer experience that I got in my romp through the TikTok algorithm.
01:07:36 Speaker_08
Now, you know, it did eventually show me a couple of queer people. It showed me one TikTok about the singer Chappell Roan, who is queer, so I'll count that. And then it showed me a video by Billie Eilish, you know, a queer pop star.
01:07:51 Speaker_08
And I did like that video. And now, Billie Eilish is one of the most famous pop stars in the entire world. I mean, like, truly. Like, on the Mount Rushmore of famous pop stars right now. So, it makes a lot of sense to me that TikTok would show me that.
01:08:01 Speaker_08
Also, incredibly popular with teenagers. And so, I liked one Billie Eilish video, and then that was when the floodgates opened, and it was like, okay, here's a lot of that. But just from, like, sort of scrolling, no, we did not get to the gay zone.
01:08:18 Speaker_08
Now, I did notice the algorithm adapting to me. So something about me was because again, I was trying to get through a lot of videos in a relatively short amount of time. And TikTok now will often show you three, four or five minute long videos.
01:08:28 Speaker_08
I frankly did not have the time for that. The longer I scrolled, the shorter the videos were that I got. And I do feel like the content aged up a little bit.
01:08:36 Speaker_08
You know, it started showing me a category of content that I call people being weird little freaks. You know, it's like somewhat, these are some real examples. A man dressed as the cat in the hat dancing to Ciara's song, Goodies.
01:08:52 Speaker_08
There was a man in a horse costume playing the Addams Family theme song on an accordion using a toilet lid for percussion.
01:09:01 Speaker_07
This is the most important media platform in the world. Yes, hours a day teenagers are staring at this. This is what it is showing them. We are so screwed.
01:09:12 Speaker_08
Yeah, you know, it figured out that I was more likely to like content about animals than other things, so there started to become a lot of dogs doing cute things, cats doing cute things, you know, other things like that.
01:09:24 Speaker_08
But, you know, there was also just a lot of, like, here's a guy going to a store and showing you objects from the store, or, like, here is a guy telling you a long story.
01:09:33 Speaker_07
Can I ask you a question? Like, was there any, in these 260 videos, were there any that you thought, like, That is a great video.
01:09:42 Speaker_08
I don't know if I saw anything truly great. I definitely saw some animal videos that if I showed them to you, you would laugh, or you would say that was cute.
01:09:49 Speaker_08
There was stuff that gave me an emotional response, and I would say particularly as I got to the end of this process, I was seeing stuff that I enjoyed a bit more, but I did this morning, I decided to do something, Kevin, because I'd gotten so frustrated with the algorithm, I thought, it is time to give the algorithm a piece of data about me.
01:10:07 Speaker_08
So do you know what I did? What'd you do? I searched the word gay. Very subtle. Which, in fairness, is an insane search query. Because what is TikTok supposed to show me in response?
01:10:19 Speaker_08
You can show me all sorts of things, but on my real TikTok account, it just shows me queer creators all the time, and they're doing all sorts of things. They're singing, they're dancing, they're telling jokes, they're telling stories.
01:10:28 Speaker_08
So I was like, I would like to see a little bit of stuff like that. Do you know the first clip that came up for me when I searched gay on TikTok to train my algorithm? What was it? It was a clip from an adult film. Now, like, explicit? Unblurred?
01:10:43 Speaker_08
It was from, um, and I don't know this, I've only read about this, but apparently at the start of some adult films, before the explicit stuff, there'll be some sort of story content, you know, that sort of establishes the premise of the scene.
01:10:55 Speaker_08
And this was sort of in that vein. Um, but I thought... If I just sort of said offhandedly, you know, oh, TikTok, yeah, I bet if you just search gay, they'll just show you, like, porn. People would say, like, it sounds like you're being insane.
01:11:08 Speaker_08
Like, why would you say that? That's being insane. Obviously, they're probably showing you their, like, most famous queer creator, you know, something like that. No, they literally just showed me porn.
01:11:19 Speaker_08
So it was like, again, so much of this process for me was like,
01:11:23 Speaker_08
hearing the things that people say about TikTok, assuming that people were sort of exaggerating or being too hard on it, and then having the experience myself and saying like, oh no, it's actually like that.
01:11:34 Speaker_07
That was interesting. An alternative explanation is that the algorithm is actually really, really good, and the reason it showed you all the videos of people being weird little freaks is because you are actually a weird little freak.
01:11:42 Speaker_08
That's true. I will accept those allegations. I will not fight those allegations.
01:11:47 Speaker_07
So, okay, you watched 260 videos. You've reached this magic number that is supposed to get people addicted to TikTok. Are you addicted to TikTok?
01:11:55 Speaker_08
Kevin, I'm surprised and frankly delighted to tell you I have never been less addicted to TikTok than I have been after going through this experience. Do you remember back when people would smoke cigarettes a lot?
01:12:10 Speaker_08
And if a parent caught a child smoking, the thing that they would do is they say, you know what? You're gonna smoke this whole pack and I'm gonna sit in front of you and you're gonna smoke this whole pack of cigarettes.
01:12:18 Speaker_08
And the accumulated effect of all that stuff that you're breathing into your lungs, by the end of that, the teenager says, dad, I'm never gonna smoke again. This is how I feel. After watching hundreds of these TikToks.
01:12:32 Speaker_07
So, okay, you are not a TikTok addict. In fact, it seems like you are less likely to become a TikTok power user than you were before this experiment.
01:12:39 Speaker_08
I think that's right.
01:12:40 Speaker_07
Did this experiment change your attitudes about whether TikTok should be banned in the United States?
01:12:46 Speaker_08
I feel so bad saying it, but I think the answer is yes. Not that it should be banned, right? My feelings about that still have much more to do with free speech and freedom of expression.
01:12:58 Speaker_08
I think that a ban raises a lot of questions about the United States' approach to this issue that just make me super uncomfortable. You can go back through our archive to hear a much longer discussion about that.
01:13:11 Speaker_08
If I were a parent of a teen who had just been given their first smartphone, hopefully not any younger than like 14, it would change the way that I talk with them about what TikTok is, and it would change the way that I would check in with them about what they were seeing, right?
01:13:27 Speaker_08
Like I would say, you are about to see something that is going to make you feel like your mind is in a blender, and it is going to try to addict you, and here's how it is gonna try to addict you.
01:13:37 Speaker_08
And I might sit with my child and might do some early searches to try to pre-seed that feed with stuff that was good and would give my child a greater chance of going down some positive rabbit holes and seeing less of, you know, some of the more disturbing stuff that I saw there.
01:13:53 Speaker_08
If nothing else, I think it was a good educational exercise for me to go through.
01:13:57 Speaker_08
And if there is someone in your life, particularly a young person who is spending a lot of time on TikTok, I would encourage that you go through this process yourself because these algorithms are changing all the time.
01:14:09 Speaker_08
And I think you do want to have a sense of what is it like this very week if you really want to know what it's going to be showing your kid.
01:14:15 Speaker_07
Yeah. I mean, I will say, I've spent a lot of time on TikTok. I don't recall ever getting done with TikTok and being sort of happy and fulfilled with how I spent the time. Like, there's a vague sense of, like, shame about it.
01:14:33 Speaker_07
There's a vague sense of, like, sometimes it, like, helps me turn my brain off at the end of a stressful day. It has this sort of, like, you know, this sort of narcotic effect on me.
01:14:43 Speaker_07
And sometimes it's calming, and sometimes I find things that are funny. But rarely do I come away saying, like, that was the best possible use of my time.
01:14:51 Speaker_08
There is something that happens when you adopt this sort of algorithm first, vertical video, mostly short form, infinite scroll. You put all of those ingredients into a bag, and what comes out does have this narcotic effect, as you say.
01:15:08 Speaker_07
Well, Casey, thank you for exposing your brain to the TikTok algorithm for the sake of journalism. I appreciate you.
01:15:17 Speaker_08
And, you know, I will be donating it to science when my life ends.
01:15:21 Speaker_07
People will be studying your brain after you die. I feel fairly confident. I don't know why they'll be studying your brain, but there will be research teams looking at it. Can't wait to hear what y'all find out.
01:15:47 Speaker_07
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Sophia Lanman, Diane Wong, Rowan Niemisto, and Dan Powell.
01:16:01 Speaker_07
Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. As always, you can watch this full episode on YouTube at youtube.com/hardfork.
01:16:11 Speaker_07
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at [email protected].
01:16:39 Speaker_03
From The New York Times, you're listening to The Wirecutter Show.
01:16:43 Speaker_05
Hey everyone, it's The Wirecutter Show. I'm Caira Blackwell. I'm Christine Cyr Clisset.
01:16:48 Speaker_04
And I'm Rosie Guerin, and we work at Wirecutter, the product recommendation site from The New York Times.
01:16:55 Speaker_05
Each week, we bring you expert advice from our newsroom of 140 journalists who review everyday products that will make your life better.
01:17:03 Speaker_03
Today's episode of The Wirecutter Show is called The Secret to Better Laundry.
01:17:13 Speaker_05
Hey, y'all. It's our very first episode. Hooray!
01:17:17 Speaker_04
Love it. Our inaugural episode. Should we tell people a little bit about who we are and what we're doing here?
01:17:22 Speaker_03
Yeah.
01:17:22 Speaker_04
Absolutely. So, Christine, what is Wirecutter? What does Wirecutter do?
01:17:25 Speaker_03
Well, Wirecutter is a product recommendation service. We're part of the New York Times company. We have about 140 journalists who do rigorous product testing. And what we mean by that is they are very serious about the products they test.
01:17:38 Speaker_03
What are we talking about here? So, I'll give a couple examples. We've done a guide to hiking boots, and for that one, our writers tried out 55 pairs of boots and hiked 1,400 miles over seven years. I mean, that's a lot of steps, you know?
01:17:52 Speaker_03
Another example, this one's kind of wild. We did a review of fire safes, and our writer actually built a room and burned the room with all these fire safes inside of it. I mean, that's the kind of lengths our journalists go to test things.
01:18:07 Speaker_04
Tried and true service journalism.
01:18:10 Speaker_03
Absolutely. Committed. And we're also independent, which that means we don't let companies pay us to review their products and we don't take freebies. Our recommendations are based only on what we really believe is the best stuff.
01:18:23 Speaker_03
It's a really fun place to work, and I'm excited to introduce listeners here to the amazing people I work with, like Caira. Hi.
01:18:31 Speaker_04
Caira, who are you?
01:18:33 Speaker_05
So I'm a writer, and I cover all things sleep, and I test mattresses. People find it hilarious because I get to say that I sleep for a living, and I've probably tested about 100 mattresses at this point for work.
01:18:46 Speaker_05
I've been at Wirecutter for four years, but Christine here has actually been here longer than me.
01:18:51 Speaker_03
That's right. I'm OG. I've been here 11 years, nearly since the site was started, and I'm an editor. How about you, Rosie? Who am I? Yeah, who are you?
01:19:01 Speaker_04
I'm the show's executive producer, and I'm kind of going to be your producer's sidekick here to pose the questions that, you know, us normies have.
01:19:10 Speaker_05
Well, we're so happy to have you here.
01:19:12 Speaker_04
Thanks, Caira. So to that end, why are we all here? Let's tell the people. I mean, what are we doing on the show? What's the plan?
01:19:18 Speaker_05
Look, we get tons of questions from people about problems they encounter or think that they're just trying to hack or solve. And our staff has a lot of expertise to share. Right.
01:19:29 Speaker_03
So we want to bring all of that to audio every week. We're going to sit down with a Wirecutter journalist and talk through a topic that we think people could use some advice on.
01:19:39 Speaker_03
We'll give product recommendations like we always do on the site, but this is going to be a lot more than that. We're going to talk through solutions, ideas, hacks, and ways that you can really make your life better.
01:19:50 Speaker_03
And sometimes we'll even invite outside experts to come on. I love it.
01:19:54 Speaker_05
And that's why I think today's episode is so good to start with, because it's a problem that we all have. But maybe we don't realize that we have. And I am talking about laundry.
01:20:05 Speaker_04
Why are you looking at me? No, you're right. I don't think I ever properly learned to do my laundry. I really hope my mom's not listening.
01:20:17 Speaker_03
Rosie, I'm going to make the confession that I, too, don't think that I know how to do my laundry.
01:20:21 Speaker_04
I'm in such good company.
01:20:22 Speaker_03
Yeah, I mean, I'm a lady. I'm a grown lady. You are a lady. Maybe that's why I don't know how to do my laundry, because I'm a lady. Yeah. And someone else should be doing it for me. A lady of leisure. Yes, yes.
01:20:31 Speaker_03
The issue, like, the entire time as an adult, I feel like, is that I've just never really been able to figure out how to get stains out. And so my entire family is just always walking around with, like, something on their clothes.
01:20:42 Speaker_04
The whole process, I will say, is very intimidating, Caira.
01:20:47 Speaker_05
It's a myth. Don't even ask me. I don't know. It's a myth. My mom taught me how to like separate lights from darks. And it's not the 1920s. I feel like we don't need to do that anymore. And that's where I'm at right now.
01:20:59 Speaker_04
That's a hot take. But there's a lot that I don't know.
01:21:02 Speaker_03
Yeah, and you know what? That's why I'm really glad we have our special guest today.
01:21:06 Speaker_03
Because we are Wirecutter, we do happen to have someone on staff whose entire job it is to think about laundry, test laundry solutions, and basically give great advice about laundry.
01:21:19 Speaker_03
Andrea Barnes is a staff writer and industry expert in all things laundry, and she's going to join us to tell us the best laundry products and how to use them properly.
01:21:28 Speaker_04
Can't wait. And so listeners know we're going to be dropping links to the guides and products we talk about in this episode and all our episodes right in the show notes.
01:21:37 Speaker_04
So if you hear us talk about something and want to know more, just head over there.
01:21:44 Speaker_05
Does Tide live up to the hype? Do we need to use hot water on our dirty clothes? Can anything get cooking oil stains out of my favorite T-shirt? We'll find out in just a minute.
01:22:10 Speaker_05
We're back with Andrea Barnes. Andrea is Wirecutter's premier expert on big home appliances like dishwashers and washing machines, but we're not even really going to focus on that today.
01:22:20 Speaker_05
All of her appliance testing has also led her to become our expert on all things laundry.
01:22:26 Speaker_05
She's written a ton of our most popular guides and articles in the category, like for detergent, washing your tennis shoes, washing things by hand, and stain removal.
01:22:35 Speaker_02
Hi, Andrea.
01:22:36 Speaker_03
Hi. It's nice to be here. Nice to see you. So Andrea does a ton of laundry and tests out different washers and dryers and obviously detergents. And the Wirecutter office is this just massive space in Long Island City.
01:22:51 Speaker_03
You know, there's like weird clothing hanging everywhere that's stained with like egg and lipstick and grass and obviously like lots of detergent all over the place on carts.
01:23:01 Speaker_03
And I don't know, it's like this weird Willy Wonka's chocolate factory of laundry over there. I want to know what you're doing because I feel really undereducated on the topic.
01:23:10 Speaker_03
But first, you're testing our laundry IQ today with a quiz so you can judge our baseline on how much or little we might know about what's coming up. It's really like how little we know. We're going to come in humble.
01:23:23 Speaker_02
Yes, I have a little true-false quiz for you. So question one, true or false, soda water is the best thing to remove a fresh red wine stain. True!
01:23:32 Speaker_03
I was so enthusiastic about that. I love it. So somebody spilled red wine all over me at my wedding, and I was wearing a white wedding dress, and my mom's best friend, who's an amazing laundress, she got it out with soda water. Laundress?
01:23:46 Speaker_03
She swooped in and she was like, I got this.
01:23:48 Speaker_02
Is it true?
01:23:48 Speaker_03
Yeah, she was like, we are doing this.
01:23:50 Speaker_02
Yeah. I'm glad that that worked, but I would actually say false, according to our testing. Oh, no! I'm sure that soda water can work, but we found that the best thing for fresh red wine stains is actually white wine. What? Oh, wow. Yes.
01:24:05 Speaker_05
So you pour white wine on top of the red wine?
01:24:07 Speaker_02
Exactly.
01:24:08 Speaker_05
And it just cancels out? It cancels it out.
01:24:09 Speaker_03
Well, I mean, I did drink a whole bottle of white wine after that person spilled the red wine on me and the stain did come out. So it did kind of work. That always works.
01:24:17 Speaker_04
Arguably for different reasons. All right, question two, what do you got?
01:24:25 Speaker_02
True or false? A top-load washer cleans better than a front-load washer.
01:24:29 Speaker_04
True?
01:24:30 Speaker_02
No, it's false. Even though they use less water, actually a front-load washer is way better at removing stains than a top-load washer. Huh. It doesn't have that little thingy in the middle, though. I know.
01:24:43 Speaker_02
It's really confusing, but it's actually the friction that really makes the front load washer work.
01:24:47 Speaker_04
It's the friction, not the thingy.
01:24:48 Speaker_02
Wow. Yeah. Who knew? Science. Yeah. Okay. Next. Okay. So, you need fabric softener. True or false? False. False. You are correct. It is false. It's actually just fat that's usually added to the end of a cycle. Oh. Fat? What? Yeah. It's fat.
01:25:04 Speaker_02
We actually used to use it in college to condition our hair. Oh, my gosh. I wouldn't say do it now, but it worked really well.
01:25:09 Speaker_04
The olden times were wild. Absolutely wild. I love it.
01:25:16 Speaker_02
OK, so the last question. In order to get stains out, you need to wash with hot water.
01:25:20 Speaker_05
This has to be true.
01:25:21 Speaker_02
No, it's actually not. What if my cat puked on my bed? Especially cat puke should be washed with cold water. Stop it. Oh my gosh. Really? Yes, because it has protein in it and protein needs cold water to be washed out.
01:25:34 Speaker_05
Wait, so have I just been baking it deeper into my comforter? You may have. Did the smell go away when you did it? The more you know, Kyra. I would like to think so, but who knows? I have so many, like, candles lit at all times.
01:25:49 Speaker_02
I mean, warm water will get it out and friction helps, but, you know, cold water is definitely more efficient. All right.
01:25:54 Speaker_04
Well, I think Andrea proved our point. We have some laundry tips and knowledge that we need to acquire ASAP.
01:26:01 Speaker_03
Yeah, I think that's about right. Okay, so let's get into it, Andrea. I want to start with getting a better sense of how you even test something like laundry detergent.
01:26:10 Speaker_02
Oh, that's a great question. You know, initially when I started designing the laundry detergent tests, the idea was to put common stains on T-shirts. I polled the wire cutter writers and asked them, what are the worst stains you've had?
01:26:23 Speaker_02
What are the things that have been hardest to remove? And most of them were pretty typical, like lipstick, ink, cooking oil. And we stained all these shirts, we very quickly realized that it's actually very hard to recreate difficult stains.
01:26:38 Speaker_02
And that most of the stains that you recreate, especially because they're only sitting for a day or two before you wash them, almost everything was pretty easy to wash out.
01:26:48 Speaker_04
So when you say recreate, are you taking a bottle of Heinz and just mushing it into some new T-shirt?
01:26:54 Speaker_02
Totally, exactly. Wow. And so, with that in mind, I called one of my sources, who's a former washing machine developer from Whirlpool. And I asked him, what would you do in this situation? Because everything's cleaning out pretty easily.
01:27:10 Speaker_02
And he said, you've got to call Clorox and get their standardized test. And he described this swatch with 15 different stains on it that are all the same size. And Clorox, of course, was not going to give me that, right?
01:27:25 Speaker_02
But I did find the place that Clorox buys it from, which is a German company that has a dealer in Pennsylvania. So to order these swatches, we actually have to email them and give them the credit card information and all this stuff.
01:27:40 Speaker_02
Like, we can't even do it online. It took forever. But this machine is awesome because it uses the exact same amount of whatever the stain material is, and it presses them down under thousands of pounds of pressure.
01:27:54 Speaker_02
So you have a standardized stain set. And the stains are pretty typical stains you encounter in daily life. So there's tea. There's mustard, there's something that's called beta-carotene, which is more like a carrot or a sweet potato stain.
01:28:07 Speaker_02
Then there are some that you're not necessarily encountering every day. Like, there's one that's used engine oil, which, you know, we joked about going to find a mechanic who would give us oil rags, but this seemed like a better solution.
01:28:21 Speaker_02
But the idea here is not to remove all of the stain. It's actually really just comparative data. What it does is it shows us how good a laundry detergent or stain remover is by how much stain they remove.
01:28:32 Speaker_02
So the idea really isn't to see a blank T-shirt or a blank piece of jersey cotton at the end. It really is to see some stain left.
01:28:40 Speaker_04
I don't want to turn anybody's stomach, but I'm very, very curious because I think a comment, especially if you have kids, my kids fall and scrape their knees all the time. Blood?
01:28:51 Speaker_02
I'm afraid to ask. I became very good friends with the butchers at Paisano's on Smith Street in Brooklyn, where they sell both pork and beef blood for different recipes.
01:29:04 Speaker_02
And I bought, I think we probably, between laundry detergent testing and stain remover testing, I think I probably bought six quarts of blood.
01:29:12 Speaker_04
Are you on some sort of list?
01:29:15 Speaker_02
I know, I've been wondering about that. Do they know what you want the blood for? They probably just think I'm making a recipe. But for testing, we left several t-shirts overnight for three or four days after being stained with blood.
01:29:34 Speaker_02
That was actually one of the best tests. Like, some things that could get those stains out were good detergents or good stain removers. Yeah, I'm just seeing, like, a Carrie scene at the office.
01:29:44 Speaker_03
Like, isn't that what they used?
01:29:45 Speaker_04
That movie actually was about a girl, I think, who was in training to become a detergent tester. That's a little-known fact.
01:30:03 Speaker_03
It's something that I personally – I don't know about the rest of you, but I just use liquid detergent because it's what I buy and that's what I use, and I don't really do anything else.
01:30:13 Speaker_03
But from reading your work, it sounds like there is a difference between liquid and powder and when you want to use those. So can you talk a little bit about that?
01:30:20 Speaker_02
Sure. I mean, I think for most people, liquid detergent is probably the best choice. I mean, there are great products on the market.
01:30:29 Speaker_02
And most people are removing body oils from their clothing, and a lot of people aren't actually removing that many stains from their clothing. I mean, honestly, the water itself does a lot of the work, too, right?
01:30:42 Speaker_02
If you have a job where you're outdoors a lot and you're working a lot with clay or mud or particulate soils is what they would be called, powder detergent's probably a better choice because
01:30:57 Speaker_02
powder detergent is going to work with those stains better because of the way it's formulated.
01:31:03 Speaker_02
And also, if you just have a lot of stains in your clothes in general, powder detergent's a nice choice because most of them have oxygen bleach or some other sort of non-chlorine bleach built into them, so there's a little bit more stain-removing power.
01:31:15 Speaker_02
And you can use those on all colored fabric? That's okay? Yes. Totally different than chlorine bleach, which is not safe to use on all sorts of fabrics. But oxygen bleach is
01:31:23 Speaker_02
basically hydrogen peroxide, so you're going to want to test a small part of your clothing to make sure that it can withstand the oxygen bleach. But we have yet to run into something, unless it's a really poorly dyed item, that can't be used with oxygen bleach.
01:31:38 Speaker_02
Like the most famous oxygen bleach is like OxiClean, right? Yes. And we tested it. We didn't love it in testing. OK. Wow. That guy was lying to me in one of those infomercials.
01:31:48 Speaker_08
One of those great infomercials. Watch how OxiClean unleashes the power of oxygen, making tough stains disappear like magic.
01:31:58 Speaker_02
OxiClean didn't do great in testing because it takes a really long time to dissolve. It really needs hot water to dissolve, and it took a lot longer to remove stains than other detergents that we tested.
01:32:13 Speaker_02
Well, OxiClean's not a detergent, but we tested Tide Ultra Oxi, which is a powder Tide with oxygen bleach, and it did a much better job with removing things than OxiClean did.
01:32:23 Speaker_05
I read in one of your guides that the thing that really picks up stains is this thing in it called enzymes, and that's what really eats away at the stains, so you don't really need bleach to get out a stain.
01:32:34 Speaker_02
No. For most stains, you don't need bleach. I say to use oxygen bleach when it's very specific kinds of stains. Basically anything that you would use as a natural dye if you were making dye in your own clothing.
01:32:46 Speaker_02
So like tea, coffee, fruits, things where the stain really changes the fabric versus just food stains, which are pretty topical and enzymes are really good at getting those.
01:32:59 Speaker_03
Okay, so we can basically just throw out all of our chlorine bleach and we don't need it?
01:33:04 Speaker_02
No, not for your regular clothes. You don't need chlorine bleach, but it's still a good disinfectant.
01:33:08 Speaker_04
And oxygen bleach is for the stains that are really set in and kind of changing the color of the fabric, right? Like wine or something?
01:33:15 Speaker_02
Yes. Soaking in oxygen bleach will get the most stubborn stains out, and it's great for things like wine or tea or coffee that really do soak into the fabric and are very hard to get out.
01:33:26 Speaker_05
So when you say that those like oxygen detergents are basically just hydrogen peroxide, I know there's always like a DIY community in everything. When there's a product, there's a DIY product. But why can't you just make your own detergent?
01:33:41 Speaker_02
I mean, you could if you want to spend a lot of time making something that doesn't work as well.
01:33:46 Speaker_03
That's like a lot of DIY in general.
01:33:55 Speaker_04
You were mentioning when we were talking earlier this phrase that is kind of a shorthand that's sometimes helpful. Is it something like, like, likes, likes?
01:34:06 Speaker_02
Yes. What is this? It's a solvent term. Like, likes, like. So, for example, when you have an oil stain, liquid detergent works really well because the surfactants in liquid detergent behave as an oil, so they are better absorbed into oil stains.
01:34:25 Speaker_02
basically the best way to look at it is that there's water stains and there's oil stains, right? The solvents that you want to use to get rid of those stains are going to be similar in property.
01:34:36 Speaker_02
So remember I said that if you're working around a lot of clay or dirt, powder detergent is better. That's why. It's similar, right? So they behave similarly and they absorb with each other better and combine better.
01:34:49 Speaker_03
So this is why my kids' muddy, dirty clothing never gets back to, like, zero. Yeah, it's just, like, I'm only using liquid. And so it's not picking up the dirt.
01:35:01 Speaker_02
Yeah. I mean you can try pre treating with with a liquid detergent. I find that works really well but you might just try powder detergent too. It might work.
01:35:09 Speaker_04
What exactly is the process of pre-treating?
01:35:12 Speaker_02
So pre-treating a stain is kind of how it sounds. You really are just putting the stain remover or the laundry detergent on the stain beforehand, and it's usually somewhere between 5 minutes to 20 minutes before you put it in the washing machine.
01:35:25 Speaker_02
Got it. For almost every stain, if you have laundry detergent on hand, you probably can just use that for pre-treating. We recommend a different, all-purpose stain remover for pre-treating, called Amodex.
01:35:41 Speaker_02
But the reason we recommend that is because it removes other stains that laundry detergent is not always good at getting out. So those two are actually makeup. and permanent ink.
01:35:51 Speaker_02
But otherwise, I would say you can pre-treat with laundry detergent for most stains. And it is a good hack for laundry.
01:35:57 Speaker_02
So I don't know if any of you have ever had like a marinara stain on your shirt and you don't pre-treat it, you throw it in the wash and then you come out. I've never done it.
01:36:06 Speaker_04
I've never not treated my stains. Not ever.
01:36:09 Speaker_02
Never have I forgotten. She's lying, Your Honor.
01:36:16 Speaker_02
Yeah, so, I mean, I will admit to you all that when we were doing laundry detergent testing and one of my sources told me to soak oil stains in warm water and liquid laundry detergent, I was able to get oil stains out.
01:36:29 Speaker_02
You know those dark oil stains you get on colorful fabric? And it'll be like the same. It's just a slightly darker shade. I think my daughter's goal in life is to have every article of clothing have these stains.
01:36:43 Speaker_02
But I was able to get these stains that had been in three or four washes and dry cycles out from the soaking. So, you know, there are a lot of different things you can do for stain removal.
01:36:54 Speaker_02
Pre-treating is great, but it doesn't necessarily mean that you're totally in trouble if you still have stains coming out of the wash later. Got it.
01:37:04 Speaker_03
So I've heard you mention Tide a couple times. Why? And I think both of the powdered detergent, which is which, what is that again, the one that we like? Tide Ultra Oxi. Tide Ultra Oxi. And then the liquid we like is Tide Free and Gentle. Gentle and Free.
01:37:19 Speaker_03
Gentle and Free. No. Why do we like Tide? I mean, like, we're not shilling for Tide. There is no Tide commercial on this podcast.
01:37:29 Speaker_02
So what's the deal? You know, it's interesting. I really went into testing wanting to not recommend Tide because I had a lot of misconceptions over being allergic to it. And then the more testing that I did and the more research I did, I learned that
01:37:45 Speaker_02
The ingredient that most people are allergic to in laundry detergent, it's called MI or MCI. It's a chemical preservative. And it's in even, quote-unquote, natural liquid detergents, almost all of the ones we tested contain it.
01:37:59 Speaker_02
And what's interesting is that people perceive that that's what they're allergic to. One, almost all of it's rinsed off by your washing machine. But the other is that it's the same preservative as probably in your shampoo.
01:38:11 Speaker_02
your shower gel, anything that you're using that needs to be preserved. So that's actually probably what's causing your allergy. I mean, we talked to multiple dermatologists about this.
01:38:21 Speaker_02
So that said, we partly chose Tide Free and Gentle because it doesn't have that preservative. And so they had all this money and all this R&D to go into creating a laundry detergent that lacks these allergens.
01:38:33 Speaker_02
It was the combination of the hypoallergenic detergent with the fact that it removed stains so well that made it come out on top. So, in the end, Tide just kept the — I don't know — they're doing something really good there. Right.
01:38:45 Speaker_02
I mean, they have a big budget, right? Yes.
01:38:48 Speaker_03
They have a big budget to spend on R&D, which is something that we kind of find across categories at Wirecutter that, like — We're kind of sometimes surprised that the big companies do come out ahead because they just have so much money.
01:38:58 Speaker_05
I'm kind of relieved when like the big names are actually doing something right because then you can get it anywhere. Like if you run out, you know, liking niche stuff sucks. It's so true.
01:39:12 Speaker_04
Okay, so Wirecutter recommends Tide Free and Gentle liquid detergent for most laundry. For really stained stuff, dirt-stained stuff, Wirecutter recommends Tide Ultra Oxi.
01:39:24 Speaker_04
And I trust these recommendations, frankly, based off all of the blood testing and stuff we discussed.
01:39:30 Speaker_03
Oh, yes.
01:39:31 Speaker_05
You know what, I think we're learning. We're doing good, guys. Okay, but I do have some, like, graduate-level questions I need to just throw in here, like about dry cleaning and pods and just more questions about stains.
01:39:44 Speaker_03
And how to make eco-friendly choices at the laundromat. But first, a break. Welcome back to The Wirecutter Show. Today we're getting a master class in laundry tips from Andrea Barnes.
01:40:14 Speaker_03
Andrea is Wirecutter's staff writer on large appliances and all things laundry. So, so far we've covered some detergent basics, but now we're going to get into some of the nitty gritty details, literally.
01:40:26 Speaker_05
Okay, so what if you have really nice or vintage items like cashmere or silk? Do you have to get those dry cleaned?
01:40:35 Speaker_02
It's really going to depend on the item, but I would say the vast majority of the time, no. First of all, you most likely can use regular detergent, Tide Free and Gentle on a lot of your dry clean only items if you use a mesh bag and a Gentle Cycle.
01:40:49 Speaker_02
I wouldn't do this with something that you really love, but there are a few other options. We recommend a hand wash detergent called Soak.
01:40:56 Speaker_02
Which is really great, and what you can do is actually hand wash your garment, and you soak the garment with this detergent, which is called Soak, and that's confusing.
01:41:10 Speaker_02
And it's a no-rinse detergent, so you don't have to handle the clothing for that long, right? So you submerge it for 15 minutes, and then you just take the item out and press the water out, and it's ready to go. The detergent evaporates really quickly.
01:41:25 Speaker_03
Wait, and so you don't have to rinse it? No. What if you have allergies? Isn't there something on it that will... So that's a great question.
01:41:32 Speaker_02
The preservative we talked about earlier is not in this hand wash detergent, which is good. And then we interviewed the owner of the detergent company.
01:41:43 Speaker_02
They did third-party testing that showed less than, like, 0.00005% of the residue is left on the — because you also use a tiny amount. Like, we're talking maybe two teaspoons for several gallons, so there's not much left to begin with.
01:41:59 Speaker_02
I would say that that's ideal for, you know, if you have a cashmere sweater that you've worn a bunch of times and it hasn't really been stained, but you want to freshen it up, that would be a great option. If it's something that's really
01:42:13 Speaker_02
You got stains all over it or you bought a vintage item as is. What we learned is that the best thing to do is to use pure sodium percarbonate, which is oxygen bleach, and soak for hours.
01:42:26 Speaker_02
So, in that case, we wouldn't use something like Tide because there's builders and fillers in it that you won't necessarily want on your clothing. But pure sodium percarbonate is great for soaking your vintage items.
01:42:38 Speaker_02
And the product we really like for that in testing is called Restoration. And it's pure oxygen bleach, there's nothing else in it. So OxiClean has fragrance and other fillers in addition to oxygen bleach, and this is just oxygen bleach.
01:42:53 Speaker_02
How do you test these? How are you testing these vintage... So for this, we did an... I'm not going to give you my keyword search.
01:43:06 Speaker_02
But I looked for items on eBay that were stained, and we ordered a lot of used linens that looked like they had no hope left. And the craziest part, so we got this huge duvet bag filled with tablecloths and napkins and linens.
01:43:24 Speaker_02
And they had stains, but the best part slash worst part was that we realized when we opened the bag, when we unzipped it, that the person was definitely a smoker. And it smelled like stale, like, cigarettes. Like, really, yeah.
01:43:40 Speaker_04
Not a lot of competing bids.
01:43:43 Speaker_02
What? They sent it in 20 seconds after we ordered it.
01:43:46 Speaker_04
They were like, it's at your door. Check outside. We were praying you would buy this.
01:43:52 Speaker_02
So we found this and it was all delicates like lace and linen. Again, I called a source. This time I called our source, Miki Evans, who's amazing. And she is the assistant wardrobe supervisor at The Notebook, the musical on Broadway. Amazing.
01:44:07 Speaker_02
And yeah, she's great. And she told me about Restoration, this oxygen bleach product that you can buy for soaking vintage items. So we tried it against, I think, six or seven other oxygen bleaches, and this one was by far the best one.
01:44:24 Speaker_02
And we felt, because it had no fillers, was what I would call the least risky one. Did it take out the cigarette smell? It did. It did. Wow. Yeah. And we tested a bunch of hand-washed detergents with this lot of smoky linens.
01:44:40 Speaker_02
And there were definitely some that did not take the stink out.
01:44:45 Speaker_04
Was that your keyword search? Smoky linens?
01:44:47 Speaker_02
Stinky linens. Sounds like an eyeshadow color. Smoky linens.
01:44:56 Speaker_05
So one thing we actually haven't gotten into is laundry pods.
01:44:59 Speaker_04
Oh, yeah. We got to talk about laundry pods.
01:45:00 Speaker_05
We got to talk about the pods. So, Andrea, what's the deal with laundry pods?
01:45:05 Speaker_02
I initially didn't really want to recommend pods because you can't pre-treat with them. It's so concentrated that it actually doesn't absorb well into stains for pre-treating. So you have to add water and you're dealing with pod films.
01:45:19 Speaker_02
So all these things make it not a great choice, in my opinion, for pre-treating. But when we started having paid testers come to the office... And let me just interject.
01:45:30 Speaker_03
Paid testers are people who are not on our staff that we bring in to test with us. And basically, they're folks who have like a variety of different abilities and body types.
01:45:42 Speaker_03
And we like to bring them in and get their feedback to get a wider diversity of opinions about the products we're testing.
01:45:48 Speaker_02
Yes, and some of whom have limited mobility and limited grip strength. I observed very quickly that pods were absolutely 100 percent the best option for them when operating either washing machines or dishwashers.
01:46:03 Speaker_02
And so it made sense to me to make a recommendation for that.
01:46:06 Speaker_05
Yeah, that does make sense. I actually had a little revelation because I don't have washer dryer in my unit, but my partner does. And whenever I go over there, it's like I've never done laundry ever in my life.
01:46:17 Speaker_05
I'll just toss a pod in the little compartment that they have in those washing machines. But I've learned recently that you're not supposed to do that. Why is that?
01:46:26 Speaker_02
It totally depends on the washing machine. But the way the dispenser works is that water goes through the whole thing. And if it's not a very strong stream, it can be very hard for the pod to dissolve.
01:46:38 Speaker_02
Basically, you need the water to be strong enough to pierce the pod and start everything moving. Got it. Yeah. All right. Doing it wrong.
01:46:46 Speaker_03
OK, you talked a little bit earlier about temperature and you told us earlier that you're actually supposed to be using cold water. But is there any time that you should be using warm water or hot water?
01:46:57 Speaker_02
I would say hot water only if you're sanitizing something, like if you had sheets from someone who was sick.
01:47:05 Speaker_04
My kids are about to potty train, for instance.
01:47:08 Speaker_02
Well, see, now pee stains I would say you could do in cold water because urine is pretty sterile. But like a virus or something like that. Yeah, exactly. I would say hot water is appropriate then.
01:47:21 Speaker_02
But, you know, one thing people always think, they see blood and they assume, oh, I should wash it in hot water, and that's actually a great way to stain your clothing more. You absolutely need cold water for removing blood. Oil stains need warm water.
01:47:32 Speaker_02
To kind of melt the fat. Yeah, exactly. Well, the warm water emulsifies it, right? Yeah, and lifts it.
01:47:38 Speaker_03
What about preserving? I mean, I know because I wash a lot of fabric to sew with, and I know that I sometimes wash stuff in hot water, but I'm wondering, does that do anything to the color long term if you do wash in warm or hot water?
01:47:53 Speaker_02
Warm and hot water can definitely degrade and fade dyes faster. Since I've switched to doing cold water washing, I don't see fading on my clothing in the same way that I did before.
01:48:07 Speaker_05
What about odor? I know you're saying pee is okay, even blood is okay, but I just feel like the warm water will take away that smell. Is that totally wrong?
01:48:16 Speaker_02
No, that's not wrong, and that's why we recommend what we do, because the liquid detergents we recommend removed odor even in cold water. I see.
01:48:25 Speaker_04
How do you do the odor testing?
01:48:27 Speaker_02
Well, first of all, my informal odor removal testing is having a teenage son. Oh, yeah. That'll do it. But formally, what we did is we burned bacon and used burned bacon grease, which really smells, like really smells terrible.
01:48:45 Speaker_02
And we stained T-shirts with it and then had a panel of people decide which ones removed the most and which ones smelled the worst. And in testing, the picks we made all removed that odor really well, if not entirely.
01:49:03 Speaker_03
Okay, now that we have fully graduated, we almost are at PhD level here with stain fighting.
01:49:10 Speaker_03
Andrea, I'm curious, you know, if somebody's just really concerned about the environmental impacts of what they're doing in their home and they want to make better choices about their laundry, like, what are the things that they should be doing?
01:49:23 Speaker_02
That's great. The number one thing I would say to anyone is to wash in cold water when you can. Cold water is just a better choice in terms of energy efficiency. So washing machines have internal heaters that warm the water.
01:49:40 Speaker_02
And when you wash in hot water, almost all of the energy that's being used in that wash cycle is to heat the water. So when you wash in cold water, you don't use that energy. So that's probably the best thing you can do.
01:49:56 Speaker_02
I would say air dry when you can. So get a drying rack. Pre-treat stains so that clothing lasts longer. And I would use less detergent. When you say less detergent, how much do you mean? Yeah, how much do you mean? Yeah. You know, this is—it's interesting.
01:50:10 Speaker_02
This is a tough question because now so many laundry detergent companies are coming out with hyper-concentrated detergent, but I would say two tablespoons. For like a big load? If you—if it's not heavily stained, yes.
01:50:22 Speaker_05
You know how they give you those caps with the little measuring thing on top and it's like literally a cup?
01:50:28 Speaker_04
Yeah. In college, I think I was putting two, two and a half cups in it.
01:50:33 Speaker_02
Why is my clothing slimy?
01:50:36 Speaker_05
Oh, yeah, that is what it would be, huh?
01:50:37 Speaker_02
Yeah. Yeah, you'll get residue and you just don't need that much. It's just unnecessary. And that is how you get allergies because it'll be right up against your skin. Laundry detergent sheets. No. Okay. They didn't clean well.
01:50:51 Speaker_02
You might as well just wash with water.
01:50:54 Speaker_03
Got it, a waste. Okay, but what I'm taking from this is that essentially the best things that you can do from a sustainability lens are also kind of just what's good for your laundry in general.
01:51:04 Speaker_03
It's going to be the stuff that, you know, gets stains out the best. It's the stuff that will keep your clothing nice for longer. And so it's all kind of, it's like what's good for your laundry is good for the environment, too, in that way.
01:51:17 Speaker_03
Yes, I would agree with that.
01:51:19 Speaker_04
And the detergent is the Swiss Army knife. Detergent for everything.
01:51:23 Speaker_02
Use it for everything.
01:51:24 Speaker_04
Cold water.
01:51:34 Speaker_05
Andrea, this has been great. Thank you so much for coming. Before we go, we've got one more question, and we're going to ask this of all of our guests. What's one thing that you've recently bought that you really love?
01:51:46 Speaker_02
It has nothing to do with laundry, but I... She has other interests, guys.
01:51:54 Speaker_03
Oh, you're breaking my heart.
01:51:55 Speaker_02
I bought my husband the Grill Rescue grill brush that's recommended by our kitchen team. And he really loves this grill brush. And because he loves it, I would say it's the coolest thing I've bought recently. Aww, that's really nice.
01:52:10 Speaker_02
If you like it, I love it.
01:52:13 Speaker_04
Andrea, thank you so much. This has been so great.
01:52:16 Speaker_02
Thank you so much for having me.
01:52:24 Speaker_04
Guys, I feel like there are so many takeaways from this segment. I, for one, am going to go on the hunt for some powdered detergent.
01:52:33 Speaker_03
Oh, yeah. I actually ordered some on Amazon.com because that's where... Can you repeat that? Amazon.com. I think it's the only place that I could find it. I thought they only sold books.
01:52:45 Speaker_04
No, that's great. OK, cool, cool.
01:52:48 Speaker_05
I think my takeaway was washing cold because apparently I've been baking my cat's puke directly into my bedding.
01:52:54 Speaker_03
Yeah, I'm going to win that argument with my husband because he's been doing that, so that's a win.
01:52:59 Speaker_03
But to that point, I also think, like, you don't need to hunt around for, like, eco-friendly detergents necessarily if you're trying to be more, you know, earth-friendly.
01:53:07 Speaker_03
It's really just, like, energy is kind of the biggest environmental impact, so washing in cold also solves that.
01:53:13 Speaker_05
Totally. And then, like, you don't have to go out and hunt for a fancy pretreatment like your laundry detergent that you already have will do just fine. Use what you got.
01:53:21 Speaker_03
Wash cold.
01:53:22 Speaker_05
Yeah.
01:53:22 Speaker_03
That's right. I think that like you really can just get like two detergents essentially, a liquid and a powder, and you're probably like 95 percent of the way there. Perfect. Andrea was a star. She was amazing.
01:53:34 Speaker_05
I really hope that I can apply this, you know, when I have a machine in my apartment again.
01:53:39 Speaker_03
I really hope I can get the dirt stains out of my kids' clothes finally.
01:53:42 Speaker_04
Godspeed, my friends. And that's it for us this week. If you want to find out more about Wirecutter's coverage on laundry detergent or snag the products we recommended today, go to NYTimes.com slash Wirecutter or find a link in the show notes.
01:54:03 Speaker_05
So if you like The Wirecutter Show, which we all hope you do, right, I think people are actually going to listen to this.
01:54:09 Speaker_03
I hope so. I hope people are listening right now. And if they're listening, maybe they'll follow us.
01:54:14 Speaker_05
Yeah. Or leave a review, a hopefully nice one. We'll always read the reviews. Even if it's mean, I guess.
01:54:18 Speaker_04
It also helps other people find the show. Yeah. For sure.
01:54:21 Speaker_05
Yeah. Thank you for listening either way.
01:54:26 Speaker_04
The Wirecutter Show is executive produced by Rosie Guerin and produced by Abigail Kuehl. Editing by Abigail Kuehl. Engineering support from Maddie Messiello and Nick Pittman. Today's episode was mixed by Daniel Ramirez.
01:54:41 Speaker_04
Original music by Dan Powell, Marion Lozano, Alicia Baitup, and Diane Wong. Wirecutter's deputy publisher and interim general manager is Cliff Levy. Ben Fruman is Wirecutter's Editor-in-Chief.
01:54:55 Speaker_04
Special thanks to Anil Chitrappu, Paula Schumann, Nina Lassam, Somi Hubbard, Jen Poyant, Jeffrey Miranda, Sam Dolnik, Julia Bush, and Katie Quinn. Can we rotate who scats each week? Oh my God. Rosie, you won.