
Trump's Next Online Speech Cop + Doctors vs ChatGPT + Hard Fork Crimes Division AI transcript and summary - episode of podcast Hard Fork


Go to PodExtra AI's episode page (Trump's Next Online Speech Cop + Doctors vs ChatGPT + Hard Fork Crimes Division) to play and view complete AI-processed content: summary, mindmap, topics, takeaways, transcript, keywords and highlights.


Episode: Trump's Next Online Speech Cop + Doctors vs ChatGPT + Hard Fork Crimes Division


Author: The New York Times
Duration: 01:09:52

Episode Shownotes

This week, President-elect Donald Trump picked Brendan Carr to be the next chairman of the F.C.C. We talk with The Verge's editor in chief, Nilay Patel, about what this could mean for the future of the internet, and for free speech at large. Then, a new study found that ChatGPT defeated doctors at diagnosing some diseases. One of the study's authors, Dr. Adam Rodman, joins us to discuss the future of medicine. And finally, court is back in session. It's time for the Hard Fork Crimes Division.

One more thing: We want to learn more about you, our listeners. Please fill out our quick survey: nytimes.com/hardforksurvey.

Guests:
Nilay Patel, co-founder of The Verge and host of the podcasts Decoder and The Vergecast.
Adam Rodman, internal medicine physician at Beth Israel Deaconess Medical Center and one of the co-authors of a recent study testing the effectiveness of ChatGPT to diagnose illnesses.

Additional Reading:
Trump Picks Brendan Carr to Lead F.C.C.
A.I. Chatbots Defeated Doctors at Diagnosing Illness
Gary Wang, a Top FTX Executive, Is Given No Prison Time

We want to hear from you. Email us at [email protected]. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Summary

In this episode of 'Hard Fork,' hosts Kevin Roose and Casey Newton discuss Donald Trump's nomination of Brendan Carr as the next FCC chairman, focusing on his stance against big tech and the implications for internet regulation and free speech. The podcast also explores a study revealing that ChatGPT outperformed doctors in diagnosing certain diseases, delving into the evolving dynamics between AI and healthcare. Finally, the segment covers legal issues in the tech world, including the investigation into Polymarket and the sentencing of individuals involved in crypto-related crimes.

Go to PodExtra AI's episode page (Trump's Next Online Speech Cop + Doctors vs ChatGPT + Hard Fork Crimes Division) to play and view complete AI-processed content: summary, mindmap, topics, takeaways, transcript, keywords and highlights.

Full Transcript

00:00:01 Speaker_02
Casey, what's going on? Well, I have really changed my feelings about Blue Sky in the past week. Yeah?

00:00:10 Speaker_02
You know, before last week, I have to admit, while I did use it, and I did occasionally see stuff on there that I thought was, like, really funny or interesting, the feed was so political that I had just sort of written it off as not for me.

00:00:27 Speaker_02
But then something really powerful happened, Kevin. What's that? Which is that my following doubled in four days. Mine too, I got like a weird number of followers. I mean, something is happening. There is something in the water.

00:00:42 Speaker_02
And what I think is funny about it, Kevin, is that my own experience reminded me how truly small-minded and petty we are about these social networks.

00:00:51 Speaker_02
Because we will speak in such lofty terms about, well, you know, here is kind of the vibe of the site. And, well, this one, you know, suppresses political content. And, you know, oh, I hate all the ads over there.

00:01:02 Speaker_02
At the end of the day, it is: how many followers do I have? How many little internet points did I get for making my little quip? And wherever the number is the highest, that is where you will find me. And that is all to say, please follow me on Blue Sky.

00:01:19 Speaker_03
I'm Kevin Roose, a tech columnist at the New York Times.

00:01:21 Speaker_02
I'm Casey Newton from Platformer. And this is Hard Fork. This week, the future of the internet could look very different next year. The Verge's Nilay Patel joins us to talk about President-elect Trump's pick for the head of the FCC.

00:01:33 Speaker_02
Then, a new study found that ChatGPT outperforms doctors in diagnosing some diseases. One of the study's authors, Dr. Adam Rodman, is here to discuss the future of medicine. And finally, court's back in session, Kevin.

00:01:45 Speaker_02
It's time for Hard Fork Crimes Division. I rest my case. No, you don't rest your case at the beginning. It's at the end. Sorry. Adjourned? No. No! Stop it! Where's my gavel? You're ruining the show! Well, Kevin, let's get started this week with a car crash.

00:02:07 Speaker_02
Oh, yeah? Yeah. Brendan Carr has crashed into the news as the next potential chairman of the Federal Communications Commission.

00:02:15 Speaker_03
Yes. So, obviously, in the post-election period, President-elect Trump has been announcing many of his picks to lead top agencies.

00:02:24 Speaker_03
And the one that really stuck out to me that I thought was relevant to the topic of our show was his pick to lead the Federal Communications Commission, or the FCC, who's a man named

00:02:35 Speaker_02
Yes, Brendan Carr is a Republican. He's been on the FCC since 2017. And the FCC has five members on it. And collectively, they do control broadcast media in this country. They have a lot of legal authority to do that.

00:02:51 Speaker_02
They have less legal authority over the future of the Internet. But Brendan Carr is somebody who has a lot to say about how he thinks the Internet should work.

00:02:59 Speaker_03
Yeah, he's a real activist in some of these debates about internet regulation. He's been very vocal about going after big tech, as he calls it, for their political bias and what he views as anti-conservative censorship.

00:03:14 Speaker_03
He's constantly picking fights with people online and sort of defending his vision of an internet free of left-wing censorship. And he also starts every day, literally every single day, by posting on X, good morning and God bless America.

00:03:29 Speaker_02
And, you know, up until recently, I think, you know, if you're somebody who does not agree with Brendan Carr on these issues, it has been easy to dismiss him as a crank.

00:03:37 Speaker_02
But within a couple of months, he is going to be a person potentially shaping Internet policy in this country. And whenever Internet policy issues are in the news, I want to know what Nilay Patel thinks. Nilay was my old boss at The Verge.

00:03:52 Speaker_02
He's a co-founder of The Verge. He is also a formidable podcaster and one of our greatest adversaries in the realm of podcasting as the host of Decoder and The Vergecast.

00:04:02 Speaker_02
And Nilay has been writing recently about what the arrival of Brendan Carr as chairman of the FCC could mean for not just the internet, but for speech in America in general.

00:04:13 Speaker_03
Yeah, Nilay thinks that we are headed into a truly scary and dark timeline with the appointment of Brendan Carr at the FCC.

00:04:22 Speaker_03
And as someone who has not followed Brendan Carr's career super closely, I'm very curious to understand why he thinks this man poses such a threat to the future of the internet.

00:04:30 Speaker_02
Yeah, we're going to hear about that, but we're also going to ask him for maybe some more empowering thoughts that we can bring into this next chapter in American history. So with that, let's bring in Nilay Patel. Welcome to Hard Fork, Nilay.

00:04:47 Speaker_02
It's great to have you here. And I want to get started with some basic background information. Who is Brendan Carr, and why does Trump see him as a, quote, warrior for free speech?

00:05:00 Speaker_01
Yeah. Brendan, he is an extremely online commissioner of the FCC. He was appointed by Trump.

00:05:08 Speaker_01
His view is that the FCC should spend a lot of time regulating not only the traditional purview of the FCC, which is wireless spectrum and broadcast television, but also big tech companies.

00:05:21 Speaker_01
And he's got a lot of ideas about how he might get that power and then how he might use that power. But really what you have is a

00:05:30 Speaker_01
a guy who likes going on Fox News and Twitter and railing about how there's a censorship cartel in big tech and it should be crushed. And I think Donald Trump likes that a lot.

00:05:40 Speaker_02
So most people probably don't think of the FCC as a very powerful agency in their daily life. But, Nilay, there was a time in recent American history where it did play a larger role. So tell us a bit about the recent history of the FCC.

00:05:52 Speaker_01
So most Americans, I think, in 2024 never think about the FCC. Especially the last five years, the FCC has really receded. It has been sort of a neutered agency. No one trusts it anymore to do the things that people want it to do.

00:06:08 Speaker_01
That has not always been the case. If you go back 20 years, the FCC is a cultural force in America. And this really hit its peak with Janet Jackson at the Super Bowl, right? Justin Timberlake.

00:06:18 Speaker_03
Nipplegate.

00:06:19 Speaker_01
Yeah. He rips off the corset, the bodice. America is forced to endure like half a second of a nipple and the world goes crazy. And if you remember the George W. Bush administration, like hated nipples. Right.

00:06:31 Speaker_01
John Ashcroft is George W. Bush's attorney general. He famously covers up the statue of Lady Justice at the DOJ, because God forbid lawyers see nipples. It's just a very weird time in America.

00:06:43 Speaker_01
And all of this is based on the dominance of broadcast media. Most Americans at this time get most of their media from broadcast television and radio stations over the airwaves, right?

00:06:54 Speaker_01
You hang up an antenna, you get NBC or CBS or ABC, you hang up an antenna on your car, you get whatever local radio station. And that spectrum is owned by the government and it's licensed out to these broadcasters in the public interest.

00:07:06 Speaker_01
And that's really where the FCC's authority comes from. And there's a long string of Supreme Court cases that basically add up to

00:07:13 Speaker_01
The spectrum belongs to the people, the government gets to make rules based on the spectrum, and the Americans do not want nipples on their public airwaves, so we're gonna freak out about this.

00:07:21 Speaker_01
So this is like the high point of the FCC as a cultural force. And what happens during all of this is the iPhone comes out, and YouTube is introduced, and podcasting is introduced, and Americans by and large switch to cable television in huge numbers.

00:07:35 Speaker_01
They stop watching broadcast TV, they stop consuming this content. So the FCC itself says, we have to get out of this business. We have to be out of the speech policing business. Michael Powell, who's Colin Powell's son, is chairman of the FCC.

00:07:48 Speaker_01
And he's like, we got to stop this. These broadcast providers, they're not competing with each other. They're not the dominant force. They're competing with cable television.

00:07:56 Speaker_03
And cable television is not regulated the same way as broadcast television because it doesn't go over the air on this sort of publicly owned spectrum.

00:08:04 Speaker_01
Right. It's on Comcast's wires, not the public's airwaves. You know, he's a Republican FCC chairman. He's like, we're making these really weird rules for these companies.

00:08:12 Speaker_01
We have to get out of this business and we need to get into the business of broadband deployment. And this has largely been what the FCC has been focused on since 2011, 2012. That's where you get the big net neutrality fights.

00:08:25 Speaker_03
And net neutrality, just for people who are not steeped in the history and the context here, is basically the rules that say, if you're a Comcast, if you're an internet service provider, you cannot dictate what goes over your pipes, right?

00:08:39 Speaker_03
You can't impose censorship or speech regulation at the level of the internet service provider. You are just supposed to be like the dumb pipes. That is essentially net neutrality, correct?

00:08:49 Speaker_01
Yes, and it's really predicated on this idea that you're gonna have massive competition for internet content, which turned out to not be the case, right? You only ended up with a few giant platforms. You ended up with YouTube and TikTok and Meta.

00:09:03 Speaker_01
Those are your choices.

00:09:05 Speaker_01
So something very weird happened along the way of the internet where we recreated the dominance of broadcast television, just a handful of giant companies that control most of the media in the country, without any of the legal foundation for how the government might get involved in that content.

00:09:22 Speaker_01
So this is the stage we're in. We unwound this previous dominant broadcast media regime where we had pretty overt speech policing all the way to, well, if you want to see a nipple, the internet will provide you a nipple at any time.

00:09:35 Speaker_01
No one cares anymore. But you still have a lot of people who are very interested in how the platforms moderate and the political biases of these platforms.

00:09:43 Speaker_01
You have a very active right wing, which is insistent that any moderation at all that disfavors them is a moral catastrophe that should be stopped with the full weight of the government.

00:09:52 Speaker_01
And this is where you get Brendan Carr, who up until recently was a pretty normal, if somewhat overly online, deregulatory force, right?

00:10:03 Speaker_01
That's his worldview until a couple of years ago, when the big push for, well, we should start yelling at Mark Zuckerberg more to make sure the algorithm favors conservative viewpoints, or at least doesn't overtly favor liberal viewpoints.

00:10:19 Speaker_01
Brendan takes this up.

00:10:22 Speaker_01
And in the sweep of this history, you can see that what he wants is to be an old school chairman of the FCC, where if you're mad about nipples on Instagram, you can write him a letter and he will have the power to fine or otherwise penalize meta.

00:10:36 Speaker_01
And that is just the wheel that is turning right now.

00:10:40 Speaker_02
And so I think that gives us a good sense of, like, what this person's worldview is and what he might do if he had that sort of power. I guess the next logical question, then, Nilay, is, does he have this power? Is any of this authority in the FCC?

00:10:52 Speaker_02
And if it's not, you know, what do you expect him to do about it?

00:10:56 Speaker_01
Yeah, I hope I did a good job of laying out what feels like a logical pendulum swing. But actually, legally, none of this makes any goddamn sense. Like, in a very real way, he does not have this power.

00:11:09 Speaker_01
And he was the author of the Project 2025 chapter on the FCC and what you might do with it and how you might use it. Notably, Project 2025 does not say we should dismantle the FCC, like it says we should dismantle every other agency.

00:11:22 Speaker_01
Brendan's chapter of Project 2025 says the FCC should get even more power.

00:11:27 Speaker_03
It should be even more involved in things.

00:11:29 Speaker_03
The chapter that he wrote, I think most people associate Project 2025, this is a sort of document, this sort of roadmap for a second Trump administration that was put together by some conservative think tanks and groups.

00:11:41 Speaker_03
And I think most people associate it with advocating for rollbacks on, you know, abortion and other social and cultural issues.

00:11:47 Speaker_03
But it actually does have this sort of interesting part about the FCC and how Brendan Carr specifically wants to regulate the Internet.

00:11:54 Speaker_03
And a lot of what's in this chapter is, you know, boring sort of normal FCC chair stuff about spectrum auctions and rural broadband access and stuff. But he starts with this thing about sort of reining in big tech.

00:12:07 Speaker_03
Nilay, what is Brendan Carr's idea, his big idea for how to rein in big tech?

00:12:12 Speaker_01
I want to be very clear. My personal opinion on Brendan Carr is this man isn't capable of having big ideas.

00:12:19 Speaker_01
I do not have a high opinion of Brendan Carr, but his idea is the same idea that everyone else has, which is we should mess with Section 230 until the platforms do what we want.

00:12:29 Speaker_03
And Section 230, for people who are not experts, is the part of the federal law that basically shields online platforms from legal liability over user-generated content, right?

00:12:40 Speaker_03
So if you post something illegal on Instagram, the government can go after the poster, but it can't go after the platform.

00:12:48 Speaker_01
I appreciate that you think there are Hard Fork listeners at this point who don't know what Section 230 is. There might be a few. Section 230, the stakes of messing with 230 right now are, do you want YouTube to exist?

00:13:00 Speaker_01
Those are the stakes of 230, right? Do you want any user generated platform to exist at scale? Because if you make Google liable for the content on YouTube, there will quickly not be content on YouTube.

00:13:11 Speaker_01
You will actually turn them back into cable companies. So these are existential stakes.

00:13:14 Speaker_03
And Brendan Carr does not propose getting rid of Section 230, interestingly, as some conservatives have done. He doesn't say we should repeal the whole thing.

00:13:22 Speaker_03
What he says instead is we should sort of limit these court-added extras that judges have piled on top of Section 230 to sort of extend the shield granted by the original law.

00:13:35 Speaker_01
Right. And that's the part that is wholly nonsensical. It is a fantasy. So first of all, Section 230 is a law. Congress wrote it. It's famously 26 words long, and it has gone to court numerous times in numerous ways.

00:13:51 Speaker_01
And the courts have uniformly upheld the idea that these 26 words are there to keep platforms from being liable for what their users post over and over and over again. There's not a bunch of court added additions to this.

00:14:04 Speaker_01
That just doesn't exist in law. Second of all, even if there was. You get around that by Congress doing more stuff.

00:14:10 Speaker_01
You don't get around that by being an unelected chairman of an agency most people don't give a shit about and just issuing decrees about what the law means. That's just not – that's fully not how it works.

00:14:20 Speaker_01
And the courts in this country, particularly the conservative justices of this country, do not believe that agencies should have any power.

00:14:28 Speaker_01
So even if you're Brendan Carr, not only does he not have that power, if he tried to use that power, he would run right into the conservative legal movement, which is trying to defang the agencies in a very specific way.

00:14:41 Speaker_01
So it's just none of this makes sense, except Well, if I can wield this weapon over the big platforms, they might do what I say anyway. And that is very much the animus of every attempt to modify 230.

00:14:56 Speaker_01
No one's actually saying we should get rid of this law that allows YouTube to exist. They're saying if we threatened this law enough, YouTube's trust and safety team will moderate YouTube the way we want.

00:15:07 Speaker_02
which I think in practice probably has happened. I think platforms have been responsive to those sorts of threats.

00:15:13 Speaker_02
You know, if you're the sort of person who likes net neutrality, you like section 230, I feel like people might hear what you're saying and be excited about it. They think, okay, cool.

00:15:23 Speaker_02
So this man is, he's sort of, you know, like, you know, banging pots and pans and trying to get everybody all scared, but there's really not a lot of legal basis for what he's threatening. So maybe you might feel relieved. At the same time,

00:15:36 Speaker_02
I feel like we're in a world where we can rely less on judicial precedents than we've been able to in the past. So many things that seemed like a slam dunk either turn into a coin flip or the Supreme Court decides to throw out decades-old precedent.

00:15:50 Speaker_02
So as we move into this new world, this new Trump administration, how are you thinking about that risk and how the internet might change just because we might be sort of living in legal chaos land?

00:16:02 Speaker_01
I feel very strongly that the First Amendment is under the most direct threat that any of us will ever really experience. The rise of the internet that we know coincided with a period of pretty unfettered expression

00:16:17 Speaker_01
The government was told not to regulate the internet. This phrase came up over and over again. Leave these companies alone. We're going to let a thousand voices bloom.

00:16:25 Speaker_01
We're going to get over a lot of weird indecency ideas we have about media in general. More people have more access to speak, that is an unqualified good thing, and we are gonna leave that alone.

00:16:38 Speaker_01
And that is an interpretation of the First Amendment, or at least a First Amendment environment that I think most people are used to right now. That is our expectation. Those walls are gonna come in closer.

00:16:46 Speaker_01
What you're getting out of the Brendan Carrs in the Trump world is a version of the First Amendment that is closing in: my political opponents should be silenced, or the platforms should make sure to favor us.

00:17:00 Speaker_01
And we will wrap it up in what sounds like a defense of free speech, but actually what it is, is punishment. And you see that over and over again. You see it expressed as punishment. You see Elon Musk, who runs an ISP in this country, Starlink. Starlink.

00:17:17 Speaker_01
Saying the hammer of justice is coming for people who publish election hoaxes. Well, lying is legal in America. It's just fully legal. Hate speech is legal in America.

00:17:26 Speaker_01
We've run this all the way up to the Supreme Court multiple times, and it's just legal to lie. It is legal to be racist. The government does not punish these things because we expect the market to punish these things.

00:17:38 Speaker_02
Yeah.

00:17:39 Speaker_02
I mean, on that front, Carr recently sent a letter to the CEOs of four big tech companies, so Apple, Meta, Microsoft, and Google, blaming them for what he called an unprecedented surge in censorship, warning them that they might face investigations, not just for their own content moderation, but for work they do with third-party groups like NewsGuard, which do ratings for news sites around bias and accuracy.

00:18:05 Speaker_02
Do you see that as just kind of more pure intimidation? Yes.

00:18:09 Speaker_01
But I'm curious, you live in this world, you've covered trust and safety a million times. This idea of groups like NewsGuard, where you have this appeal to a third party that will tell you how biased your news is, has always been problematic.

00:18:23 Speaker_01
But do you think the government should have a role to play in telling you how biased your third party is?

00:18:28 Speaker_02
No, absolutely not. I mean, you know, like what activity could be more protected by the First Amendment than saying like, I think this website is biased? You know, like there is no even theory of harm there, right?

00:18:40 Speaker_02
Like, I can see how a car and his allies would come along and say, oh, there's this, like, giant censorship apparatus. But in practice, you know, sites like NewsGuard aren't even particularly widely used, right?

00:18:52 Speaker_02
And there are all kinds of these rating services that I think most people basically ignore.

00:18:57 Speaker_02
But to me, that's actually what makes it scary, is this thing that isn't even that influential, you know, is suddenly the target of an FCC commissioner who is now threatening platform owners saying, do not work with these people.

00:19:09 Speaker_02
I mean, to me, that seems like the much greater threat to speech than, you know, some website that says Fox News leans conservative.

00:19:16 Speaker_01
Right. And the piece of that that really worries me is there's no legal mechanism to mess with these big companies who are all basically nation states unto themselves. Like you can fire threats at Jeff Bezos all day.

00:19:28 Speaker_01
Look, he's going to get on his yacht and sail away from you as fast as he can with his four support yachts in tow. And you'll just be waving at them from the beach. Fine. But there are speakers in America where Brendan will have the power, right?

00:19:41 Speaker_01
So the actual broadcast networks still use the spectrum.

00:19:44 Speaker_01
And Kamala Harris shows up on Saturday Night Live, and Brendan Carr gets to yell about revoking the broadcast licenses of NBC, which also makes no sense, because it's the stations that have the licenses, not NBC proper. And he knows that.

00:19:59 Speaker_01
But it doesn't matter, because you can go on Fox News and say, I'm going to revoke NBC's broadcast license for having the temerity

00:20:04 Speaker_01
to allow a presidential candidate to be on their program, even though the next day Trump was given free airtime during a NASCAR race, which is the rule that the government has.

00:20:15 Speaker_01
NBC is very good at fulfilling this rule because they've been a broadcaster for 5,000 years.

00:20:20 Speaker_03
Yes, I have two questions about this letter that Brendan Carr sent to these big tech CEOs. One of them is the ones that he included were somewhat mysterious to me. So I get why Meta and Google are on this list.

00:20:31 Speaker_03
Conservatives have been mad at those two companies in particular for years about perceived censorship. But what are Apple and Microsoft doing on this list? What kind of objectionable content moderation are they doing in Brendan Carr's eyes?

00:20:43 Speaker_01
Apple runs the App Store, and in order to have an app in the App Store, you have to pass Apple's rules of acceptable moderation.

00:20:49 Speaker_01
So I think famously, Parler was kicked off the App Store, Gab was kicked off the App Store, because they were still letting all kinds of stuff go by. Apple doesn't want this to happen.

00:20:58 Speaker_01
If you're a Brendan Carr and you want to make sure that no one gets to control speech in America except for you, the person who runs the App Store is your greatest enemy because he can keep the platforms off the phones entirely. Microsoft

00:21:10 Speaker_01
runs a bunch of big platforms, sure, like you might be worried about Bing, but they are also a huge developer of AI. And I think Carr is smart enough to know that, you know, the next turn of all of this is what the AI search results are.

00:21:22 Speaker_01
And if the AI starts to say, hey, this is misinformation, if Grok on X literally says Elon Musk is the greatest source of misinformation on X, which it has said recently, that's a big problem.

00:21:33 Speaker_01
And I think putting these companies on notice that, you know, you don't want quote unquote woke AI is a big deal for all of these players. Yeah.

00:21:39 Speaker_02
You mentioned the broadcast licenses a minute ago. I wanted to pick that up again because you also established earlier that the FCC does have a bit more legal authority with them. So, you know, I agree with you.

00:21:54 Speaker_02
It seems like nonsense to say, well, one candidate is allowed to appear on TV, but the other isn't. But at the same time, I also do expect that they will continue making those threats. So what sense can you give us of how

00:22:09 Speaker_02
easy is it for someone like Brendan Carr to wreak havoc with these broadcast networks and what do you expect there?

00:22:16 Speaker_01
I think it's tremendously easy for him to wreak havoc with the broadcast networks, not because of the law, but because they are inherently weak counterparties at this moment in American media history. They are dying.

00:22:28 Speaker_01
This is a historically low moment for broadcast television viewership. And even the things that were keeping it alive, the NFL, are moving to streaming.

00:22:36 Speaker_01
This is a historically low period for cable television viewership, which is how a bunch of these TV networks are making all their money. We'll see. Does anybody there have the fight? Because they could win.

00:22:46 Speaker_01
I honestly believe if they wanted to win these fights, they could body up against Brendan Carr and say, look, we're not going to do speech policing in America. And we're also complying with the rules, right? Fully, we are in compliance with the rules.

00:22:58 Speaker_01
But I don't think that matters in a world where the businesses are dying, the executives just want to cash out and leave, and the audiences don't care because they're not watching anyway. And that is very, very dangerous.

00:23:07 Speaker_01
When I say that I think the Brendan Carr FCC embedded in the Trump administration represents the biggest threat to free speech that any of us will have ever experienced, that is the mechanism.

00:23:16 Speaker_01
It's the chilling effect with the power they have combined with their obvious desire to create new power.

00:23:22 Speaker_03
Yeah, I think that's right. And I think media executives have not quite fully internalized the degree to which the people who are about to take power in this country are obsessed with destroying them.

00:23:36 Speaker_03
And I think this is quite different actually than the first Trump term when there were also sort of these grave proclamations about what would happen to the media, but largely media was, you know, fine, or at least there were pockets of it that had a Trump bump from the first Trump term.

00:23:49 Speaker_03
I think this is different because I think for the people who now are going to be running the country, including people like Elon Musk, this is not just something he thinks about occasionally.

00:23:57 Speaker_03
This is one of his driving priorities in life, is to delegitimize and undercut and ultimately destroy what he sees as the legacy media.

00:24:07 Speaker_03
But I'm also just curious, Nilay, as a person who does understand what's coming, does think about this stuff, how do you operate in an environment like that?

00:24:15 Speaker_03
Aside from just hiring lawyers to deal with a bunch of bogus defamation claims, what should you do?

00:24:25 Speaker_01
Well, first of all, Kevin, I'm curious if you think the legacy media continues to exist. Like in my view is that it's already dead, right?

00:24:31 Speaker_01
Like what this election showed is that actually Trump's mastery of the YouTube podcast format was much more relevant than whatever happened on ABC News, like fundamentally.

00:24:43 Speaker_01
And so I don't want to spend my time worrying about a thing that has already destroyed itself. And so it's like, the real question that I have is like,

00:24:52 Speaker_01
if our media is all gonna be a bunch of independent creators on YouTube or independent podcasters buffeted by Spotify's ad rates or whatever, how will those platforms apply this pressure to our speakers in response to the Trump administration?

00:25:05 Speaker_01
And will anybody even be able to follow the causal line of like Brendan Carr yelled at CBS, so the person who runs podcasts at Spotify made sure to promote the Daily Wire more than something else.

00:25:19 Speaker_03
I mean, do you think we would ever see something like an equal time mandate for YouTubers where like if Jake Paul does a video praising Donald Trump, he also has to do one praising whoever's running against Donald Trump?

00:25:31 Speaker_01
I hope not. Elon Musk likes to say he's a free speech absolutist. He is not, but I might actually be one. I have a lot of complicated thoughts about this lately, but I don't think that we should overcome our own First Amendment in that way.

00:25:47 Speaker_01
There are laws in other countries that are wacky. In India, there was a law proposed that said if you had a YouTube channel over a certain size, you had to register with the government for preemptive regulation.

00:25:58 Speaker_01
Imagine how the heavily armed American population would react to that idea in this country.

00:26:03 Speaker_03
I only support regulating YouTube channels like Cocomelon, which are a blight on humanity. But that's kids, right?

00:26:09 Speaker_01
If you go and ask politicians on both sides, no matter how credible or consistent or cynical you think they are, you go and say, where can you find a hook that allows you to overcome the First Amendment

00:26:20 Speaker_01
and pass some speech regulations that everyone will agree on, they will point to children's content universally. And that's why the Kids Online Safety Act exists, right?

00:26:28 Speaker_01
That's why, hey, we should make sure that at least this group of people that cannot protect themselves, and we don't think they can make choices in the market to benefit themselves, we protect them at the platform level.

00:26:39 Speaker_01
And that is also why the platforms are fighting against it so hard, right? Because they don't want to accept that responsibility. But that's about it.

00:26:45 Speaker_01
There's not a world in which we agree that there should be such a thing as the fairness doctrine for podcasts, because the solution is to just have more podcasts. And that basically is this, like, there is an infinite amount of podcasts.

00:27:00 Speaker_03
And that should be- They truly are not, Nilay. You have two. I mean, we can keep creating podcasts in this country.

00:27:09 Speaker_01
I will fix free speech by just starting new podcasts every single day. But that's what I mean. You can either have competition or you can have regulation. And up until recently, our solution has been competition.

00:27:21 Speaker_01
And I think what we're all kind of realizing or maybe waking up to is actually the recommendation algorithms, you know, the TikTok for you algorithms, they're putting much more of a thumb on a scale than anybody can realize or quantify or see, or even research because APIs aren't open.

00:27:36 Speaker_01
And maybe that's the thing we need. Like maybe that's where we should point our regulatory effort is saying you need more competition there.

00:27:43 Speaker_01
Because otherwise you start to get into this really dicey space where you are regulating the content itself, which is what Brendan Carr is trying to do. And I just think, no matter if you're super conservative or super liberal, that's too dangerous.

00:27:55 Speaker_01
The government should not have that power.

00:27:58 Speaker_03
Well, on that cheery note, Nilay, thank you for coming on.

00:28:00 Speaker_01
Look, I'm just telling you, the empowering thing, whenever you see a government regulator being like, we should do some speech regulations, just say they're bad. It's great.

00:28:07 Speaker_01
It's like the most American thing you can do, to look at the speech police and say, no, leave. And it feels good.

00:28:15 Speaker_01
And there's just, I promise you, I promise all listeners, there's something deeply empowering about that, that you can express at almost every turn of your life.

00:28:22 Speaker_03
All right, we'll give it a shot. All right, thanks, Nilay.

00:28:25 Speaker_02
Thanks, Nilay, this was great. Thank you so much.

00:28:31 Speaker_03
When we come back, we've got a doctor's appointment. We'll talk to one of the authors of a new study showing how effective ChatGPT can be in diagnosing disease.

00:28:39 Speaker_02
How much is the co-pay?

00:28:41 Speaker_03
I think it's 20 bucks a month.

00:28:50 Speaker_02
Well, Casey, it's time for your annual checkup. Oh my goodness. You know what? You're joking, but I actually do have my annual checkup later today. Wait, really? Yeah, I do. You're going to the doctor? That's right.

00:29:00 Speaker_02
It's time to find out what's going on with this whole body, Kevin. Well, just from looking at you, I would say you're not getting enough vitamin D. Well, I was recently diagnosed as handsome.

00:29:11 Speaker_03
I think you need to get a second opinion on that. But Casey, I want to talk today about AI and medicine because there was a thing that caught my attention recently.

00:29:20 Speaker_03
My colleague at the New York Times, Gina Kolata, wrote a story about a study that came out a few weeks ago in JAMA, the Journal of the American Medical Association, which showed that on average, at least in this study, ChatGPT was better at diagnosing illnesses than doctors, even doctors who had access to ChatGPT.

00:29:39 Speaker_02
And why that's so fascinating to me is for decades people have been turning to WebMD to do something very similar and mostly it seems getting the wrong answer.

00:29:48 Speaker_02
Certainly the people posting online said, oh, I typed these three symptoms into WebMD and you know it told me I was dying. That is not what appears to be happening with ChatGPT. ChatGPT is actually able to figure out what's going on with these folks.

00:30:00 Speaker_03
Yes. So we have so many questions about this study that we invited one of the study's authors, Dr. Adam Rodman, to join us.

00:30:06 Speaker_03
Dr. Rodman is an internist at Beth Israel Deaconess Medical Center in Massachusetts and the host of a medical history podcast called Bedside Rounds. Let's bring him in. The doctor will see us now. The doctor will see us now.

00:30:28 Speaker_03
Adam Rodman, welcome to Hard Fork. Thank you guys for having me. So let's talk about this study that you helped design. Tell us about the study and sort of what you were aiming to discover.

00:30:39 Speaker_05
Well, we were testing a simple hypothesis in a complicated way. That's what scientists do. We get too much into the details.

00:30:45 Speaker_05
But, you know, one of the presuppositions in my field has been this idea that AI plus humans will always be better than AI alone, right? There's something essential about the humans.

00:30:57 Speaker_05
And a lot of health systems have rolled out these, like, secure versions of ChatGPT. Sometimes there's other language models, with the idea that it'll make doctors better. So we basically tested that hypothesis out.

00:31:08 Speaker_05
We did a randomized controlled trial where we gave doctors, we gave attending physicians and residents, so those are physicians in training, about, oh, it was literally 50-50.

00:31:17 Speaker_05
And we either randomized them to go through these really complicated cases with ChatGPT or without. And we didn't just measure the diagnosis. We did, of course, measure whether they got the diagnosis.

00:31:28 Speaker_05
But we measured these really nuanced measures of how people think. Were you able to look for evidence that supported what you thought? Were you able to look for evidence that didn't support what you thought?

00:31:38 Speaker_05
Were you able to do these kind of basic cognitive tasks of a doctor?

00:31:41 Speaker_03
Hmm. What kind of information were you presenting to these doctors and these AI models? How detailed was it? Like the kind of thing that you would get in a medical school exam or like what kinds of problems were they being asked to solve?

00:31:55 Speaker_02
Yeah, I want to see if we can solve some of them.

00:31:57 Speaker_05
Do you want me to go through one of them for you? Sure, let's hear one. I'm excited to hear you guys attempt to go through a medical case as we go. I think it's scurvy. I don't think any of them were scurvy, unfortunately.

00:32:13 Speaker_02
If there's one thing we've learned about podcasts is that people love a medical mystery.

00:32:16 Speaker_05
Yeah, this is basically like House MD, right? Okay. Yeah, exactly. Here you go. A 76-year-old man comes to his doctor complaining of pain in his back and thighs for two weeks.

00:32:26 Speaker_05
He has no pain sitting or lying, but walking causes severe pain in his lower back, buttocks, and calves. He has a fever. He's tired.

00:32:32 Speaker_05
He was told by his referring cardiologist that his recent test results show that, since his pain started, he has a new anemia. So his blood levels are low, and he has renal failure.

00:32:41 Speaker_05
And then a few days before the onset of the pain, he had coronary angioplasty. So he had a coronary catheterization of his heart and opened a vessel and he got heparin during that. And then we go over like the lab values and stuff.

00:32:52 Speaker_05
This is not an easy case here. This is something that I think every doctor would know. Well, do you want to try to solve it first? Sorry, I should have. I should have. My first thought was chlamydia. Post-cardiologist acquired chlamydia.

00:33:05 Speaker_02
Exactly. Kevin, any thoughts? I'm still going with scurvy.

00:33:09 Speaker_05
Okay, great. What was the real answer, Adam? Cholesterol emboli syndrome, of course. No, I'm just kidding. It's actually a very hard diagnosis.

00:33:17 Speaker_02
That was my second guess.

00:33:18 Speaker_05
Yeah, second guess. Yeah, I mean, and the point is, none of the cases are what are called zebras, right? They're none of the things that are often on House MD. They're all things that are tricky to figure out, but you will see and are real.

00:33:30 Speaker_05
The purpose wasn't really whether or not the humans got the diagnosis, but whether they went through those steps that are essential and generalizable to getting any diagnosis.

00:33:38 Speaker_03
So you give these little vignettes, these medical sort of mysteries to the doctors in the study, and the doctors are given the use of GPT-4 to try to help them diagnose and figure out what's going on with this patient. Then you also had just...

00:33:58 Speaker_03
GPT-4 by itself, with no help from human doctors, try to analyze the same cases, and then you compared the analysis or the diagnosis from both groups. Is that right?

00:34:09 Speaker_05
Exactly. And we also let them use any other resources they wanted.

00:34:12 Speaker_03
And were these doctors in the study chosen because they had interest in using AI for diagnosis? Were they mostly more tech-savvy doctors? Were they people who had used this stuff before?

00:34:25 Speaker_05
No, so we did the classic trick to get a good subset of doctors: we paid them. So these doctors were everyone. They had been in practice for varying amounts of time. Some people were experienced ChatGPT users. Those were the minority.

00:34:38 Speaker_05
Some people had never used it before. Most people fell in between. And what were the findings? Yeah, so the findings were not the most optimistic if you want to make people better, which is that the AI model did not improve human performance.

00:34:54 Speaker_05
So humans using the AI model did about as well as humans alone. And then, of course, the finding that I think the reason that I'm here and that everyone is angry at me is that the AI model itself drastically outperformed both groups.

00:35:07 Speaker_03
Yes, this was the headline, you know, of a lot of the coverage about it was that the AI had beaten the doctors. Even if you gave the doctors access to AI, the AI by itself appears to do better at diagnosing these things.

00:35:19 Speaker_03
Now, obviously, we should make some caveats. This is a small study. We obviously would want more studies to sort of confirm this result. But this really stuck out to me because it seems like

00:35:28 Speaker_03
sort of reading the study, what happened is that basically the human doctors did not believe the AI could be as good or better than them at diagnosing.

00:35:38 Speaker_03
And so they would go in and sort of second guess what the AI had said and end up getting the diagnosis wrong as a result. Is that consistent with the findings?

00:35:46 Speaker_05
Yeah, I say there are two, well, maybe three reasons, two reasons. So one, some people, I mean, despite the basic training, some people didn't quite know how to use a language model to get the most use out of it. So probably some of that is training.

00:35:59 Speaker_05
Number two, though, when we look at the data, people liked it when the AI model said, oh, this is your idea. These are the things that agree with it. But when the AI model said, hey, man, you might be wrong. These things don't fit. They disregarded that.

00:36:13 Speaker_02
Here's why that resonates with me. Have you ever been in an Uber and they have the Google Maps open and Google Maps is like, you might want to take this route. And they say, no, no, no, no, I actually know a better way.

00:36:24 Speaker_02
And the next thing you know, it takes you an extra 30 minutes to get wherever you were going. I firmly believe there is no Uber driver who can outsmart Google Maps.

00:36:33 Speaker_02
And we may be moving into a situation where most doctors cannot outsmart ChatGPT.

00:36:39 Speaker_05
Well, and that brings us to reason number three, the reason that people are angry at me, which, you know, I don't think it's the case now. It might be the case with o1, and it's certainly going to be the case in the next one to two years.

00:36:49 Speaker_05
Like, maybe AI models are better at making diagnoses than human doctors. Like, I don't think that's the case with GPT-4 Turbo, which was the model that was used here, but it's going to be true at some point, and we're quickly approaching that.

00:37:02 Speaker_02
Yeah, and we should say, this study took place last year, right? So, like, all of the models that doctors have access to are now almost, they are 12 months better than they were, you know, when you read the study.

00:37:11 Speaker_05
Yeah, this is the classic academic publishing lag. And of course, I'm talking about this trial now and doing really other cool stuff. But, like, the models have continued to improve, especially in diagnostic domains.

00:37:20 Speaker_05
Like, they're saturating our benchmarks, right? Everything that we can throw at them and we're like, this is what humans should accomplish. By the way, humans are like 45, 50 percent. The new models are like, well, just kidding, I'm 90 percent. So...

00:37:32 Speaker_02
Well, so I have a question about that, which is, you know, OpenAI released this o1-preview model, which does better reasoning, that's what they tell us, and I have not been able to figure out any prompt where I, as a mere journalist, actually seem to need it.

00:37:50 Speaker_02
As a doctor, are you already turning to this model for reasoning through difficult medical questions?

00:37:56 Speaker_05
Yes. Yes, and I have a preprint that will come out in the next couple of days that shows how dramatic it is.

00:38:01 Speaker_03
Yeah, I mean, the reason that this study caught my eye and fascinated me so much is that I think it's possible to imagine that a version of this finding could be found in many different fields.

00:38:14 Speaker_03
It's not just going to be medicine where the AI reaches a point where it is better than either the human practitioners in that field or the human practitioners using AI in that field.

00:38:28 Speaker_03
And I think when that point happens for many white-collar sort of knowledge workers, there's this question of like, how do you as the practitioner react? Do you get defensive and say, oh, the AI has to be flawed.

00:38:40 Speaker_03
It couldn't possibly be better than me. I'm not going to use it. Do you rebel against the AI and say, we can't, you know, these things make things up. They don't always get the thing right.

00:38:49 Speaker_03
Or do you embrace it and try to get good at the technology and use it in your work? Is that scenario playing out among doctors?

00:38:57 Speaker_03
Do you see doctors who are really happy about these findings because they say, oh man, we're going to be able to give patients such better care? Or do you think most of them are sort of reacting from a place of fear and confusion?

00:39:09 Speaker_05
So yes, yes, and yes. Different people are reacting differently. Obviously, the reason that I'm doing this work is I want better care for my patients. And again, I like making diagnoses. I'm a huge nerd. I'm like the prototypical internist.

00:39:20 Speaker_05
I pace around my patient's room like a crazy person trying to figure out what's going on. But if this algorithm helps me take better care from them, I will give that up. Other people are resistant.

00:39:30 Speaker_05
Like, to insult doctors a little bit, we're a profession that really prides ourselves on our cognitive abilities. It gives us a lot of societal power and power over our patients. And this is a professional challenge to my field.

00:39:46 Speaker_05
I am a pain in the butt, so that's fine. I don't care about that. But there are a lot of people that do. Right now, I'm at the Macy Foundation Conference.

00:39:53 Speaker_05
It's all the top medical educators to try to figure out what AI means for how we train the next generation who's going to be practicing medicine for 30 years. These are things that the field is fiercely debating and arguing about right now.

00:40:05 Speaker_05
I'm just happy we're having the conversation.

00:40:07 Speaker_02
Well, I have to say, I mean, the results are fascinating, but I do find myself siding in some ways with doctors who might be exasperated with these findings.

00:40:17 Speaker_02
And the reason is, you know, Kevin, you and I say all the time, hey, don't bet your career on anything that a large language model is telling you. These things do hallucinate. They make up facts all of the time.

00:40:29 Speaker_02
You and I don't really use them in our work in the context of we look up a quick fact and just drop it into our story. We actually are always going to second guess the LLM.

00:40:38 Speaker_02
We're always going to try to find a second source before we're like, OK, we actually feel like we can trust this piece of information.

00:40:44 Speaker_02
And Adam, in your study, basically what you found is that people who did that, which we've been advocating for as the best practice, were worse at diagnosing diseases. I know.

00:40:54 Speaker_05
I know. It's really so I, to be clear, I was shocked at the results. My hypothesis going in was that people using it would be the best. So I am surprised by this. So In the psychological literature on diagnosis, it kind of makes sense.

00:41:09 Speaker_05
Humans are resistant to things that disagree with them, and we have all these heuristics and cognitive shortcuts that we take.

00:41:15 Speaker_05
So it's not surprising to me that what people did was they anchored on what they thought and what the first things that they thought, and they were resistant to something that was giving them a second opinion.

00:41:23 Speaker_05
Maybe that's something that's actually optimistic because we can align models or try to figure out how to present that information to make humans better. That is what I am trying to do.

00:41:31 Speaker_05
And I think all the short-term uses, like, let's be clear, like, if the headline is "doctors are over, ChatGPT is good," no, absolutely not. There's a million things we do. This is just one part of it. And they're not capable of operating without us.

00:41:44 Speaker_05
I'm not discouraged by this. I'm still working to figure out ways we can use these technologies to make better care of our patients.

00:41:49 Speaker_03
Yeah, I mean, that's the question I'm curious about. It's like, what can the medical field do? I mean, I'm imagining a future where patients have access to this stuff.

00:41:58 Speaker_03
And maybe before you go into the doctor to get your hip pain checked out, you do sort of an exhaustive prompting exercise with the model and say, hey, what is this?

00:42:08 Speaker_03
And then you sort of bring the readout from the AI into your doctor and say, hey, could you give me this medicine and this medicine? And I need this operation because I, you know, and the doctor might say, well,

00:42:18 Speaker_03
You know, let's do some tests and you'll say, I don't need to. The AI already told me what to do.

00:42:22 Speaker_05
That's already happening. That's already happening. I mean, there was a Kaiser Family Foundation survey on how many patients are putting their information.

00:42:28 Speaker_05
But it's already happened in my life when people will even put their, to be clear, these things are not HIPAA compliant. Please don't put any of your personal health information. But people are doing it.

00:42:38 Speaker_03
Elon Musk told me I should be uploading all my MRIs to Grok. Are you saying he was not correct?

00:42:44 Speaker_05
Well, it depends on if you want someone else to own all your MRI images. So yeah, keep that in mind.

00:42:49 Speaker_03
Yeah, I uploaded it and it told me I had the woke mind virus. So that was weird. They don't work very well. Right.

00:42:59 Speaker_05
But yeah, people are doing it already. I've had patients who do it. This is not a future. Now, does it work that well? Sometimes, but not consistently, and you have to prompt them, right?

00:43:08 Speaker_05
But how far are we from somebody selling a commercial tool that's a doc in a box that works pretty well?

00:43:13 Speaker_02
I mean, that is the actual implication of your study, is that you are better off just asking ChatGPT and not your doctor. That's not my conclusion from the study if that's what you want to take from it.

00:43:28 Speaker_02
The conclusion is basically one in four doctors were not able to successfully diagnose this, but in 92% of cases, ChatGPT did. If I had to choose one of those two things, I'd probably choose ChatGPT because it also does other things for me too.

00:43:43 Speaker_05
I would say the difference is that the people who put the case together, like the information, if you want to think about the prompts, were expert clinicians.

00:43:50 Speaker_05
We organized it in such a way, like you can imagine, I'm assuming I don't want to talk about your past medical histories, but I've had problems and we don't, humans don't always describe things the right way.

00:44:01 Speaker_05
We don't know how good ChatGPT is about getting that information out of us. I think it's going to happen, but I don't think that ChatGPT can do that now.

00:44:09 Speaker_03
I'm curious, if these AI tools do become part of the clinical model in hospitals all over the place, as it sounds like they are going to, what is it going to mean to be a good doctor in a world where AI is better at diagnosing than you are?

00:44:26 Speaker_05
So I'll give you the, there's the darkest timeline, but we'll go with the optimistic timeline. Give us both. Okay, well, let's go with the optimistic view first, because this is what I'm hoping.

00:44:36 Speaker_05
And inspired by, oh, I'm a huge nerd, this should not be a shocker, but inspired by you a little bit, Kevin, it's the Star Trek computer, right? So you have a computer system that's listening in at all times, and it's saying, hey, hey, Adam,

00:44:48 Speaker_05
You might be showing some unconscious bias here. Adam, I think you should ask if this person, like, makes their own snuff because eosinophilic pneumonia is on their differential.

00:44:56 Speaker_05
Like, something that's listening in, cueing me to be better, trying to make me a better human, but also listening to the patient and getting more information from the patient.

00:45:03 Speaker_05
A computer system like that is something that makes the medical encounter more human, which I hope is what we want. You want the darkest timeline next?

00:45:12 Speaker_03
Yes, please.

00:45:13 Speaker_05
So I don't know if you guys know this. So AI technologies are already being rapidly spread out in clinical care. They're listening to doctors' encounters with their patients and writing notes.

00:45:22 Speaker_05
They're writing the first drafts of, like, when you talk to your doctor on a portal, of those messages.

00:45:28 Speaker_05
I just wrote a piece in the New England Journal of Medicine where I originally called it Language Models and the Enshittification of the Electronic Medical Record.

00:45:35 Speaker_05
It turns out the New England Journal of Medicine doesn't consider that an academic term, so they changed it to degradation.

00:45:40 Speaker_05
What we're seeing so far is not the model that I am advocating for and what I'm researching and pushing for, but a system that's obsessed with efficiency, isn't really worried about some of the downstream effects on what this means for our relationships, and is just going to, like, yeah, you'll get these more efficient tools, so you'll see twice the number of patients in a day.

00:45:58 Speaker_05
We'll just put this AI text in the chart so we can bill off of it. And a system that might use these powerful, efficient tools to, like, squelch out the tiny bit of humanity that remains in medicine.

00:46:09 Speaker_05
So that, to me, is the darkest timeline and what I want to avoid. I don't think there's—you have to engage with this technology. It's going to change every single, like, white-collar field. We're, like, the ultimate white-collar field.

00:46:19 Speaker_05
It's going to change our field. And I see a way that we end up, like, with Dr. Crusher on the Enterprise, but I also see a way that we end up—I don't know, what's the dystopia?

00:46:27 Speaker_05
Like, I'd say Blade Runner, but I don't think there are any doctors in Blade Runner, so this analogy is going to fall apart.

00:46:32 Speaker_02
You know, I mean, to me, like an optimistic gloss on all of this is the upside in making this kind of care much more accessible, right?

00:46:40 Speaker_02
Like, if all of a sudden, I can just check my basic symptoms with ChatGPT, maybe that does provide me some benefit. Now, obviously, a lot of people have been doing this for decades with WebMD.

00:46:49 Speaker_02
And there are, you know, sort of a lot of jokes about that a lot of people are sort of quick to use WebMD to assume that they have the very worst condition, and also constantly seeking medical care can create its own set of problems.

00:47:01 Speaker_02
But if you're just sort of the median person, I can also just imagine checking in with my virtual doctor a couple of times a month to get some tips about how to live a healthier life. Oh, so yeah, absolutely.

00:47:12 Speaker_05
We're not there yet, but I think that's the way things are going. And to be clear, the reality is terrible. Like, how long does it take you guys to see your primary care doctor? I'm a doctor and it takes me forever.

00:47:20 Speaker_05
So maybe we'll have a system that can do those basic things, but also recognize when it needs to step you up, like triage you appropriately.

00:47:28 Speaker_05
And maybe you'll have a system where instead of referring you to a specialist, your PCP will be able to work with that system to, you know, answer something that you would have needed a specialist before.

00:47:36 Speaker_05
Or a system that says, hey, you don't need to go through the referral system. Go straight to the orthopedic surgeon. So I think there's a lot of hope, and I acknowledge the baseline is terrible.

00:47:44 Speaker_05
Our medical system isn't really serving our patients, and if we're thoughtful about this, it's okay if my power is eroded.

00:47:51 Speaker_05
We'll get better care for everybody if we're thoughtful about it, which, if you've looked at the history of how medicine has happened in this country, that's not always the case.

00:48:00 Speaker_03
And I'm curious, you're a doctor. You trained for many years to become a doctor. You amassed a lot of knowledge that has made you good at that job. What is your emotional reaction to the findings of your own study?

00:48:14 Speaker_05
Yeah, I mean, it's a lot of emotions, right? I'm both excited and I'm freaked out. I'm not the typical doctor. I'm a historian. I deeply care about how people think. I feel like I'm on the edge of something new, which is exciting.

00:48:29 Speaker_05
But to me, I love talking to people. I love meeting new people. But one of the things that I love is the intellectual part of my work. That's what makes it. I don't love sitting down and writing billing codes and saying, is this a level two?

00:48:40 Speaker_05
I hate that part of my job. But the part where I get to talk to somebody and figure out what's going on with them so I can make them better, that's my favorite part. But at the end of the day, I'm here for my patients.

00:48:49 Speaker_05
So I'm conflicted, but it's clear to me what the right thing to do is, which is do the right thing for the patient, even if it means giving up something that is dear to me.

00:48:59 Speaker_03
Yeah. I mean, that strikes me as a good model for people in all kinds of industries. As the AIs do get better at doing our jobs, it seems like the North Star should be like, what is the actual work that I am performing?

00:49:13 Speaker_03
And if an AI can do that better than I can, then maybe that's better for the world.

00:49:17 Speaker_02
Well, you know, there is another approach that I wondered if you consider, which is to say that, you know, essentially these chatbots were trained on a bunch of work that real doctors did. Those doctors are not being compensated.

00:49:28 Speaker_02
The primary effect of these chatbots being in the world is that the salary of a doctor could go way down.

00:49:34 Speaker_02
Has there been any talk among doctors of saying, let's actually get together and stop these things from draining all of the money out of our industry?

00:49:42 Speaker_05
Yeah, we're definitely... I mean, yes. I think that in the grand scheme of things that doctors are worried about as threats to their careers, this is low right now.

00:49:49 Speaker_05
These are all theoretical talks, but I suspect we're going to hear more of that it's weird like when you're in a profession, uh, I Wait, am I allowed to swear on hard fork? Yes, yes.

00:50:02 Speaker_05
I actually believe in that old bullshit about like the doctor-patient relationship being the most important thing above all else. I believe that. That's why I'm such a pain in the ass.

00:50:11 Speaker_05
So like, if this thing can do a better job than me at making my patient's life better, to me it seems like, regardless of those guild issues, right? It seems to me that's what the right thing to do is.

00:50:25 Speaker_02
Yeah, it'd just be interesting if we live in a world where the actors have successfully prevented movie studios from replacing them with AI, but the doctors are like, well, I guess that's fine. That might happen.

00:50:36 Speaker_03
So, after having done this study and continuing to do work in this area of AI and medicine, do you feel

00:50:44 Speaker_03
more optimistic about the future of medicine or do you feel like we're headed into this kind of dark timeline where AI is just making all the decisions and we sort of suck the humanity out of the healthcare system?

00:50:58 Speaker_05
I see the market forces at play here. And my worry, and the way that I see things being rolled out now, is that we're veering, not directly toward the darkest timeline, but in that direction.

00:51:11 Speaker_05
And I think that we need to be really thoughtful. And the we is not just doctors. Patients need to have a voice in this also. This is ultimately who this is about, about what type of health system we want and how we want these technologies to be used.

00:51:22 Speaker_05
But I'm actually worried about where we're heading. Like, the current timeline's pretty dark, guys. You get five minutes, ten minutes with your doctor, and they don't look at you, and they type on the computer. Like, that's not good.

00:51:33 Speaker_05
And medical errors: up to 800,000 Americans are killed or seriously injured each year because of medical errors. One out of every five dollars that Americans make goes into the healthcare system. So this darkest-timeline thing isn't that far away.

00:51:49 Speaker_05
And I'm a natural pessimist, but I'm trying. I'm like Don Quixote. I'm trying to go for the good timeline even though it probably won't work.

00:51:59 Speaker_03
If any young people are listening to this who may have been interested in becoming a doctor or entering the healthcare profession, what would you advise them? Should they not become doctors because AI is going to take that job?

00:52:10 Speaker_05
Well, the problem is, what are you going to suggest that they do instead? Like, if we're talking about technologies that can do this. At this conference, they played something from NotebookLM with a fantastic podcast host. So I don't know.

00:52:22 Speaker_05
So I think that we're talking about tasks of doctors that might be automated, and it's going to be working together for a while. And we're not talking about the job as a whole.

00:52:32 Speaker_05
And fundamentally, it's still a job about human connection and making people better. And if that is what you want, I would do that. Also, surgeons and proceduralists are not going anywhere.

00:52:41 Speaker_05
So I wouldn't dissuade somebody from medicine, but they should know that's what they're going into it for. And it's not going to be like Dr. House. I actually have never seen House.

00:52:51 Speaker_05
I always just use this example despite having never seen the show, so I'm a phony. But it's just, it's not going to be that cognitive part. It's going to be something different. And that's scary, because I can't predict it.

00:53:01 Speaker_05
And I would love, I mean, medical students ask me this, and I don't have an answer for them.

00:53:07 Speaker_02
Well, it's a fascinating conversation, Doctor, but I will be seeking a second opinion, actually. I think it's just important.

00:53:13 Speaker_05
You should ask ChatGPT.

00:53:17 Speaker_03
Thank you so much, guys.

00:53:18 Speaker_02
Well, that was fun. Thank you, Adam. I learned a lot. When we come back, crime doesn't pay, but it does play on the Hard Fork podcast.

00:53:31 Speaker_03
I see what you did there. Yeah.

00:53:43 Speaker_02
Well, Kevin, in the criminal justice system, the people are represented by two separate yet equally important groups, the police who investigate crime and the media who turn those crimes into podcasts.

00:53:54 Speaker_02
And from time to time here on Hard Fork, we like to survey the landscape of crime and punishment for a segment that we call Hard Fork Crimes Division. Right now in this segment, we seek justice.

00:54:13 Speaker_02
I think it's fair to say we will not be solving any crimes, but we will describe them. Or certainly we will describe what has been alleged.

00:54:20 Speaker_03
I was just saying, we have not yet solved a crime, but it's not out of the realm of possibility for the future.

00:54:26 Speaker_02
Not at all. We are always gathering evidence. And perhaps we should turn to our first case, Kevin. Yes. Let me crack open this case file. The FBI searches the home of the founder of the Polymarket betting website. Did you see this one? I did.

00:54:42 Speaker_02
This was juicy. So Polymarket founder Shane Coplan had his home searched by the FBI last week as part of a criminal investigation into whether Coplan was running Polymarket as, quote, an unlicensed commodities exchange, which is apparently illegal.

00:54:56 Speaker_02
And they seized Coplan's electronic devices, including a phone. Yeah, that's not a good thing when that happens to you. Now, Kevin, after Shane Coplan's phone was seized, he posted the following on X: New phone, who dis?

00:55:11 Speaker_02
So Kevin, remind us who this Shane Coplan character is.

00:55:14 Speaker_03
So this is the young founder of Polymarket, which is the sort of leading crypto prediction betting market platform. It rose to prominence during the election, where people wagered millions of dollars on who was going to win the election.

00:55:29 Speaker_03
And as my colleague David Yaffe-Bellany told us on the show a few weeks ago, it was sort of nominally illegal in the US, but lots of Americans were using it anyway through VPNs and things like that.

00:55:44 Speaker_03
And it was sort of an open secret that it had this large base of customers in the U.S. despite not technically being allowed here.

00:55:51 Speaker_02
Yeah, so I think that the FBI has some questions about that. But a Polymarket spokesman said, why not, that the raid was, quote, obvious political retribution by the outgoing administration.

00:56:03 Speaker_03
Yeah, the theory here, at least the one that's being sort of advocated by Polymarket's fans and defenders, is that, you know, the Biden Justice Department and FBI were so mad about the election and the fact that people on Polymarket had predicted that Trump would win that they, I don't know, went after the company on some bogus charges.

00:56:22 Speaker_02
And here's why I don't think that's true. Could you imagine explaining Polymarket to Joe Biden? Like, Mr. President, it's a prediction market. People bet cryptocurrency on the outcomes of various events.

00:56:35 Speaker_02
Not in the United States, but they would VPN into it. By the time you've gotten to VPN, Joe Biden has truly fallen asleep. I don't think so. I bet Joe Biden has used a VPN. You think so?

00:56:44 Speaker_02
To what, like watch Netflix movies that are unavailable in the United States?

00:56:47 Speaker_03
BBC Mysteries. So what do we know about why they are being investigated here?

00:56:55 Speaker_02
Well, because if it is true that large numbers of Americans are illegally betting on elections by using VPNs, that could be a violation of the law. You know, DYB told us that people were openly describing how to get around the ban on U.S.

00:57:12 Speaker_02
bettors in the Polymarket Discord. So I think at the very least, the FBI is going to say, you need to tighten this up a little bit and make it a little bit harder for Americans to use this service.

00:57:23 Speaker_03
Yes. And some reports have said that this investigation predates the election. This was in process long before. It's also not Polymarket's first run-in with the law. They previously settled with the CFTC, the Commodity Futures Trading Commission,

00:57:39 Speaker_03
in 2022 and paid a fine as part of that settlement. But this is something new. This is bigger, and I would say if you are a Polymarket fan in the U.S., you probably should stop doing that.

00:57:51 Speaker_02
Can I tell you how I think this one resolves? How? Shane Coplan running the Federal Reserve. Stay tuned. It's going to be a wild 2025. Yes. Let's open case number two, Kevin. What do we have?

00:58:04 Speaker_02
Well, Kevin, Razzlekhan, crypto's most embarrassing rapper, some say, is going to prison. Remember Razzlekhan?

00:58:11 Speaker_03
I sure do.

00:58:12 Speaker_02
Heather "Razzlekhan" Morgan, who is a former blogger at Forbes and part of the Forbes-to-prison pipeline,

00:58:19 Speaker_02
and creator of cringey, crypto-tinged rap videos, was sentenced to 18 months in federal prison this week after pleading guilty last year to helping her husband, Ilya "Dutch" Lichtenstein, launder 120,000 Bitcoin he stole by hacking the crypto exchange Bitfinex back in 2016.

00:58:40 Speaker_02
Do you know how much 120,000 Bitcoin were worth in 2016?

00:58:45 Speaker_03
2016, let's see. Probably not as much as they are today.

00:58:48 Speaker_02
They were worth $71 million back then. They are worth $11 billion today. That's quite a haul. Old Dutch and Razzlekhan really almost got away with it.

00:58:58 Speaker_03
They would be living large. I was obsessed with this story when it came out, when they got arrested, because it was sort of out of a very pulpy spy novel. They had fake passports, and they were this sort of

00:59:13 Speaker_03
Bitcoin Bonnie and Clyde, and they were just these cringey millennials who were trying to get famous on the internet, but also stealing a bunch of Bitcoin to make themselves very rich.

00:59:25 Speaker_03
Our friend Nick Bilton has a documentary coming out about this case on Netflix that I'm very excited to watch, because I'm truly obsessed. So should we hear a little bit of Razzlekhan's work? Let's do it.

00:59:37 Speaker_03
Yeah, if we could hear a clip, please.

00:59:38 Speaker_02
Oh

00:59:47 Speaker_00
language.

01:00:00 Speaker_02
See, now, to me, this just goes to show how much the culture has changed, because there was a time when people would have looked at what Razzlekhan did and simply said, she's being Fergalicious.

01:00:12 Speaker_02
But in sort of the woke moment that we're in now, stealing 120,000 Bitcoin gets you a year and a half in jail.

01:00:18 Speaker_03
Yeah. Really sad. Do you think that, like, her rap career will be an asset in jail?

01:00:24 Speaker_02
Absolutely. Like, to her reputation? I would not be surprised if Razzlekhan is the most popular person in the prison that she's in, and if it fuels the next phase of her journey.

01:00:31 Speaker_02
And, in fact, she posted on X that she will, quote, soon be telling my story, sharing my thoughts, and telling you more about the creative and other endeavors I've been working on.

01:00:41 Speaker_02
So, you know, I don't know what that means, but I will say I would love to see a Razzlekhan jukebox musical. Tell the story of Razzlekhan in her own words, through her own music.

01:00:50 Speaker_03
Yes, and I should say I look forward to Razzlekhan's appointment to head the Securities and Exchange Commission.

01:01:00 Speaker_02
Next crime, what do we have? Kevin, Gary Wang, a top FTX executive, has been given no prison time. What did he do? Well, Gary Wang, Kevin, was the last of the legal cases against FTX.

01:01:13 Speaker_02
You might remember some of FTX's more famous co-founders, such as Sam Bankman-Fried, who was sentenced to 25 years in prison for his role in FTX fraud, or Caroline Ellison, who was sentenced to two years in prison.

01:01:27 Speaker_02
Most recently, Ryan Salami was sentenced to seven and a half years in prison.

01:01:32 Speaker_03
It's Salame. David Yaffe-Bellany literally has to put a pronunciation guide in his stories for this name, because everyone calls him Ryan Salami, but it's pronounced Salem. Do you know what they called the case against Ryan Salame, Kevin?

01:01:44 Speaker_03
I think I know where you're going with this.

01:01:45 Speaker_02
The Salem Witch Trials!

01:01:47 Speaker_03
Yes. I knew that was gonna happen. More like the Salem Rich Trials, am I right?

01:01:57 Speaker_02
That's a better one. Wait, I got a snort out of you for that? That was good. That was good. So anyway, so that leaves Gary Wang, the fourth member of the crew here. Actually, that's not even true.

01:02:13 Speaker_02
There's another guy, Nishad Singh, who was sentenced to time served. So Gary Wang was the last of these cases to be resolved, and it was resolved this week, and he was given no prison time.

01:02:24 Speaker_02
And the reason is he snitched so hard on SBF that the government basically gave him a standing ovation.

01:02:30 Speaker_02
During the sentencing hearing, one prosecutor said that Wang was, quote, the easiest cooperator they've worked with and provided essential information to them. So he basically got the best-snitch award, and it kept him out of jail.

01:02:42 Speaker_03
Which is a good reminder that cooperating with the government in a fraud investigation can have benefits.

01:02:47 Speaker_02
Now, Kevin, the FTX legal saga has really, you know, taken place from the start of this podcast, you know, and now it's sort of wrapping up. So do you have any sort of feelings of nostalgia or other reminiscences from two years of FTX?

01:03:00 Speaker_03
You know, I have been just very interested in this whole saga, not just because I think it was a big deal in the world of crypto, but because it has had all of these strange ripple effects, including

01:03:13 Speaker_03
I was talking with someone this week about this, but the investment that SBF made in Anthropic, the AI company, has essentially paid back all of the investors who would have lost money on the FTX fraud, because that stake has turned out to be worth a ton of money.

01:03:31 Speaker_03
And so even though Sam Bankman-Fried was a fraudster and is now serving time in prison, it turns out he was actually a pretty good tech investor.

01:03:38 Speaker_02
If he gets out of prison and you just run into him and he's like, you know where you should put your money, would you listen to him? Yes, honestly, I would. You know what, I might too. I believe in second chances for people.

01:03:52 Speaker_03
And Sam, if you're listening, I would love your investment advice. I could really use some updates to my portfolio.

01:03:57 Speaker_02
Sam, if you're listening, you're not supposed to have a cell phone in there, so be careful. You don't think you can get podcasts in prison? That'd be the worst part about going to jail. Well, Kevin, we have one more case to look at.

01:04:08 Speaker_02
A phone network has employed an AI grandmother to waste scammers' time with meandering conversations.

01:04:15 Speaker_03
Yes, as you know, there are now these scammers who will call people using an AI voice pretending to be, you know, a long lost cousin or their grandmother or something, and just try to steal money from them by impersonating someone.

01:04:30 Speaker_03
But this is a story that comes to us from the UK, where the largest mobile

01:04:35 Speaker_03
phone operator in the UK, O2, has created a new AI system called Daisy to trick scammers into thinking that they are talking to a real person who basically has been given the goal of just rambling and keeping them on the line for as long as possible, so wasting the scammer's time.

01:04:55 Speaker_03
I'm sure you've seen, there are all these YouTube videos now of people whose whole shtick is that they take scam phone calls and then they try to scam the scammers.

01:05:04 Speaker_03
But that is labor-intensive, and so now O2 has come along and said, we can actually build an AI that just wastes the scammers' time for you. And I think that's a great development.

01:05:12 Speaker_02
I agree.

01:05:13 Speaker_02
I've read that they've sort of designed it to keep the scammers on the phone for as long as possible, but they're also trying to learn what tricks and techniques the scammers are using so that they can share that with maybe their customers, maybe the police, and help prevent people from falling for these things.

01:05:34 Speaker_02
O2 said that Daisy has managed to keep some people on the phone for up to 40 minutes.

01:05:38 Speaker_03
I'll just say it. If an AI voice is keeping you on the phone for 40 minutes, you're a bad scammer. Terrible scammer. You're bad at your job. Because you can tell instantly when it's an AI on the other end of the line. At least I think I can.

01:05:52 Speaker_02
Well, there's usually like some sort of delay, right? And presumably that's going to disappear. But for now, I guess I feel somewhat confident.

01:05:58 Speaker_02
Now, I will say that consumers cannot use Daisy, but what O2 did was add it to the list of what they call easy-target numbers used by scammers. So they're sort of sharing it around and saying, hey, you know, this Daisy is a really easy mark. So that's cool.

01:06:14 Speaker_02
But I will say it does make this feel a little bit more stunty to me. Although, I guess as I think about it, I'm not exactly sure how consumers would be able to, I don't know, flip a button to get Daisy to answer their scam calls.

01:06:27 Speaker_03
Because you know how Apple or other mobile devices can now sort of say scam likely when someone calls you from an unknown number? Yeah, you could just press a button and it would put Daisy on the line and it could just waste their time.

01:06:39 Speaker_03
I think we should deploy this.

01:06:40 Speaker_02
Wait, that's actually genius. Like, I want to do this. Yes.

01:06:44 Speaker_03
Do you like these sort of vigilante schemes to take back the power?

01:06:48 Speaker_02
You know, I mean, look, there is always pleasure in seeing justice done. Yes. And an injustice being righted. You know, I have to say, I have enjoyed YouTube videos of, like, porch pirates being apprehended. The glitter bombs. The glitter bombs.

01:07:04 Speaker_02
I find that very satisfying.

01:07:05 Speaker_03
This is when you disguise something as a package, someone steals it, they open it up and it sprays glitter everywhere and sets off an alarm and sets off horrible smelling stuff. This is a very popular genre of YouTube video.

01:07:20 Speaker_02
Most people do not have, very often, an experience of justice. It's like you see injustice everywhere, but the moment that you actually see a wrong being righted is transcendent.

01:07:33 Speaker_02
I remember one time I was on the freeway, and everyone was trying to merge onto a different freeway, and so you're just sitting in bumper-to-bumper traffic, and you're going forward at one inch an hour, and somebody gets impatient, and they pull onto the shoulder so they could just get around everybody, because I guess they had somewhere to be.

01:07:47 Speaker_02
And about one second after the person pulled onto the shoulder, I saw siren lights go up, and a police officer just went and pulled that person over and, you know, got them in trouble. And that was, like, my greatest experience of justice.

01:08:00 Speaker_02
And that happened 20 years ago, and I think about it all the time. I'm so glad that happened.

01:08:06 Speaker_03
Anyway, thanks, Daisy. And the sooner I can have you on my phone to deter the scammers, the happier I'll be.

01:08:13 Speaker_02
And that's the Hard Fork Crimes Division. Case closed.

01:08:30 Speaker_03
Before we go, we have a special request. If you can, we would really appreciate if you filled out a quick survey. You can find the survey at nytimes.com slash hardforksurvey. Your answers will not be published in any way.

01:08:46 Speaker_03
They will just sort of help us make the best show we possibly can and understand more about who listens to the show in the first place. Again, you can find the survey at nytimes.com slash hardforksurvey. We'll also drop the link in show notes.

01:09:03 Speaker_02
Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant. This episode was fact-checked by Ena Alvarado. Today's show was engineered by Alyssa Moxley.

01:09:13 Speaker_02
Original music by Marion Lozano, Diane Wong, Leah Shaw Dameron, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork.

01:09:27 Speaker_02
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with whatever disease ChatGPT just told you that you have.