So, one of my best friends from high school and my boyfriend are very... Wait, you have a boyfriend? I don't know if I've mentioned this. It's incredibly hot. So, they're similar in certain ways.
Like, their personalities are, in ways, it's very pleasing to me. It's like, oh, this person I've known my entire life. And my boyfriend, you know, they're just birds of a feather, as Billie Eilish once said.
And up until now, this has always been great, but then I found out that they were similar in a new way, which is they both love going to the naked hot springs. Have you heard of this hot springs? Is it, which one? Well, there's one called Orr, O-R-R.
Yes. And there's apparently like three or so of them in Northern California that are, you know, within a few hours' drive of San Francisco. Yes.
And so, you know, in the spirit of adventurousness and leaning into new experiences, I decided, okay, I will go to the hot springs. And so we went to the hot springs and I learned two critical things about the hot springs, Kevin.
One is that there is no internet at the hot springs. There's no cell reception, there's nothing. If there's an emergency, you're calling from a landline, okay?
Number two, hot springs water smells like rotten eggs, which I knew, but I'd forgotten, and so the experience is, you don't have internet, and you're sitting in the rotten egg water,
And everyone around you is telling you that you're having a great time, and that this is a very relaxing thing that you're doing, and it's very good for you. And so I spent a full 48 hours getting gaslit, and then came home.
And this is what I hate to admit, I really did not like not having the internet access, but I did spend 48 hours staring at nature and going on hikes and everything, and I actually thought it was great for me.
Yes, I think there is something to this idea of like, go stare at a tree and it doesn't seem like it'll be good for you, but it'll be good for you. It's sort of like, people tell you to meditate.
You're just like, wait, am I supposed to clear my mind for 10 or 20 minutes? How's that supposed to fix anything? But then you do it and you feel better. Staring at trees is mystical in the same way.
Well, I'm very excited for your next visit to the nude hot springs where you will bring a Starlink terminal with you so that you can watch TikToks in the egg water. I'm Kevin Roose, I'm a tech columnist at the New York Times.
I'm Casey Newton from Platformer. And this is Hard Fork. This week, Jeff Bezos and the other tech billionaires hedge their bets in the election. What are they so afraid of?
Then, former OpenAI readiness chief Miles Brundage stops by to tell us how his old company is doing at getting ready for superintelligence and whether we should all keep saving for retirement.
And finally, The Times' David Yaffe-Bellany joins us to discuss the rise of Polymarket and whether prediction markets can tell us who is going to win the election. And for some reason, crypto's involved.
You know, I bet that would happen.
There's a second big moment that's coming up, and that is, of course, Election Day. Early voting is already underway across the country.
And, Kevin, everywhere I go, people are asking the same thing, which is, if I had a billion dollars, how would I try to influence the results of this election? I know you've been wondering it.
Yeah, it's something I think about a lot as a person who is probably going to have a billion dollars someday.
It's important to get ahead of these questions. So last week, we talked a lot about how Elon Musk is approaching this challenge.
This week, we want to talk about what some of the other most prominent tech billionaires are doing, or as has more often been the case, not doing. Yes. And Casey, why do we want to talk about this? Why is this important?
Well, Kevin, we live in a country where there is unchecked influence of money on politics. And I would say that's basically the explanation. Why do you want to talk about it?
Well, so look, we are not just talking about a steel magnate or someone who made their money, you know, in logging or something like donating to a presidential campaign.
These are people with not only real money and influence, but direct control over some of the most powerful channels for people getting their information about the election.
That's right. And so in the event that the aftermath of the election is tumultuous and that there are many competing claims on both sides, it will be important how those billionaires choose to moderate the content on those platforms.
What will they allow? What will they not allow? What will they crack down on? What will they turn a blind eye to? These really will be, I think, some of the most fundamental questions about the aftermath of the election that's about to happen. Yes.
So with that in mind, let's talk about the story that was on everybody's minds this week, which is what happened at the Washington Post. So the Post, of course, is owned by Amazon founder Jeff Bezos.
And Kevin, do you want to walk us through what happened at the Post over the last week?
Yeah. So last Friday, the editor of the opinion section of the Washington Post, David Shipley, announced in a meeting with the editorial board that going forward, the Washington Post would no longer make endorsements for presidential candidates.
This sort of reversed a decades-long policy of endorsing presidential candidates. It's not just the Washington Post.
Most major newspapers in this country make presidential candidate endorsements in election years, including the New York Times, whose editorial board endorsed Kamala Harris for president this year.
But basically, people started looking into what was going on behind the scenes at the Washington Post, and it eventually emerged that the editorial board of the Washington Post had written an endorsement of Kamala Harris that was supposed to run in the paper and were just sort of waiting for a final sign-off.
And it also emerged that this new policy of non-endorsement had come not from the opinion editor, not from anyone sort of involved in the day-to-day operation of the opinion section of the Washington Post, but from Jeff Bezos himself, who was basically making this new policy two weeks before the election to sort of stop the publication of this editorial.
So that obviously raises a lot of questions about why Jeff Bezos was pulling The Washington Post out of the endorsements game.
He then followed up with an opinion column of his own explaining his position on this, basically saying, you know, Americans don't trust the news media. He said that presidential endorsements basically don't tip the scales of an election.
Nobody decides who to vote for based on what The Washington Post editorial board says, and that what they actually do, these endorsements, is to create a, quote, perception of bias.
So that made a lot of people very angry, and people inside the paper were very upset. Carl Bernstein and Bob Woodward, the famous Watergate reporters, called the decision surprising and disappointing.
Lots of other current and former Washington Post honchos came out and said this is not what we should be doing. And the readers of the Washington Post and the subscribers to the Washington Post were also very upset.
NPR's David Folkenflik reported on Tuesday that more than 250,000 Washington Post subscribers had canceled their digital subscriptions. That is roughly 10% of the entire subscriber base of the Washington Post.
All right, so that's what happened. Let's talk a little bit now about the reaction and what we make of it.
So, yes, I actually didn't think there was that much in the Bezos op-ed that I disagreed with on a principles level.
I thought it was, you know, it's a reasonable position to take that newspapers, you know, which are seen as being polarized and biased by a lot of Americans, should sort of drift away from this kind of unsigned editorial model where there's just kind of the opinion of the paper.
I think that does create some confusion on the part of readers. I think the time to announce that policy is not two weeks before an election.
Right. And also, if that's really going to be your argument, shouldn't you just take it to its logical conclusion and get rid of your opinion writers? Right?
It's like, if the argument is people sharing their opinions in the newspaper is causing readers to lose trust in us, why do you have opinion writers at your newspaper? Right. So there was something a little bit suspicious about that.
I'm not particularly invested in whether a newspaper publishes an endorsement or not, but I am always really interested in how do billionaires see the world?
What is the calculus of a billionaire trying to navigate through this very tumultuous period that is going to come? I think there are maybe four different ways of thinking about this editorial. The first is just to, like, take it on its own terms, which I know people will throw their hands up and say, you shouldn't do that. Like, you know, what he said in his response is not the real reason that he prevented the Post from publishing this editorial.
But I do think that there is an argument in here worth reckoning with, which is, well, why is it that the public has such low trust in the media in general?And can you restore that trust by sucking opinion out of the journalism?
So what do you think about that question?
Look, I think we clearly are at a moment where a lot of people mistrust the news media in this country.
And I think that we should be honest and reflective about that and ask whether there's anything we can do as journalists to help restore public trust and credibility in our occupation.
I don't know that stripping out all of the opinion from the journalism itself is the way to do that. I just don't know if that's effective. But I'm curious, what do you think?
He calls it a bid for trust that advertises the viewlessness of the news producer, and that it sort of arose in this country as a response to this two-party system where you would always have two sides sort of pulling against the journalists, like working the refs.
And a lot of journalists decided the way that we're going to build trust is we're not going to have any views whatsoever.We're just going to report the facts. And that's a very – you can see like in the abstract why that's an appealing idea.
But in practice, that didn't really do much to inspire trust in the public, right?Because simultaneously, there was this huge campaign on the right in particular to delegitimize the mainstream news media, right?
There's this entire right-wing noise machine that does nothing but wake up every day and say the media is lying to you.And that has had a very powerful cumulative effect.
And for a journalist to simply say, hey, well, I'm not one of those, I don't have any opinions at all, doesn't actually seem like it builds trust, right?
I also think there's, like, a difference between neutrality and independent journalism: people understanding and appreciating that the journalism they're reading was not produced in cahoots with some powerful person or, you know, that the journalists are not being paid off.
I think, you know, what I worry a lot about in the context of trust and media is not the perception of sort of bias. It is the perception that we are all kind of like just bought and paid for, right?
I think there's now this whole industry of kind of content creators and influencers that have started doing some things that look like journalism, but with lots of compromises along the way, where they are taking money for sponsored content or they are doing advertorials or something.
They are not independent voices in the sense that, like, I think we would hope that journalists would be.
And by the way, that reminds me, this Halloween, you can get 25% off cruise missiles at Raytheon.com by using the offer code HARDFORK.But anyways, Kevin, you were saying?
So look, I think there are a lot of things that are going to need to be done to restore public trust in journalism.But I think that, you know, one concern that people have is that journalism is operated at the whims of a few very powerful people.
And if you are Jeff Bezos, you are not helping the case that journalism is objective and independent by inserting your own preferences into the strategy of the newspaper that you own two weeks before the election. Yeah, that is a perfect point.
And you know, Kevin, I think it's ironic to me that before all this happened, the Post had been figuring out what I think is a much better way of building trust, which is rallying everyone around this tagline, "democracy dies in darkness."
And it's gotten a lot of criticism and mockery over the years.People think it's a bit overwrought, right?The Post started using this during the Trump years.But I think the larger question is, what are the values of this newspaper going to be?
Are the journalists there going to be allowed to draw conclusions from their own reporting?Are the opinion writers who are paid to have opinions going to be allowed to share those opinions?And all of that right now seems very unsettled.
Yeah, I mean, I'll just say the obvious, which is like, this is not going to hurt Jeff Bezos, right?Canceling a subscription to The Washington Post does not affect Jeff Bezos or his financial life or his view of the world, one iota.
What it does do is affect the lives of the great journalists who work at The Washington Post.You are effectively punishing them by canceling your subscription.
I imagine this will lead to some real financial hardship at The Post, at least in the short term. And I just think that is like an incredibly short-sighted protest mechanism in this case.
But I do think that it reflects the feeling that a lot of people have, which is that they don't want to support something that seems like it's being operated by, you know, the whims of a billionaire.Yeah.
So Casey, you said there were four things that stuck out to you about this Bezos thing. What are the other three?
Well, so that's the sort of question of, OK, let's just take Bezos at face value. Is this about trust? And I think you and I agree that, well, OK, you want to make it about trust, Jeff.
This was not the best way to go about improving the Post's standing. But there are three other ways, I think, that you could explain why Bezos would have made a decision like this. One is that he's just hedging his bets in the normal corporate way.
Most of the time, most business leaders, when there's an election, they stay out of it. They don't say anything positive about either candidate because they want to be prepared to work with whoever wins, right?
So I think that can explain a little part of it.
Then there's, like, a realpolitik aspect of this, which is that there's a very high chance that Trump will retaliate against anyone who attacks him, and there's a very low likelihood that Kamala Harris would do the same thing.
And that's important for somebody like Jeff Bezos, who has a lot of business before the government. And also, by the way, this fear of retaliation from Trump is not an abstract question.
In 2019, Amazon sued the Trump administration, blaming Trump's animosity toward Bezos for its loss of a $10 billion cloud computing contract.And we know that that came in part from what the Washington Post was publishing about Donald Trump, right?
So this is a $10 billion question, at least to Jeff Bezos. And so I think that that's really important. And cloud computing is not the only business that Bezos has before the administration.
Blue Origin, which is his rocket company, has a $3.4 billion contract with NASA for a lunar lander, which, by the way, we'd love to try. Blue Origin, get in touch. But there's one more reason.
And this is something that I don't see get talked about a lot, because I think it doesn't slot as neatly into maybe liberal arguments for Bezos just letting his editorial board do whatever they want.
It's not really going too far to say that the Biden administration has been trying to break up not just Amazon, but also Meta and Google. They also have an antitrust case against Apple. This administration has not been friendly to tech, right?
So if you are a Tim Cook, a Sundar Pichai, a Zuckerberg, a Bezos, I do understand why you're looking at the candidates and saying, look, I know...
You know, what I actually think they believe is: of course he's completely incompetent, but I know he will leave me alone, or I think I can get him to leave me alone, and I don't know how to get Kamala Harris to leave me alone.
That it will serve Kamala Harris's interests not to leave me alone, and Trump might just decide he doesn't care.
I mean, to me, it just says that if Donald Trump does win the election, the tech industry will behave very differently than it did in 2016. I mean, you remember in 2016, Trump wins the election.
There's sort of this, you know, panic among the titans of industry. There are these advisory councils that Trump assembles that people like Elon Musk join and then back out of because they can't, like, stomach the idea of working with Donald Trump.
I do not think we will see nearly as much resistance if Trump wins a second term in office.
And I think we have seen very clearly now that the CEOs of the tech industry are going to be very obsequious, are going to throw themselves at the mercy of the Trump administration, are basically going to do whatever they can to curry favor.
I don't think they will do nearly as much internal sort of hand-wringing and soul-searching about that.
Well, and so here's why I think that this deserves real scrutiny in the days ahead, Kevin, because something that I can guarantee you is going to happen is that in the event that Kamala Harris wins or appears to be winning on election night or the morning after, there's going to be a concerted effort on the Republican side to delegitimize the results of the election.
This is not really even a prediction. They're doing it now. They have been doing it all year. And we saw this in 2020, and it led to the violence of January 6th. There's a very real possibility that that could happen again.
And while I'm not somebody who believes that social networks are the root of all evil and the cause of all of our problems, they are part of how Americans understand their politics and the news.
And so it matters what people like Sundar Pichai and Mark Zuckerberg decide they are going to allow people to say.
It matters if they are going to append notes and warnings, you know, directing people to high-quality information about the actual election results.
It matters if they are going to turn a blind eye or, through neglect, fail to realize that militias or other violent groups are organizing on their platforms. So this is where I hope that
a huge degree of attention turns right in the aftermath of the election, because while, you know, what does a billionaire think about Donald Trump is a somewhat abstract question with, like, little impact in the days before the election, it actually could be hugely consequential in the days afterward.
I think that's right. I mean, what do you make of the kind of wave of CEOs sucking up to Trump in the days before the election?
As you mentioned, we've got Jeff Bezos, we've got Mark Zuckerberg, but also people like Tim Cook, who Donald Trump has been claiming has called him and said all these nice things.
Trump is not always totally faithful to the sort of exact wording of these conversations, but what do you make of this sort of last minute scramble to get in Trump's good graces?
I think that if you're the CEO of a big business, you spend a lot of time every month talking to people that you secretly think suck.
You know, I think that you are constantly being forced to have uncomfortable conversations, to try to negotiate for better positions with people who might openly despise you, right?
Like, there's a lot of just diplomacy and politics in being a CEO these days.
And one of the things that actually drew me to tech coverage in the first place was that starting in the 2010s, as these platforms started to grow bigger and bigger, all of a sudden, the CEO of a business started to look a lot more like a head of state, right?
They're governing hundreds of millions or even billions of people on their platforms.
And so, they have to navigate the world, not as just somebody who's trying to, like, return value to shareholders, but as someone who has to maintain good relations with the actual heads of state and governments all around the world.
And, you know, there are a lot of horrible world leaders, but in countries where you want your business to be. So, unfortunately, this is just part of that job. Yeah. You know, I want to say one more thing about billionaires and authoritarian leaders.
I was reading on Threads this week a thread that was posted by Jonathan V. Last, the editor of The Bulwark, and he was telling the story of this Russian billionaire named Mikhail Khodorkovsky. I'm not familiar.
So he became an oil billionaire after the Soviet Union dissolved, and among other things he did, he founded this civil society organization named Open Russia to promote democracy and human rights.
And two years after he founded it, Vladimir Putin, who had by then become the authoritarian leader of Russia, had him arrested and charged with fraud. His businesses fell apart. He was sentenced to nine years in prison.
He eventually got out, and today he lives in exile in London. And Last was writing that the whole point of this for Putin was to show the class of people with enough money and power to threaten him that he could destroy their lives.
They got the message quickly, and so the oligarch class became his courtiers rather than potential rivals.
And if you look at what's happened over the past few years, Kevin, a lot of those oligarchs and billionaires who spent the past decade or two sucking up to Vladimir Putin have died under mysterious circumstances.
This is a really dark thing, and I'm not trying to be glib about it, but I am just pointing out that we have a history.
in the sort of authoritarian or fascist movements of when that leader takes control and as they consolidate control, as they shatter democratic norms, as the rule of law gets broken, no one winds up being more vulnerable than these billionaires, the people who have the money and the power to credibly challenge someone like a Vladimir Putin.
And as I said in my column, I suspect that some of those oligarchs in Russia wish that they had resisted then instead of the state of affairs they find today, which is that they just have to submit to Vladimir Putin forever.
Yeah, I think that's a really good point.
It speaks to something that I've been feeling, which is just sort of the game theory of the decision to support Donald Trump or at least to refrain from criticizing Donald Trump or endorsing Kamala Harris if you are a billionaire or a tech leader.
I think what they are failing to include in their game theory calculation is just the scenario you described, where they suck up to Trump, he wins, they get some benefit out of it for a while, and then down the line it becomes a liability for them.
I think that is not something they are factoring into their calculus at all. And I think it's really important to note. I do too.
When we come back, we've got miles to go before I sleep. Specifically, Miles Brundage. I don't know. I felt like it was worth a shot. Yeah, that was a good V1. Yeah. Let's see what V2 has. Great. When we come back, we've got your reward, Miles. That's right.
We're rewarding you with an interview with Miles Brundage, formerly of OpenAI.
Okay.Let's keep going on the Miles thing.
I would walk a thousand miles if I could just see you tonight.
I'd walk a thousand miles if I could just see you tonight, but I'd only interview one Miles on this podcast. There are 26 miles in a marathon, but only one on the Hard Fork podcast. You'll hear him right after the break.
Well, Casey, it's another day ending in Y, so another high-ranking leader has left OpenAI. Kevin, this hasn't happened since yesterday. Yeah, it probably won't happen again until tomorrow.
But this is one that I actually thought we should talk about because this was not some sort of junior employee. This was Miles Brundage, who is the senior advisor on AGI readiness at OpenAI. He's been at the company for six years.
Miles announced last week that he was leaving the company to focus on independent AI policy research and advocacy.
So this is not just one of many senior leaders who have left OpenAI this year, but it is someone who's directly in charge of this thing called AGI readiness.
Yeah, which is an effort at the company to ensure that as it builds ever more powerful forms of artificial intelligence, it's able to release those safely, make sure that society captures the benefits, and that they mitigate any risk.
But as you know, Kevin, the real story of this team and its predecessors at OpenAI has been one of real chaos over the past couple of years, right?
This team is continually reorganized, it keeps getting different names, people keep quitting it, some of them have started their own AI companies.And so we've had a lot of questions about what is it actually like inside this company right now?
How is it changing?And how does that impact OpenAI's efforts to try to build safe superintelligence?
Yeah, and the announcement that Miles made that he was leaving OpenAI attracted a lot of attention.He's been a very vocal and prominent person, calling attention to some of the risks of these systems.
But he also, in the process of leaving, said that he does not believe that OpenAI or any other frontier AI lab is ready for AGI, and that he also does not believe that society at large is ready for AGI. And that caught a lot of people by surprise.
Yeah, when somebody who has been working on this project so hard for so long is saying that openly, it does raise questions about what they're seeing that the rest of us can't because we don't work there.
So we asked Miles to talk with us this week about his decision to leave OpenAI and why he is choosing to pursue an independent research path, but also about some of the things that he saw and experienced while at the company that made him feel like neither OpenAI nor the world at large is ready for what's coming.
Now, something to know about Miles is that while he's left OpenAI, OpenAI has offered to support his new efforts financially.
And so I think that does affect the nature of what he can tell us, that it may prevent him from telling us certain things that we wish he would.
So you may hear a little bit of hesitation in his voice during that interview, but I still think it is worth it to try to get as much information as we can about what he learned while he worked at that company for the last six years.
And I think the trend that we've observed over the past year or two is that as these systems get more powerful, as they get more agentic, as they start to be able to do things like use computers, the urgency coming out of the big AI companies, of people saying, you guys don't understand, this stuff is all happening much sooner than you think, has only increased.
And I think this is an issue that is often taken less seriously by people outside the industry than people who are inside working on this technology.
Yes, even though it could have some real effect in your life, including when you might be able to retire, which is something Kevin and I think about constantly. Do you think about that constantly? I'm so tired, Kevin.
And as always, we should disclose that the New York Times is suing OpenAI and Microsoft for copyright infringement. Can you guys wrap that up already? Listen, it's not my area. All right, let's bring in Miles Brundage.
Miles Brundage, welcome to Hard Fork.
Yeah, thanks for having me.
So I want to start with why you recently chose to leave OpenAI and pursue your own research into AGI readiness. But first, I want to have you define AGI readiness for us. What is it that you do?
Yeah, so basically I'm a researcher and I try to understand where is AI as a technology heading and what are the impacts going to be and are we ready for those impacts in terms of how do we address the safety issues, how do we address the economic issues and so forth.
Tell us about what you actually did on this AGI readiness team. You think some thoughts about what might happen if AI develops certain capabilities. You put it in some sort of document, and then what happens? Does it get handed over to the product team?
Did OpenAI make changes to the things that it was doing based on the kind of work that you were sharing with them?
Yeah, so first we were involved in the red teaming and kind of adversarial testing to make sure that these models are safe, and started working with external experts.
So we kind of built up this function of, like, how do we get experts in disinformation and bias and so forth, give them early access to these technologies, do a bunch of analysis, and then publish it to the world so people can be like, OK, this is what they did.
These are the known risks and so forth. And
Did a bunch of stuff on that, also published a lot of ideas about how to govern AI and, you know, the idea of, like, for example, computing power, the kind of actual physical AI chips as, like, a convenient point of leverage because, you know, they're countable, they're physical, and so forth.
So, you know, published a lot of ideas and, you know, did a lot of internal stuff.
So you had what I think to many people would seem like a dream job for someone who has an interest in AGI readiness, working at OpenAI, leading up their AGI readiness team and efforts and policy research. Why'd you leave?
Well, to be clear, I did have a great time at OpenAI, and I feel like my team and I accomplished a lot. But basically, the reasons I decided to leave were threefold.
So one is that I wasn't able to work on all the stuff that I wanted to, which was often kind of cross-cutting industry issues.So not just what do we do internally at OpenAI, but also what regulation should exist and so forth.
And it's easier to kind of think about that on the outside and, you know, not kind of be distracted by all the day-to-day internally. Second reason is I want to be independent and less biased.
So I didn't want to kind of have my views, you know, rightly or wrongly dismissed as, you know, this is just a corporate, you know, hype guy, you know, whatnot.
And then the third is that I felt like I was kind of reaching a point where I had done much of what I set out to do internally in terms of kind of saying, okay, like here, you know,
here are the trend lines, here are the challenges, these are the different pillars of what it means to be ready for AGI and so forth, whereas it felt like externally, outside the walls of OpenAI, a lot of people aren't even thinking about that, and I felt like there was a lot more to be said and done.
We've also heard a pattern of people leaving OpenAI in the past year with concerns about the company's commercial emphasis overtaking the safety mission.
Was that part of your decision here is that you feel like OpenAI specifically was not going about things in a safe way?
So, it wasn't so much about OpenAI specifically, although obviously, like, my perspective on what the gaps are in the industry is informed by my experiences at OpenAI, but I'm pretty confident that there's no other lab that is totally on top of things.
And, you know, if you read what people are saying, you know, they're not saying that they're on top of things.
So, that kind of crazy situation of, like, really fast progress plus, you know, the people who know the most saying that we're not ready is kind of my focus.
What does it mean to be ready for AGI, and how has your answer to that question maybe changed over the past few years, if it has?
Yeah, so I think that over the next few years there will likely be, you know, AI systems that are built that, you know, whether you call it AGI or not, like the trend line is clearly towards systems that
can basically do anything a person can do, you know, remotely on a computer, you know, can operate the mouse and keyboard. It can, you know, even look like a human in a video chat and so forth. I think people should be thinking about what that means.
Governments should be thinking about what that means in terms of, you know, sectors to tax and education to invest in. And, you know, what is even the point of education in a world in which many of the jobs are going away? There are going to be new ones, to be clear.
I'm not saying, you know, all jobs are going to disappear, but it'll be disruptive, and people need to think ahead about what this means for, you know, the purpose of education. Is it training people to be good citizens?
Is it about having people understand the world? If it's not preparing people for jobs, it's going to be really hard to, you know, skate to where that puck is going. Yeah.
As you look across the landscape of the big labs that are building foundation models, how would you grade the industry to date on safely building AI systems? How do you think it's doing?
I think it's hard to grade the industry in isolation from what incentives are being set by policymakers.
So I think there are a lot of companies that are kind of doing their best within the environment that they're in, which is kind of very cutthroat competition and relatively little guidance other than
you know, some voluntary commitments that they've made that are, you know, often somewhat vague, as well as some kind of regulation looming on the horizon in the EU. But the details are TBD.
So I think that, you know, there is good work happening, and there's, you know, more transparency than there was a couple of years ago in terms of, you know, here are the tests that we did and here are the evaluations of these risks.
But I would say it's also clear, if you kind of read what these companies are saying, that in some areas they're falling behind. They're not solving all the problems that they know about.
And I think that's a sign that this is kind of a, in some respects, out-of-control competitive situation.
So, like, what are they having trouble solving, for example?
Yeah, so, I mean, just as one example, hallucinations, you know, is the term for basically AI systems making stuff up. And this is not a comment on OpenAI.
It's the industry standard, I would say, to kind of have systems that often make stuff up. And then there's kind of a little fine print at the bottom that says, you know, Claude can make mistakes or ChatGPT can make mistakes.
And, you know, it's clearly a sign that there's not much that is forcing these companies to play it safe.
Right. And you mentioned there is also this cutthroat competition. What is that doing to the effort to build AI safely?
So it is definitely complicating it.So I think there are multiple dimensions along which companies are competing.So it's not just trying to get to market fastest.
It's also that people want to work at a company that ships and that kind of gets stuff out on the market. So there's kind of competing for talent. And then they do consider safety, but it's not always the focus.
Put another way, like, they keep pushing forward the frontier.And I feel like your perspective is, well, we're not exactly sure how to safely advance the frontier.So it feels like there's a disconnect there.
I agree. That's why I'm going to kind of focus on making the situation better.
Yeah. Do you think part of readiness for powerful AI involves slowing down AI progress? Like, are you one of the people who thinks maybe we should put a pause on the development of these systems?
Because part of readiness to me is like, well, yeah, it's going to be easier to get ready for something momentous if you have more time.
Yeah, so my position is that it would be premature to slam the brakes, but we should be installing some brakes, and right now there are no brakes.
So that means things like knowing where the compute is, knowing what the state of the art of the capabilities is, thinking through different policy proposals. Like, for example, I published a paper called Computing Power and the Governance of AI.
And we talked about one idea, which was have a compute reserve, kind of like the Federal Reserve, where you kind of like put more compute or less compute onto the market to kind of speed things up or slow things down.
I don't know if that's the right idea. Maybe it's taxes, and maybe we don't actually do any of these things, but we think them through and are ready to do them if things are going too fast.
But right now, I think the debate is pretty simplistic and it's kind of on one end is like go way faster and the other is, you know, slam the brakes.
And I think, you know, the truth is we don't really know their tradeoffs and we need to think through, you know, what is happening, what are the benefits of, you know, going faster, slower, etc.
I mean, one thing that we find when we talk about, whenever we talk about AGI or powerful AI on the show is that there's a certain segment of people who just think this is all science fiction, right?
Who are not convinced that AI is approaching human-level intelligence, who don't see the usefulness of ChatGPT or other tools in their life, who think this is basically marketing hype coming out of the big AI labs.
What do you think is the issue in understanding there?Or what are these people not seeing that you see?
It's hard for me to say. I am very interested in better understanding this gap between, you know, what people in industry think and what people outside industry think.
And obviously there are exceptions, but generally people who are in industry at the frontier labs think things are going very quickly.And, you know, some of them are very excited about that, some are very concerned, but there's no
dispute that, you know, significant progress has been made towards AGI and that there will be much more progress in the next year or two. I feel quite confident that it's not just hype.
Certainly some people are trying to hype, you know, their new startup and so forth.But, you know, there are lots of people formerly at these companies, such as myself, who have no incentive to hype things, and that is notable.
I'm curious, Miles, a lot of what you've spent your career and your time at OpenAI thinking about is how institutions can get ready for more powerful AI, like companies and governments.
But I'm also curious what you think individual people can do to get ready. Like, I was at a college last week talking with a bunch of college students, and they were all sort of asking the question of, like, what should I do? Like, what should I study?
What kind of career should I pursue? What skills should I be developing now that will sort of have value in the post-AGI world? What should individuals do?
So I'd say at least three things. One is just, you know, try out these systems. I think if you haven't used, you know, the latest version of, you know, Gemini or ChatGPT or Claude, then, you know, you're out of date.
And, you know, I say that as someone who is no longer, you know, in industry. I'm not just hyping this. Genuinely, this is going to transform the world, and you need to know what it can do and what it can't do.
The second is thinking about what it means for your career. And I think that means looking at the kind of trend lines. Like, for example, OpenAI published a blog post on a system called o1. And, you know, look at what it can do.
Look at some of these examples of where things are headed. You know, what does that mean for your career? You know, should you be thinking about, you know, kind of a more future-proof path?
And then the third, I would say, is like protecting yourself and your loved ones from things like deep fakes and so forth.
And again, like knowing what the state of the art of the technology is, is key to that so that you don't get duped by, you know, a phone call that sounds like your grandma and it's not.
So like knowing what's possible is important in order to, you know, not get duped. Yeah.
People have specific questions like, should I keep saving for retirement if we're going to have powerful AI in a couple years that renders all money irrelevant? Should I be planning for my kids to go to college?
Should I be lobbying my congressperson to do some particular thing about AI? What are sort of the practical action steps here if you are a person who
believes, as I think many people in the industry do, that this technology is going to keep getting better.What does that mean for our individual choices and planning?
Yeah, it's hard to say. I mean, even the people who know the most about AI have a lot of uncertainty about, you know, how quickly it's going to go. I think most people who know what they're talking about agree it will go pretty quickly.
And like, what does that mean for society is not something that can even necessarily be predicted.It's partly a policy and societal question.
Like, do we want to establish social norms around protecting certain jobs and, you know, saying only humans can do this? Or
will we get greater benefits from AI if we say, oh, actually, AI can write prescriptions and it can give legal advice and so forth? It doesn't need a law degree or whatever.
So, I think it's basically impossible to fully predict, but I do think people should save money.
And I don't think that like- So, you are saving for retirement personally.
I think that retirement will come for most people sooner than they think in the sense that, you know, it'll, I think- Finally, some good news on this podcast.
Yeah, assuming, you know, we're all here to be talking about retirement, you know, when, you know, AGI is here.
But, yeah, I think that it will be technologically possible, and politically possible if kind of policymakers do their job, to have this huge economic bounty and for people to basically retire early and, you know, have a high standard of living.
That being said, you know, you might have more kind of, you know, robot butlers if you, you know, have some savings from before than if you didn't.And I think there will always be things that people want to buy.
They'll always, you know, there'll be a demand for, like, well, I want to have food served to me by a human and, you know, not a robot. And so I do think... That's what Kevin wants. Yeah, so I think money will still be a thing, would be my guess.
Do you want to share your thoughts on how society would work in a world where AI can do everything of economic value? Like, how are people's needs taken care of? The government provides?
Or our robots do everything for us? I think...
what is probably going to happen, you know, assuming that we're all here and safe and healthy and so forth, is that there will be a much larger economy, that AI will, you know, each year over the next, you know, few years, GDP growth will significantly increase.
And, you know, if we do the right thing in terms of taxation and, you know, having a robust safety net, whether that looks like basic income or something else, then people will be able to share in that bounty.
And people will be able to choose to just kind of take that basic income and live pretty comfortable lives, the kinds of lives that a lot of people work very hard and work very long hours to live today.
So I don't know how it'll play out, but I think that'll be technologically possible.
Hmm. What else have you done personally in your life as a result of having internalized your beliefs about AI progress?
Like, how are you making choices differently in your day-to-day life than someone who maybe is not attuned to AI progress the way you are?
I mean, besides spending basically all day every day thinking about AI progress, not that much. I think I have a pretty normal life and have cats and watch TV and so forth. But yeah, in terms of work hours, that's the main thing.
It's so interesting to me because I always feel like this is the signal that I keep looking for from the AI labs, you know, that I'm not seeing is like when people start actually behaving differently at the labs based on their beliefs about AI.
Like when OpenAI gets rid of their 401k match.
Yes, or when people are behaving as if this is imminent rather than just saying it's imminent.
And I don't know what that would look like, but it just strikes me that even the people who are the best plugged into what this technology is capable of still have some cognitive dissonance when it comes to rearranging their own lives based on that.
Well, we do know that Sam Altman built a doomsday bunker. That gives us one signal, I guess. Yes.
Have you been to the doomsday bunker?
I have not been to the doomsday bunker. That's a shame. Have you been? No, would love to go. Would love to go.
Should we podcast from the doomsday bunker?
I bet it has great acoustics. Iconic, yeah. You can offer a thought on that if you want. Like, why is it that people at these labs, who understand this technology and are betting on it getting so much better so soon,
why is it that they wake up and go to work every day like the rest of us and don't seem to be making sort of radical choices in anticipation of those changes?
I think it's because AI is this super general-purpose technology that kind of pervades everything. And it's not like this robot marching down the street that's going to kill us, where you need to put sandbags in front of your door to protect yourself or whatever. It's something that's being deployed on the Internet, and it's all over the place.
So there's not really that much to do to kind of prepare for it other than to understand it, to talk about it, to kind of push for policymakers to set good guardrails.
I guess that leads to the sort of last question I want to ask you, which is just, like, if you could snap your fingers, and let's say two things.
If you could make one thing happen, pass a law, set up a new governing body or something, shut down an AI lab. If you could do one thing to increase our society's readiness for AI, what would it be?
And then the second question is, like, if you could convince everyone in the world of one idea about AI to make them better equipped to handle what's coming, what would it be?
So thing you would do, an idea that you would kind of incept into people's brains.
It's really hard to answer the first one. I think there's a lot of things that need to happen. So I'll kind of pick something that is a pretty broad umbrella.
So during the Biden administration, there was this executive order passed that started to gather information: where are the big data centers, where are the big models, are you doing a good job on safety, and so forth. But it's currently vulnerable.
It might be undone in the Trump administration, for example. And even without that, there are likely going to be legal challenges, because it invokes this thing called the Defense Production Act, and that's somewhat contested.
So if I had a magic wand, I would have Congress pass a version of that, which puts it on a really solid legal foundation, because then there's a law that allows the government to be on top of these things. And I would throw in some additional components, like long-term, solid funding for the AI Safety Institute.
So it's basically the AI brain of the U.S.
government, the one part where there's a critical mass of technical experts who know what's going on and are kind of testing these systems, putting all of that on a solid foundation rather than having it be at the whims of
you know, the next election, that would make me very happy. And there would be a lot to do after that, but that would be a good start.
And then the one sort of bumper sticker that you could convince everyone in the world of with a wave of your wand.
Yeah. Sorry, I forgot about that part. I think the main thing is that AI is a real thing and not science fiction.
Like, there are literally systems today that you can use for free to solve interesting reasoning puzzles and to give feedback on papers and to draft emails and to solve LSAT problems as well as a lot of law students. And that's crazy.
And that wasn't true five years ago. So just appreciating the weirdness of the situation and the fact that anyone who knows what they're talking about agrees that this progress has not stopped. And, you know, we should think about that. Yeah.
Well, I'll tell you what I'm doing to improve my AI readiness, guys. I'm learning martial arts. I think that in this world, I'm going to need to be able to defeat any laptop in hand-to-hand combat. And I like my odds. I like the chances.
So I'll keep you posted on how that's going.
Have you seen the Boston Dynamics robots? I don't think you would do well against them.
Look, I actually haven't. I've been studying where you can kick them to knock them over. And I'm not gonna say more on an unsecured microphone, but let's just say I know a few things.
All right, well, Miles, thank you so much for coming on. I don't necessarily feel more ready for AGI as a result of this conversation, but I at least know where the gaps in my readiness are. I was born ready for AGI.
You think I wanna work all the time?
Yeah, you've been ready for early retirement for about 10 years now.
Even when I was a little boy, I knew I wanted to retire early.
All right, thanks for coming on.
Thanks, Miles. Yeah, thanks so much for having me. When we come back, a segment you're probably going to want to listen to.
Well, Casey, we've talked about how rich people are preparing for the election, but how about people who are trying to get rich by betting on the election?
Yeah, the sort of the average Joes out there, Kevin, just average Internet users who are hoping to pass the time by making a little bit of sweet cash.
Yes, so today we're talking about betting markets for the election and in particular... You know, I actually predicted you were going to want to talk about this. Oh, I see what you did there.
So we have talked about prediction markets on this show before. I went to a big prediction markets conference. We talked about it last year.
These are these sort of markets where people can go on and bet on real world events, you know, celebrity romances, things of that nature, but also more serious things like presidential elections.
And I would say that one of the breakout stories of the 2024 U.S. presidential election has been the rise of these prediction markets for elections.
Yes, of course, everyone in this country and many people around the world are obsessed, rightly so, with what is going to happen in the U.S. presidential election.
And everyone is looking for someone who can just say to them definitively, here is what is going to happen. And while I don't know if these markets are doing that, they certainly are making some people feel that way.
Yes, so this has sort of become one way that political followers are attempting to handicap the race, is literally by looking at the bets that people are making on these platforms.
Elon Musk has been a big proponent of prediction markets, and in particular has been posting a lot of screenshots of Polymarket, which is one of the big prediction market websites where a lot of this gambling is going on.
Oh, I thought that was a place where you could meet a thruple.
No, there are different apps for that.
But I thought we should talk about this today because as people are getting ready for the election next week, as people are trying to figure out who is going to win, I would say this is a place where a lot of election anxiety has gone to sort of sit in the days and weeks leading up to the election.
And I think they raise a lot of really interesting questions about how faithfully they are tracking the polls and the places where they're diverging.
Yeah, and while these markets have a lot of big fans and backers and boosters, there's also been some reporting recently that these platforms deserve scrutiny and that they might not be providing quite as accurate an assessment of either candidate's odds of winning as maybe you might imagine from just looking at the raw numbers.
Yeah, so as usual, when we are talking on this show about some sort of complicated financial scheme involving cryptocurrency and betting, we are going to talk today with our friend and colleague David Yaffe-Bellany of The New York Times.
He and my colleague Erin Griffith recently wrote about these election betting platforms and the sort of rise in popularity they are seeing during this election season.
It's a rare chance to talk to David about a crypto story before the person involved goes to prison.Yes, let's bring him in.
David Yaffe-Bellany, welcome back to Hard Fork. Thanks so much for having me. So let's talk about betting markets and the election.
I would say this has been one of the biggest changes in this election cycle is that all of a sudden everyone is talking about these prediction markets.So can you just sort of help us understand how this happened?
Yeah, so it came out of nowhere, sort of.
I mean, suddenly, you know, at least from my perspective, you know, everybody that I've followed on Twitter, every crypto person that I was interested in was tweeting out the odds on this website called Polymarket.
And it seemed to show that, you know, Trump had this massive lead over Kamala Harris. And, you know, it's really kind of blown up and become sort of the way a lot of people on the Internet are understanding the state of the race.
Yeah. I did some research into betting markets and prediction markets last year when I was writing about Manifold, which we talked about on the show.
And I was surprised to learn that around the world, betting on elections is legal and happens in a lot of countries, including the UK, where it's been legal for many years. But this has not been traditionally legal in the U.S., correct?
Yeah. And I mean, its legality is still the subject of a lot of dispute in the U.S. A small elections betting platform actually recently won a lawsuit against a government agency, allowing it for the time being to offer these betting markets.
But that suit hasn't been totally resolved yet. And so there's still sort of a legal cloud over this type of betting.
And if you're new to the world of these prediction markets, why do people who follow them closely believe that they offer some sort of information that you can't find other places?
The argument is basically that markets aggregate a lot of information, that the people who are placing bets are doing so because they've digested the polls, they've thought about recent news stories, they've thought about historical context.
Basically, they've processed all that information and made a bet, and if you have enough people doing that and doing it in an intelligent way, then you end up with a prediction of the future that could be more reliable than any of those sort of individual metrics.
That's the theory.I mean, whether these sorts of prediction markets actually have a better chance of forecasting the election than a traditional poll, that's not clear yet, and it's something that academics are still studying.
Yeah. So let's talk about Polymarket, because you recently wrote a story about this one particular prediction markets company. Tell us about Polymarket.
So Polymarket is a betting platform founded by a 26-year-old NYU dropout. It was founded back in 2020, so it's been around for a few years. And it has become the most popular of these betting markets. It's not the original one. You know, you may have
heard of PredictIt, which was around for a long time, but it's become the sort of most used, most cited kind of elections betting market in this cycle. And that's for a couple of reasons.
One is that unlike other betting markets, it does not cap the amount of money that you can gamble. So if you want to bet $30 million that Trump will win the election, you can do that on Polymarket. On PredictIt, you're capped at $850.
And also I think it's just sort of marketed itself successfully. It has a kind of charismatic young founder who's gone around preaching the benefits of prediction markets.
And so it's really become a kind of internet phenomenon in this race, and it's been trumpeted by Elon Musk, by Trump himself, and a lot of people are paying attention to it.
And it's a crypto prediction market, right? I mean, the reason that you are covering it as the crypto reporter is that your bets are made in cryptocurrency, right?
Yeah, you place bets in USDC, which is a stablecoin.The identities of the people who are placing the bets are not exactly public, but you can see the sort of anonymous accounts that they're placing the bets from.
So there's a level of transparency that's lacking on some of these other platforms. It allows outside groups to kind of examine betting patterns, and it's something that has excited a lot of people in the research community.
And how much money has been bet on the U.S. presidential election so far on Polymarket?
The figure as of a week ago for the outcome of the electoral college contest was in the region of $100 million.It's probably gone up since then.
Do you have a sense of how much of that is coming from Americans betting on the American presidential election, or how much of it is foreigners?
So officially, Polymarket is banned in the U.S. It was hit with a fine by the Commodity Futures Trading Commission a couple of years ago, and as part of that settlement, it agreed not to offer these products to people based in the U.S.
Look, I spoke with a former employee of the company who said that it was an open secret in the office that people could use VPNs to place bets.
If you look at the topics of some of the markets that the website offers, it's stuff that Americans are very interested in. You go on their Discord, it's full of tips about how to evade the ban on American users.
It's clear that people in the United States are using this platform. How many of them, and how much money comes from them, that's not totally clear, though.
Your story says that Polymarket told you that they use industry-leading compliance measures to prevent American customers from betting, and I think it's funny that those measures don't include stopping Americans from talking about how to evade those measures in their own Discord.
Well, to be fair to Polymarket, they also told us that they have hired some sort of company to clean up their Discord and go in and delete some of those posts, and yet, a couple of days before the story came out, I was still able to find plenty of examples of people talking about it.
So right now, if I go on to polymarket.com, I see that their top election prediction market, who will win the presidential election, has Donald Trump ahead at 66% to Kamala Harris's 34%.
That is obviously a much wider spread than any of the sort of traditional polls that I've seen, at least.
Why is it such a big spread?
Yeah, this sort of divergence from the polls, which basically show that it's a dead heat between the two candidates, is one of the things that got me interested in Polymarket and which has sort of catapulted it to a new level of attention in this election cycle.
And, you know, the sort of dubious thing about what's going on is that Polymarket has now revealed that a series of large bets by a single person were at least partially responsible for swinging the odds so far in Trump's direction.
We don't know exactly who that person is, but we know that they bet somewhere in the region of $30 million on a Trump victory, and that they're French, and that they have a trading background.
This also sort of raised the specter of, could this be some sort of effort to manipulate the market? Polymarket has come out and said, no, this is someone who just really believes that Trump is going to win and is making a bet based on that.
So help me understand this, because the sort of thing that you will hear, and that I heard when I was reporting on prediction markets, is that this kind of market manipulation of prediction markets is hard to do, because essentially, if you do think that these odds on Polymarket are out of whack, couldn't you just make money betting on the other side?
Yeah, and that's part of what the company and its backers have said in response, that if these odds were really so skewed, then you would see people placing Harris bets and, you know, you would reach some kind of equilibrium.
And they might be right, or it just could be that, like, Polymarket doesn't have a large enough user base, or, like, a rational enough user base, or there are structural issues that are preventing that sort of equalizing effect from happening.
And we just don't know yet. I mean, if Trump wins the election in a huge landslide, I think a lot of people will point to Polymarket and say they were kind of ahead of the curve on this.
And it's also possible that this French guy has some bit of information about what's gonna happen that we don't have, that he knows about the October surprise about Kamala Harris that's coming down the pike.
That's sort of how people theorize prediction markets should work and what should make them good forecasting mechanisms.
Well, I would find that very disturbing. Ever since the Louisiana Purchase, I've said to France, you stay out of our politics. And so for them to get involved in this way, I find really, really disturbing.
Also, how many French people even have $30 million? You know, the list is basically like Catherine Deneuve. And then now I'm out of people. So that seems strange to me. But no, to the point you're making, Dave, it's like, let's think about it.
Okay, so the people that are betting on Polymarket are, let's face it, probably mostly Americans who have figured out a way to use a VPN to make a bet.
And I just believe that the group that sort of self-selects there, they actually just want to influence the election by creating the appearance that Donald Trump is running away with the game.
So, I find it very hard to believe that what we're seeing on Polymarket is a more accurate representation of the race than the actual polls that are being done across the country by, you know, experienced pollsters.
I guess one thing that's been striking to me about Polymarket's odds is that until recently, the odds were basically the same as the polls. I mean, they showed that it was an incredibly close race, with about a 50-50 chance on each side.
And so at that point, you sort of wonder, like, well, what new information are we actually getting from this that we don't get from traditional sources?
And then at the point where there's a big swing, we discover that that's because one guy in France bet a ton of money. And, you know, again, it raises some questions. And, you know, we'll see what happens on November 5th.
But I think, you know, the concerns Casey's raising are totally valid.
I think this is a real concern because we know that the perception of elections in the days and weeks leading up to those elections matters for things like donations and volunteer support and just general turnout.
But I also think there's another way in which it may matter, which is that it is now setting up the expectation that there is going to be this lopsided Trump victory, at least among the people who believe in prediction markets,
And so if the race is not called right away, or if it is close, or if Kamala Harris is deemed the victor, I think that does lay the groundwork for some nefarious attempts to sort of deny the results of the election.
Absolutely, again, and I've said this a few times over the past couple weeks, but I just think it's important to keep pointing out to people, there is an effort underway to delegitimize the results of the election, and one of the things that you're gonna see in the aftermath of the election, should Harris appear to be winning, is people pointing to Polly Market and any other thing they can that made it seem like Trump was gonna win, and say, she can't possibly have won
Look at what Polymarket was telling us, right?
Yeah, and this is something, you know, people are always looking for something to blame after their side loses an election. And I think Polymarket could be blamed by whichever side loses the election.
There are some other exciting betting markets on Polymarket, including, you know, whether Taylor Swift will get engaged this year. There was a market on who the HBO documentary would identify as the person behind Satoshi Nakamoto, you know.
We could probably spin up some hard fork markets.
We have Hard Fork markets on Manifold, but we always forget to resolve them. So now people are just mad at us.
Well, on that point, though, David, are there notable cases where a Polymarket prediction market told us something surprising?
So the example that Polymarket has sort of promoted as, you know, a time that the service proved its value was that as soon as the first presidential debate over the summer happened, the odds of Biden dropping out swung kind of heavily in that direction.
But of course, at the same time, every pundit in the universe was talking about how that was potentially on the cards.
I think it's true that maybe Polymarket sort of anticipated that that really was going to happen a bit earlier than the public discourse did, but it was still directionally kind of the same.
That is a case where I'm perfectly willing to believe that these prediction markets might help you learn something 5% faster than if you were not paying attention to it. But in most cases, that 5% advantage doesn't really matter. It's kind of boring.
Unless you're some sort of financial analyst, I don't think it matters that much in your life.
Part of what's also happening here is the sort of financialization of everything. You know, why can't we bet on every little thing that happens?
You know, let markets sort of dictate our behavior in even more spheres of life than they already do. And, you know, to people who are big supporters of crypto, that's a utopian vision of the future, but not everyone feels that way.
Yeah. So one other thing we should talk about with respect to Polymarket is this issue and this concern of wash trading.
Some crypto research firms have recently looked into this, and according to Fortune, which published an article on it this week, a significant amount of the trading volume on Polymarket's presidential betting market consists of potential wash trading.
So, David, what is wash trading and why is it happening on Polymarket?
So wash trading is when trades happen back and forth that are essentially artificial. It's sort of an effort to create the illusion of trading activity. And that can be helpful to a platform, because it makes the platform seem more popular.
It can be helpful to the people sort of facilitating the wash trading if they're going to get some sort of benefit from it. So in the crypto world, sometimes if you're a really active user on a platform, and that platform creates a token later on, you might get, like, preference as you try to accumulate a supply of that token. So, you know, we've reported that Polymarket has explored the possibility of launching a token. Others have reported that as well.
And so, you know, the company doesn't really make any money right now. And so it's looking for ways to generate revenue. That's a plausible path.
I was surprised by that. Why don't they just charge a small fee on these transactions?
I think partly it has to do with the legal gray area around this sort of thing.
At the point where you not only have traders who are using VPNs to get around the geo-blocking and trade from the US, but you're also profiting from that activity, then you're potentially in more legal hot water.
So the platform is not taking a cut of every bet made on the platform the way that a sportsbook at a casino would.
But they are benefiting from a lot of hype and discussion around the platform as potentially an important signal of who's going to win the election.
And they're about to raise another $50 million from VCs, and, you know, they're burning through that sort of startup capital the way that lots of tech companies do, figuring that eventually, at some point down the line, they'll figure out a way to make money.
Have they considered selling a t-shirt that just has polyamorous on it?
You know, Shane Copeland, the CEO, has made jokes about the fact that he should have started a sex podcast instead of a betting website. He has the URL. Maybe you guys should get together with him and, you know, figure out. We would love to.
Shane, call us. What do we know about the politics of Polymarket itself?
The politics, if you will.
Because, you know, they have taken money from Peter Thiel's Founders Fund, which was an investor in theirs. People on the left have used that to say that maybe they are sort of secretly in cahoots with the Republicans.
Did you find anything in your reporting about the politics of Polymarket, the company?
So we reported, like everybody else has, that, you know, one of Thiel's firms is backing Polymarket.
Shane did not speak with us for the story, but after it came out, he posted a long tweet about the story in which he said, you know, there's nothing to this, that the partner at Founders Fund who backed us is, you know, just a different person than Thiel, and that this is, you know, a ridiculous claim about the platform.
He was photographed, though, at the RNC socializing with Donald Trump Jr., and that photo, I think, got a lot of traction for a while, but he's since also been photographed with his arm over Tim Walz's shoulder at some fundraiser.
To me, it looks more like a guy who is just really good at networking, enjoys being around famous and powerful people, and is trying to kind of use their star power to promote his venture, more than it does some sort of right-wing conspiracy.
Well, I'll tell you what I told Kevin, which is that if Kamala Harris wins, this will be the last segment we're doing on prediction markets on the show, because we've done two of them now, and I'm still not convinced that they're telling us anything we couldn't already find out from other means.
Well, I think they do reflect this sincere belief among at least a lot of tech people that you can get better information by making a market around it, right?
That people will have better research behind their views if they have money behind those views.
And if you want to look at the way things are going in the world and find out what's happening in the world, you shouldn't look at what people say, you should look at what they bet on.
I think that is true in the case of the stock market where you have some of the smartest people in the world and vast sums of money who are trying to absolutely maximize their returns.
I think it is less true on these rinky-dink platforms with a group of self-selected nerds who are betting relatively small sums of money on relatively few events.That I don't actually think you can get that much information from.
No, I think that's fair. And I also think, like, we know that markets can be distorted and manipulated and also just, like, full of bad information, right? People who bet on sports are not always betting on who they think is going to win the game, right?
Maybe they're betting on their favorite team or their favorite...
Are you saying you've met sports gamblers who aren't totally rational?
Yes, people are not rational actors in these markets. People are not even rational actors in the stock market, right? Like, look at GameStop.
That was a case in which, you know, a bunch of people decided to sort of behave irrationally together and push the price of the stock up.So I just look around and I don't see a lot of markets that look extremely rational to me.
No, but what I do see, Kevin, is Americans becoming degenerate gamblers. And I do think that that is a piece of this story as well. I read a piece in The Lancet this week.
It is a journal of public health, and it had a paper projecting that consumers globally will lose around $700 billion to gambling by 2028. So I think prediction markets will probably be a pretty small part of that relative to sports gambling, for example, but I do think we should take note of just how many different aspects of American life gambling is creeping into, because as always with gambling, it is usually the house that wins and not the consumer.
Well, in this case, it sounds like the house is not even really making any money.
Yeah, let's just say I've seen smarter houses out there. Well, David, I predict your time here has come to an end with 100% certainty.
Yeah, I'll take the other side of that.I want to have him on for another half hour.
No, I didn't mean he could never come back. I just meant for this segment. Oh, OK. Yeah.
Well, I'm going to place a bet that I'm staying on for a couple more seconds.I think I win.
Yay! All right, David Yaffe-Bellany, thank you, as always, for coming on.
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. We're fact-checked by Ena Alvarado. Today's show is engineered by Chris Wood. Original music by Marion Lozano, Diane Wong, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Schumann, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda.
You can email us, as always, at hardfork@nytimes.com.