
Prediction Markets and Beyond AI transcript and summary - episode of podcast a16z Podcast


Episode: Prediction Markets and Beyond


Author: Andreessen Horowitz
Duration: 01:49:49

Episode Shownotes

This episode was originally published on our sister podcast, web3 with a16z. If you're excited about the next generation of the internet, check out the show: https://link.chtbl.com/hrr_h-XC

We've heard a lot about the premise and the promise of prediction markets for a long time, but they finally hit the main stage with the most recent election. So what worked (and didn't) this time? Are they really better than pollsters? Is polling dead? In this conversation, we tease apart the hype from the reality of prediction markets, from the recent election to market foundations... going more deeply into the how, why, and where these markets work. We also discuss the design challenges and opportunities (including implications for builders throughout). And we also cover other information aggregation mechanisms -- from peer prediction to others -- given that prediction markets are part of a broader category of information-elicitation and information-aggregation mechanisms. Where do domain experts, superforecasters, pollsters, and journalists come in (and out)? Where do (and don't) blockchain and crypto technologies come in -- and what specific features (decentralization, transparency, real-time, open source, etc.) matter most, and in what contexts? Finally, we discuss applications for prediction and decision markets -- from things we could do right away, to the near future, to sci-fi -- touching on trends like futarchy, AI entering the market, DeSci, and more.

Our special expert guests are Alex Tabarrok, professor of economics at George Mason University and Chair in Economics at the Mercatus Center; and Scott Duke Kominers, research partner at a16z crypto and professor at Harvard Business School -- both in conversation with Sonal Chokshi.

As a reminder: None of the following should be taken as business, investment, legal, or tax advice; please see a16z.com/disclosures for more important information.

Resources (from links to research mentioned to more on the topics discussed):
- The Use of Knowledge in Society by Friedrich Hayek (American Economic Review, 1945)
- Everything is priced in by rsd99 (r/wallstreetbets, 2019)
- Idea Futures (aka prediction markets, information markets) by Robin Hanson (1996)
- Auctions: The Social Construction of Value by Charles Smith
- Social value of public information by Stephen Morris and Hyun Song Shin (American Economic Review, December 2002)
- Using prediction markets to estimate the reproducibility of scientific research by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson (Proceedings of the National Academy of Sciences, November 2015)
- A solution to the single-question crowd wisdom problem by Dražen Prelec, Sebastian Seung, and John McCoy (Nature, January 2017)
- Targeting high ability entrepreneurs using community information: Mechanism design in the field by Reshmaan Hussam, Natalia Rigol, and Benjamin Roth (American Economic Review, March 2022)
- Information aggregation mechanisms: concept, design, and implementation for a sales forecasting problem by Charles Plott and Kay-Yut Chen, Hewlett Packard Laboratories (March 2002)
- If I had a million [on deciding to dump the CEO or not] by Robin Hanson (2008)
- Futarchy: Vote values, but bet beliefs by Robin Hanson (2013)
- From prediction markets to info finance by Vitalik Buterin (November 2024)
- Composability is innovation by Linda Xie (June 2021)
- Composability is to software as compounding interest is to finance by Chris Dixon (October 2021)
- Resources & research on DAOs (a16z crypto)

Stay Updated:
- Let us know what you think: https://ratethispodcast.com/a16z
- Find a16z on Twitter: https://twitter.com/a16z
- Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
- Subscribe on your favorite podcast app: https://a16z.simplecast.com/
- Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Full Transcript

00:00:00 Speaker_01
These markets are actually very good at producing predictions which tend to be more accurate than polls.

00:00:08 Speaker_02
There is a sort of subtle distinction between wisdom of a random crowd and wisdom of an informed crowd.

00:00:16 Speaker_01
Instead of having politicians decide what policies to have, politicians and voters would just decide on what our metric for success is going to be.

00:00:27 Speaker_02
As you're deciding, like, which thing do we build first? Or, like, as we're progressively decentralizing, like, what do we prioritize? You actually have to understand the market context you're working in.

00:00:36 Speaker_00
We, as humans, love making predictions. And to improve our predictive power, we've built up mechanisms that leverage the wisdom of the masses, whether it be political polls, financial markets, or even Twitter/X's community notes.

00:00:50 Speaker_00
One such mechanism that had its moment this year was prediction markets, with search queries for platforms like Polymarket or Kalshi going hyperbolic ahead of the election.

00:01:00 Speaker_00
Now today's episode is all about prediction markets, including where they're useful and where they're limited, but also how they coexist with other mechanisms like the polls.

00:01:10 Speaker_00
So was the attention they received an election-year phenomenon or a sign of something to come? And what's the difference between gambling and speculation anyway? And what implications does that question have for their future in the United States?

00:01:23 Speaker_00
Given that this episode was originally published on our sister podcast, web3 with a16z, we also explore where web3 and decentralized networks play a role here.

00:01:32 Speaker_00
Finally, if you're excited about the next generation of the internet, be sure to check out web3 with a16z wherever you get your podcasts. All right, on to the episode.

00:01:47 Speaker_03
Welcome to web3 with a16z, a show from a16z crypto about building the next generation of the internet. I'm Sonal Chokshi, and today's episode is all about prediction markets and beyond.

00:01:59 Speaker_03
Our special guests are Alex Tabarrok, Professor of Economics at George Mason University and Chair in Economics at the Mercatus Center, and Scott Kominers, Research Partner at a16z crypto and Professor at Harvard Business School.

00:02:12 Speaker_03
Prediction markets hit the main stage in the recent election, which we cover briefly, especially to tease apart the hype from the reality there, since people have talked about the promise and premise of these for a very long time.

00:02:23 Speaker_03
But we also go more deeply into the how, why and where these markets work, challenges and opportunities, including implications for designers throughout. We also briefly cover other information

00:02:34 Speaker_03
aggregation mechanisms and discuss applications for all these markets, including touching on trends like futarchy, AI entering the market, DeSci, and more. About halfway through, we discuss where blockchain and crypto technologies do and don't come in.

00:02:50 Speaker_03
And as a reminder, none of the following should be taken as business, investment, legal, or tax advice. Please see a16z.com slash disclosures for more important information. Be sure to also check out the show notes for this episode.

00:03:01 Speaker_03
We have a really rich set of links, including all the research cited in this conversation. But first we begin with a quick overview of what prediction markets are. The first voice you'll hear is Alex's followed by Scott's.

00:03:15 Speaker_01
So I think a prediction market is a very simple idea. The bottom line is that we're interested in forecasting. Lots of people are interested in forecasting things and prediction markets

00:03:27 Speaker_01
are some of the best methods of forecasting which we have yet created. They tend to be better than complicated statistical models. They tend to be better than polls, or at least as good. And there's one reason for that. Suppose that a model was better.

00:03:45 Speaker_01
Okay, suppose I have the Nate Silver statistical model of predicting elections, and it's better than the prediction market. Let's suppose that were true. Well, if that were true, I could make money.

00:03:58 Speaker_04
Yeah.

00:03:59 Speaker_01
I could use Nate's model to go and make bets. And in making bets on the prediction market, I push the prediction market closer to the truth. So almost by definition, the prediction markets have to be at least as good.

00:04:13 Speaker_01
And typically they're better than other methods of forecasting.

00:04:17 Speaker_02
And actually, you know, sort of that is an illustration of why we think of these things as information aggregation mechanisms, right? What are they really doing? They're aggregating information from all of the people in the market, right?

00:04:28 Speaker_02
And so if many different people are out there doing their own private forecasts and like calibrating their own models, there's Nate Silver and like Jonathan Gold, Melanie Bronze, you know, we'll make up all of our variations.

00:04:38 Speaker_02
You know, they all have their own models. They all have their own estimates, which they trust with some degree of confidence. They come together, right? They're all buying or selling the prediction asset based on what their model leads them to believe.

00:04:50 Speaker_02
And so as a result, the asset is sort of like aggregating all of this information. It's price discovery, just like we think about in financial markets and commodities markets. Everybody's demand together discovers the price at which the market clears.

00:05:03 Speaker_02
And here, because what the value of the asset is depends on probability, right? It's like its value is sort of like a function of the probability of the outcome of the event. The price aggregates people's estimates of that probability.

00:05:15 Speaker_01
Yeah, exactly. I think it's useful that these are markets, and actually all markets do this. And we learned this going back to Hayek's 1945 article, "The Use of Knowledge in Society."

00:05:29 Speaker_01
This is a Nobel prize winning paper, which doesn't have a single equation in it. So anybody can go and read this paper and they should, it's a fantastic paper.

00:05:38 Speaker_03
I'll link it in the show notes.

00:05:39 Speaker_01
Awesome. And so what Hayek said is, you know, prior to Hayek, people thought what prices do, you know, is coordinate: they make demand equal supply, and production equal consumption. Hayek said, no, no, no, you're thinking about the price system all wrong.

00:05:52 Speaker_01
The price system is really about aggregating and transmitting information. And he said, look, there's all this information sort of out there in the world, and it's in heads, right?

00:06:02 Speaker_01
It's in people's heads, like what people prefer, their preferences, but also people know things. They know what the substitutes are, what the complements of everything are, how to increase supply, what the demands and the supplies are.

00:06:14 Speaker_01
It's all in heads. And for a good economy, you want to use that information, which is buried in people's heads. But how do you get it out? Because it's dispersed. It's dispersed in millions of people's heads.

00:06:25 Speaker_01
The information, sometimes it's fleeting information. It's sometimes tacit. It's hard to communicate to a central planner. So what Hayek said is that markets do this because markets give people an incentive

00:06:41 Speaker_01
through their buying and selling to reveal this kind of information, to pull this dispersed information from millions of people. And people who are buying, they're pushing the price up. People who are selling, they're pushing the price down.

00:06:54 Speaker_01
Suppliers, consumers, they're all in the same market. And so all of this dispersed information comes to be embedded in the prices. And kind of remarkably, the price can sort of know more than any person in the market.

00:07:09 Speaker_03
I just want to pause on that for a quick second, because you guys might take it for granted, but that's a very profound insight.

00:07:15 Speaker_03
Like what you're basically saying is that it's really surfacing what people know collectively at scale and getting at the truth in that way. I mean, that's a very profound thing.

00:07:27 Speaker_01
Exactly. What economists have found is that these markets are actually very good at producing predictions which tend to be more accurate than polls.

00:07:38 Speaker_01
So if you go to a prediction market, for example, the recent election with Trump and Harris, you can buy an asset which pays off a dollar if Trump wins and nothing if Trump doesn't win.

00:07:50 Speaker_01
Now you think about how much are you willing to pay for that asset? Well, if you think that Trump has a 70% chance of winning and you go to the market and you see that the price of that asset is 55 cents, you're going to want to buy.

00:08:04 Speaker_01
because you're buying something you think is worth 70 cents, 70% chance that Trump wins, and you get a dollar, and you can buy it for 55 cents, so you expect to make 15 cents.

00:08:14 Speaker_01
And by doing that, you push the price closer to 70 cents, so you can interpret the price as a prediction. And in the most recent election, the prediction markets were tending to predict a Trump win even when the polls were closer to 50-50.
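
To make that arithmetic concrete, here is a minimal sketch in Python. The numbers are the hypothetical ones from the conversation, and the function name is ours for illustration, not any platform's API.

```python
# Minimal sketch of the pricing logic described above. A binary prediction
# share pays $1 if the event happens and $0 otherwise, so its expected
# value equals your probability estimate.

def expected_profit_per_share(belief: float, price: float) -> float:
    """Expected profit from buying one $1-payout share at `price`,
    given you believe the event happens with probability `belief`."""
    return belief * 1.0 - price

# Believe the candidate wins with probability 0.70; market price is $0.55:
print(expected_profit_per_share(0.70, 0.55))  # ~0.15, so you buy --
# and buying pushes the price up toward your 70-cent estimate.
```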

00:08:30 Speaker_03
Actually, the Polymarket CEO said a lot of people trust the market, not the polls, at least when it came to the elections. Like, do you guys agree with that or no?

00:08:37 Speaker_03
I'm just curious, because if that's a place where we can quickly tease some hype versus signal.

00:08:42 Speaker_01
I don't think polling is dead. Polling is one of the inputs into a prediction market, and it's pretty useful. I do think people need to be more sophisticated about how they poll and who they poll.

00:08:53 Speaker_01
It's pretty clear that a lot of people now obviously are not answering their telephones and a lot of people don't want to talk to the pollsters.

00:09:00 Speaker_01
So there need to be some new sophisticated techniques, but there have to be ways of drawing information from asking people questions. That's not going away.

00:09:09 Speaker_03
I saw on Twitter that like landline poll response rates in the olden days were like above 60 percent.

00:09:14 Speaker_03
But today, the response rates are like five percent, which means you're getting like a very bad sample bias in terms of who's willing to answer a call on a poll. Like, I'll hang up right away if someone tries polling me.

00:09:26 Speaker_02
Yeah. And in particular, it's not that like prediction markets will outmode polls. It's actually they're going to lead to revolutions in technology for doing this well.

00:09:34 Speaker_02
If anything, like the availability of prediction markets increases the incentive to conduct polls. Right.

00:09:39 Speaker_02
Like, you know, as we literally saw with the whale, you know, they went out and ran their own poll precisely because they thought they could use it usefully in this market.

00:09:47 Speaker_03
That's fantastic. I have to ask though, so this may seem obvious to you, but the key point is that you're putting a price on it where people are putting skin in the game essentially with their opinion or prediction, so to speak.

00:09:58 Speaker_03
And that seems very interesting and useful. How is that different from betting? I mean, can prediction markets be run with incredibly tiny amounts that don't have big value and still be valid?

00:10:08 Speaker_03
Like how does the pricing part of this all work in terms of the incentive design?

00:10:12 Speaker_02
Well, so at some fundamental level, the pricing works exactly as Alex described. If you think the probability that Trump is gonna win is 70%, you see the price at 55 cents. If you believe your prediction, you now have an incentive to show up and buy.

00:10:26 Speaker_02
And if enough people have beliefs of different types and they all come into the market and they all purchase, eventually the price sort of converges according to the convex combination of all of their different predictions.
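
As a stylized illustration of that convergence (a toy model, not how any real venue clears trades): if each trader's pull on the clearing price is proportional to the capital behind their belief, the price lands at a convex combination of those beliefs. The traders and dollar amounts below are invented.

```python
# Toy model: price as a capital-weighted (convex) combination of beliefs.

beliefs = [0.70, 0.55, 0.40]     # each trader's probability estimate
capital = [100.0, 300.0, 100.0]  # stake behind each estimate

weights = [c / sum(capital) for c in capital]  # nonnegative, sum to 1
price = sum(w * b for w, b in zip(weights, beliefs))
print(price)  # ~0.55 -- always between the lowest and highest belief
```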

00:10:38 Speaker_02
But when you ask about the size of the market or the size of a betting market, it doesn't matter once people are there and they've already formed their opinions, but it might affect their incentive to gather information, for example.

00:10:50 Speaker_02
If the size of the market is capped at $1,000 and you think the probability is 70%, you're not going to invest $10,000 to get a more precise estimate, right?

00:10:59 Speaker_02
If the maximum possible upside for you is on the order of $1,000, you can't possibly invest more than that to learn new information that will change your estimates and thus potentially inform the information in the market even more.
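
A quick back-of-the-envelope version of that cap argument, with illustrative numbers only: the position cap bounds the most a sharper estimate can ever earn, so it also bounds what it is rational to spend on research.

```python
# With a position cap, expected profit is bounded, and so is the rational
# research budget. Numbers follow the hypothetical in the discussion.

def max_expected_gain(cap: float, belief: float, price: float) -> float:
    """Best-case expected profit when total spending is capped at `cap`."""
    shares = cap / price                     # most shares you can afford
    return shares * max(belief - price, 0.0)

print(max_expected_gain(1_000, 0.70, 0.55))  # ~272.73
# A $10,000 poll to refine `belief` can never pay for itself here:
# the cap holds the upside far below the research cost.
```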

00:11:13 Speaker_02
And it's funny, we've been talking a lot in the wake of this most recent presidential election about prediction markets as having been very strong predictors in the trend direction of what actually happened.

00:11:24 Speaker_02
But of course, if you look at, say, the 2016 election, that didn't happen at all, right? You know, the prediction markets totally didn't call Trump, and they also didn't call Brexit, which happened sort of, like, I think the preceding summer or so.

00:11:34 Speaker_02
Oh, yeah, yeah. And, like, there people were asking, like, well, what happened? Like, how did these miss this? And at the time, I wrote an opinion column

00:11:43 Speaker_02
where I argued that this information thing was a key part of the story, that like, at least at the time, prediction markets were relatively narrow, both in terms of the total amount that could be, you know, sort of the total upside, the total amount that was enclosed in the market, and in terms of who participated in them.

00:11:58 Speaker_02
It was sort of concentrated in a small number of locations. And those participants, because the upside was not necessarily that high, didn't necessarily have an incentive to go out and find out what's going on in other parts of the country.

00:12:11 Speaker_02
And so you end up aggregating information just from the people who were already there, which might not be a good estimate in that circumstance.

00:12:17 Speaker_01
I want to push back a little bit on what Scott said.

00:12:20 Speaker_02
Ooh, yeah, that's what I want. Cards on the table. I am much more of a like prediction market bear than Alex is. We're both really excited about them, but there's a stack rank in our estimate. Oh, funny.

00:12:32 Speaker_01
Well, so I agree. You want a thick market, of course, and it helps to have people willing to bet a lot of money because then they're willing to invest a lot in making their predictions accurate.

00:12:42 Speaker_01
The part which I want to push back on, however, is this idea that the market did not predict well if it predicted a 40% chance of Trump winning. And Trump actually won, right? Because this is what people always do. It's frustrating, right?

00:12:56 Speaker_01
Because you can go back and look at individual examples and say, well, did the market predict well? But that's just like you flip a coin and it says 50% chance of coming up heads and it came up tails.

00:13:09 Speaker_01
You say, oh, well, your probability theory isn't very good, is it? You said it would have a 50% chance, and it came up tails, 100%. So what's the real test?

00:13:20 Speaker_01
Well, the real test is you need a large sample of predictions, which could be predictions from political markets, but prediction markets predict other things as well. You need a large sample, and then you have to say,

00:13:33 Speaker_01
In the sample of cases in which the market predicted a 40% chance of a win for the Republican, or whatever, how many times did the Republican actually win? And what you find is pretty close:

00:13:50 Speaker_01
in the cases the market predicted at 40%, about 40% of the time Republicans actually won. So in other words, there's sort of a linear relationship: when the market predicts a high chance of winning, that happens a lot.

00:14:01 Speaker_01
When markets predict something with a low chance of winning, that doesn't happen very often. But of course, sometimes it does happen, right? Something which happens with only a 5% probability ought to happen once every 20 times.

00:14:14 Speaker_01
And that's exactly what you see with these prediction markets. They tend to be more accurate than other methods of forecasting, and they tend to be not systematically biased.

00:14:25 Speaker_01
We can talk about there's some odd biases which are possible, but they tend not to be systematically biased. So it's not the case that something which is predicted 40% of the time actually only happens 20% of the time.
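
Here is what that "real test" looks like as a quick calibration check, sketched in Python on made-up data:

```python
# Calibration check: bucket predictions by stated probability and compare
# with realized frequencies. The history below is synthetic.

from collections import defaultdict

history = [  # (stated probability, did it happen?)
    (0.4, True), (0.4, False), (0.4, False), (0.4, True), (0.4, False),
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
]

buckets = defaultdict(list)
for prob, happened in history:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    freq = sum(outcomes) / len(outcomes)
    print(f"predicted {prob:.0%} -> happened {freq:.0%} ({len(outcomes)} cases)")
# A well-calibrated market shows realized frequency near the stated
# probability in every bucket, up to sampling noise.
```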

00:14:36 Speaker_01
The markets systematically get it right: 40% of the time it's predicted, 40% of the time it happens. Let me give a simple non-market example, which I think illustrates this. It's kind of a famous one; people have heard of the wisdom of the crowds, right?

00:14:48 Speaker_01
And so you ask people, how much does this cow weigh? And people are not that good at figuring out how much a cow weighs. Some are too high, some are too low. But if you take the median prediction of how much the cow weighs...

00:15:04 Speaker_01
The median prediction tends to be very, very accurate. So in a sense, the crowd knows more than any individual predictor knows. And in the same way, markets do the same thing. They embed in the price more information than any single individual knows.
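
A tiny simulation of the cow-weighing story (all numbers invented) shows why the median is robust where the mean gets dragged by wild guesses:

```python
# Wisdom-of-crowds sketch: noisy guesses plus a few wild outliers.
import random
import statistics

random.seed(0)
true_weight = 1200  # made-up "true" cow weight in pounds

guesses = [random.gauss(true_weight, 150) for _ in range(500)]
guesses += [random.uniform(3000, 5000) for _ in range(20)]  # wild guesses

print(round(statistics.median(guesses)))  # lands close to 1200
print(round(statistics.mean(guesses)))    # pulled upward by the outliers
```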

00:15:21 Speaker_03
Right. And just to be super precise, like you're specifically saying the median, not the mean, not the mode. It has to be like the exact middle point, literally, not like averaging out from the extremes.

00:15:31 Speaker_02
In that particular example, yes. In that particular example, but it varies by context. Got it. Exactly. Let me build on that and illustrate it again through a simple example, but in the language of the price system.

00:15:43 Speaker_02
So when you're going around and polling people about the weight of a cow, you have to go around and ask them, and they don't necessarily have a strong incentive to figure it out, but suppose you have a very large amount of money to invest in commodities or commodities futures or something of the sort.

00:16:00 Speaker_02
And you have a predictive model that tells you what you think is going to happen to these markets. You have reason to believe that there's going to be a big shortage of oil or surplus of orange juice or something of the sort.

00:16:12 Speaker_02
You can buy and sell in the market in a way that reflects that estimate that you have, and it pushes the price accordingly, right? So if you think there's going to be a big shortage of oil, you're going to stockpile oil today.

00:16:24 Speaker_02
You're going to buy a lot of it today, and that's going to push up the price. Because suddenly there's more demand than there was before.

00:16:32 Speaker_02
And so when you see the price of oil going up, it's like it's a signal that somehow people think oil is more valuable right now than it was five minutes ago. By the way, of course, these are all hypotheticals. None of this is investment advice.

00:16:44 Speaker_02
People should not go out and buy a bunch of oil or oil futures or whatever. Conceptually, that's how the price reflects the information.

00:16:53 Speaker_02
And the more strongly you believe that there's going to be a shortage, the more you're going to be willing to pay to buy right now. Right.

00:16:59 Speaker_02
And thus, the sharper the price movement, even sort of the stronger the inference about the information that that buyer brought to the market.

00:17:06 Speaker_01
Yeah. If you want to know whether there's going to be a war in the Middle East, keep an eye on the price of oil.

00:17:11 Speaker_03
I remember that as a child in the 80s.

00:17:13 Speaker_02
And it's still true today. That's exactly right. And know that the oil market is a little bit of a prediction market, too. Right.

00:17:18 Speaker_02
The oil market is revealing information about people's beliefs, about things that are correlated with the availability of oil. Yeah. Like whether there's a war in the Middle East.

00:17:27 Speaker_03
Well, that actually goes perfectly to the question I was about to ask, because I still want to dig a little bit more into the economic and market foundations, and then we can go more into the challenges of prediction markets and where they're going.

00:17:36 Speaker_03
But on that very note of oil, actually a great example, Scott, the question I wanted to ask you both is where does this break? Because in the oil example, one could argue, well, it's not like a quote, pure market.

00:17:48 Speaker_03
You have cartels, you have other forces at play. Now you might be saying it doesn't matter because all that matters is people's opinions. which is what the prediction market is putting as inputs into the market, or doesn't it matter?

00:18:00 Speaker_03
I guess my question is really getting at what are the distortions that can happen here? Like there are things that can manipulate it or other distortions where people's behavior changes so significantly that they untether the market from reality.

00:18:12 Speaker_01
Yeah, sure. I mean, one of the things about these markets, you know, oil predicting possibility of war in the Middle East, of course, they're not designed to do that. Right. In those cases, the information is sort of a leakage.

00:18:25 Speaker_01
It's an unintended consequence of market behavior, which is very useful. You know, it's very useful for economists to be able to pull information out of these market prices.

00:18:35 Speaker_01
Now, prediction markets, the first ones of which go back to the Iowa political prediction markets created in 1988, were different. It was there, almost for the first time, that a market was created in order to produce information. Right.

00:18:54 Speaker_01
So there's a much more direct connection between the output of the market, the prices on the market and the predictions, because that's what they were designed to do.

00:19:04 Speaker_01
Now, of course, you're totally correct that if you want to get a market to predict the future, you're going to want, as Scott said earlier, to have lots of people because you're going to take advantage of all the dispersed knowledge.

00:19:17 Speaker_01
Because, you know, there are people in Pennsylvania who have extra knowledge, you know, about what their neighbors are talking about.

00:19:25 Speaker_01
You know, that can give you a little bit of insight, right, that you might not have if you're living in New York or San Francisco. So you want lots of people to participate.

00:19:34 Speaker_01
And you want the markets to be quite thick, because you want people to want to invest some time and energy in the prediction, to maybe apply some models, things like that.

00:19:45 Speaker_01
And of course you want it to be free and open and you have to be a little bit worried about manipulation.

00:19:50 Speaker_02
Yeah, there are some like funny edge cases that we've seen crop up occasionally.

00:19:55 Speaker_02
In fact, there were even allegations that maybe that was going on here, where if there's some external outcome or even some like internal like behavioral outcome that conditions on the prediction.

00:20:08 Speaker_02
So if political candidates are going to decide how hard to campaign in a given state based on what the prediction for that state says, you might want to influence the price not for the sake of earning money in the prediction market, although that might happen too, but rather because you just want to place the prediction in a given position.

00:20:28 Speaker_02
Now, that's very hard to do, because you actually have to change beliefs from doing that. In the, I think it was the Obama versus McCain campaign, somebody tried to sink a bunch of money to move the McCain percentage.

00:20:38 Speaker_02
And then, you know, people who had estimates that the Obama probability was higher just sort of arbitraged that out over a period of an hour or two, right? Like, you know, markets work.

00:20:47 Speaker_02
If you see something that looks to you like a market anomaly, you buy or sell accordingly.
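
A toy simulation of that episode (the linear price-impact model and all numbers are invented purely for illustration): a manipulator's buy spikes the price, and traders whose beliefs haven't changed sell it back down.

```python
# Crude manipulation-and-arbitrage sketch with a made-up price-impact model.
belief = 0.60    # informed traders' probability estimate (unchanged throughout)
price = 0.60     # market starts at the informed consensus
impact = 0.0001  # price move per dollar traded -- an invented constant

price += impact * 2_000  # manipulator buys $2,000 worth
print(f"after the manipulative buy: {price:.2f}")  # 0.80

# Informed traders keep selling while the price sits above their belief.
while price > belief:
    price -= impact * 100  # each round of selling nudges the price down
print(f"after arbitrage: {price:.2f}")  # back to ~0.60
```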

00:20:51 Speaker_03
Yes, yes. I mean, we're all market purists here. So that seems like that's working.

00:20:55 Speaker_02
Yes, but if the market is thin or if the information signals are very dispersed, maybe you can convince people, right?

00:21:02 Speaker_02
If you have enough money to like swing the market in a very sharp way, especially if you're doing it through sybils, many identities, so it looks like a surge of people who have a given belief, you might actually change the beliefs of the market participants in a way that actually distorts the probability and could have various other impacts.

00:21:21 Speaker_02
And then the other thing is this idea that the oil markets are leaking information. We'll stick with that example. The oil markets leak information about potential conflict in the Middle East, right? That's a feature and a bug, right?

00:21:35 Speaker_02
The fact that it's an oil market that is informative about the Middle East, on the one hand, as Alex said, it means that the market is not optimized for specifically answering the question, what's going to happen in the Middle East?

00:21:46 Speaker_02
There's lots of other stuff that affects the oil market, like how popular electric vehicles are at that given moment in time. Right, so you have this very complicated signal extraction problem, right? You see a big spike in the oil price.

00:21:57 Speaker_02
Is it because there's a potential conflict coming in the Middle East? Or is it because there's just been like some new electric vehicle test that failed, and like somebody knows that, and so they know that oil is going to be more important next month?

00:22:09 Speaker_02
Whereas if you have a market that's just predicting, will there be a conflict in the Middle East? That's all it's predicting. But of course, that's now a zero sum market.

00:22:17 Speaker_02
It's sort of a harder market to participate in if you only have dispersed information, right?

00:22:22 Speaker_02
If you don't actually know whether there's a conflict in the Middle East forthcoming, but know that some things that are happening like sort of suggest that, for example, you saw an oil price change.

00:22:31 Speaker_02
You have to do a much more complicated inference, and you're taking a slightly, in some ways, riskier bet by participating in a prediction market where you're staking everything on this one outcome, rather than on something that's heavily correlated, right?

00:22:42 Speaker_02
Where there are many different things that could have related predictions, sort of related predictions could be mostly correct even if your main prediction is wrong. So the takeaway is prediction markets' narrowness is a feature and a bug.

00:22:54 Speaker_02
It's sort of dual to the sense in which ordinary markets' broadness is a feature and a bug, right? Because a prediction market is a narrow zero-sum contract on a specific event,

00:23:06 Speaker_02
Many people's information about that event is actually coming from all these correlates. It's not that they know specifically, like, is there a conflict coming in the Middle East? They see a lot of potential signals of it.

00:23:15 Speaker_02
And so, if you're buying and selling in a market that responds to those signals, that sort of, like, insures you a little bit, right?

00:23:23 Speaker_02
If you get the main estimate wrong, but all your signals were correct, you know, you're at less risk than if you go into a prediction market and had all the signals right, but the final estimate wrong, and then, you know, you're just betting on the wrong side of the event.

00:23:35 Speaker_01
I think what Scott said also has implications for when we'll have prediction markets in everything. I mean, if these markets are so great and they work so well at predicting things, you know, why don't we have more of them?

00:23:45 Speaker_01
And I think Scott was basically giving the answer there. This is how I would put it. You know, if you have the market for oil, then there are lots of people who are buying and selling oil who are not interested in what's going on in the Middle East.

00:23:58 Speaker_01
Okay.

00:23:58 Speaker_00
Yes.

00:23:59 Speaker_01
They're not trying to predict that, right? But it's precisely because you have lots of sort of organic demand and supply that this provides a subsidy to the sharks who go in there in order to make the price more accurate. Or take the example of wheat.

00:24:20 Speaker_01
There are lots of farmers who are buying and selling in the market for wheat just to insure themselves, just to hedge themselves.

00:24:29 Speaker_01
And it's because of that native organic demand that the market is thick enough that you then have all of the sharks, who are not themselves farmers, but they go in there and they use models and techniques and whatever to predict which way the market for wheat is going to go, and they make that market more accurate.

00:24:48 Speaker_01
Now, if you didn't have the organic demand, then you're going to have a market with just sharks in it, no farmers and just sharks. And who wants to be in a market where you're only with other sharks, right?

00:25:03 Speaker_01
If I know that the other guy is just trying to predict this one thing as much as I am trying to predict it, you know, I don't want to be in a market with Scott. He's just too smart, right?

00:25:11 Speaker_02
I would say the same thing about you.

00:25:15 Speaker_01
And that's why the market wouldn't work. That's why the market wouldn't work.

00:25:18 Speaker_01
So some of these markets, even though they might be forecasting something which is useful, there isn't enough organic demand, so you have to subsidize it from outside the market in order to get a useful prediction out of it, in order to get the sharks willing to go against one another to try and predict this thing.

00:25:36 Speaker_03
And that's why we don't have markets in everything yet, potentially. This is maybe jumping ahead a little bit, but I just have to ask at this point, I mean, Scott, you're like a market design expert.

00:25:45 Speaker_03
So on the market design front, what does that mean if there isn't organic demand? Is there a way for market designers to essentially create markets in situations where there isn't that kind of latent organic or existing thing to harness?

00:25:58 Speaker_03
Like, can you actually manufacture that market without distorting it and kind of create conditions that could design a market into place?

00:26:06 Speaker_02
That's a great question. I mean, there are two different ways to get at it. One of them is, which is sort of what the framing of the question is pointing at, is could you find a way to create latent demand?

00:26:16 Speaker_02
And Alex was saying you could subsidize it, right?

00:26:17 Speaker_02
You could basically somehow subsidize the experience of some people trying to predict this event, subsidize a bunch of college students developing forecasting models, so then they have a lower cost of entering the prediction market or something of the sort.

00:26:30 Speaker_02
Again, not advocating this specific policy.

00:26:32 Speaker_03
Right, although in Alex's example, that subsidy was not intentionally a subsidy. It's just a result of the behavior. Like, it wasn't like people are trying to subsidize. It was just subsidizing because of their natural behaviors.

00:26:43 Speaker_02
True. No, no, exactly. So like, we started this conversation with the recent presidential election and all of these other associated elections.

00:26:50 Speaker_02
Those have proven, at least in practice, to be much thicker markets because there are some people who seem just interested in betting on them, right? A lot of people have some amount of information, some amount of opinion.

00:27:00 Speaker_02
And so there's a little bit of that latent demand that sort of comes from people's general interest in the question.

00:27:05 Speaker_04
Yeah.

00:27:05 Speaker_02
One could try and create that for other contexts, right? You can try and like help people feel that something is interesting or feel that they have an opinion about it enough that they're willing to participate in a prediction market.

00:27:14 Speaker_02
The other thing you can do is you can use other types of information elicitation mechanisms. Prediction markets are one of many ways of doing incentivized information aggregation.

00:27:25 Speaker_02
And others are things like incentivized surveys or peer prediction mechanisms.

00:27:29 Speaker_02
There's a whole class of what are called peer prediction mechanisms, where what you're in effect doing is asking people what they believe about an outcome and what they think other people will believe about an outcome.

00:27:40 Speaker_02
And then you use sort of their beliefs about others as a way of cross-examining whether they were telling the truth. Because you survey a lot of people, you get sort of like your crowd volume, you sort of know the aggregate belief of the population.

00:27:53 Speaker_02
And you can check whether someone's own belief about the population is sort of the right mixture of that aggregate belief and their belief.

00:28:02 Speaker_02
So if you yourself think that Trump is more likely to win, then you yourself are more likely to believe that other people think Trump is likely to win, because the frame you have, your information sort of indicates that at least one more person, or at least one person in the market believes that.

00:28:18 Speaker_02
And so one can cross-examine your predictions with your estimate of what the population believes and what the population actually reveals that they believe. And then you can reward people based on how well they did.

00:28:30 Speaker_02
In effect, like, how good are you at estimating what everyone else thinks given what you think? And those sorts of mechanisms you can incentivize.

00:28:37 Speaker_02
You can pay people immediately, incidentally, unlike prediction markets where the event has to be realized, the payment's only realized at the end. Here, you're not paying people based on their accuracy about the event.

00:28:46 Speaker_02
You're paying them based on their accuracy about everyone's estimates. And so you can do that all at once, right? Collect all the estimates, pay people, they go home, you have your estimate.

00:28:56 Speaker_02
And these have been shown in practice to be very effective for small population or opinion estimates, things where there isn't a thick market and a very, very big public source of signal.
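
As one concrete instance from that class, here is a bare-bones sketch in the spirit of the "surprisingly popular" method of Prelec, Seung, and McCoy (the Nature 2017 paper linked in the resources); the survey data is invented:

```python
# Surprisingly-popular sketch: each respondent gives an answer AND a
# forecast of how others will answer; the chosen answer is the one that
# is more popular than the crowd predicted it would be.

responses = [  # (own answer, predicted fraction of others answering "yes")
    ("yes", 0.30), ("no", 0.40), ("no", 0.35), ("yes", 0.25),
    ("no", 0.45), ("yes", 0.20), ("no", 0.30), ("yes", 0.35),
]

actual_yes = sum(a == "yes" for a, _ in responses) / len(responses)
predicted_yes = sum(p for _, p in responses) / len(responses)

answer = "yes" if actual_yes > predicted_yes else "no"
print(f"actual yes: {actual_yes:.0%}, predicted yes: {predicted_yes:.0%}")
print(f"surprisingly popular answer: {answer}")
# Intuition: people who privately know "yes" still expect most others to
# say "no," so "yes" exceeding its own predicted share is a strong signal.
```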

00:29:06 Speaker_03
That really answers that question.

00:29:08 Speaker_03
And by the way, it brings up a very important point that we did not address in the recent example of the election, which is the French, quote, "whale," who won by using the neighbor poll, where, you know, people won't say what they themselves think, but when you ask them, like, who do you think your neighbors are going to vote for?

00:29:24 Speaker_03
It's kind of a way to indirectly reveal their own preferences. And that's the so-called neighbor poll. I don't know if that's a standard thing or that just came up in this election. It's the first time I heard of it, but.

00:29:33 Speaker_03
It's a great example of something I did study in grad school when I was doing ethnography work, which is never trust what people say they're going to do, but what they actually do. This goes to your economist world of revealed preferences.

00:29:44 Speaker_02
Absolutely.

00:29:44 Speaker_03
Right. Very similar. But anyway, in that case, that person polled his neighbors and then used that data essentially off chain to then go back onto the market, Polymarket in that case, to up his bet, and essentially won big as a result.

00:29:59 Speaker_03
So that would be an example of what you were mentioning. Although in that context, you were mentioning it in how can we address a case where there's a thin market, this is a case where that played out in a thick market of the election.

00:30:10 Speaker_02
Well, you might say that he was using this in the thin market of trying to understand his neighbors' sort of like local preferences and estimates, right?

00:30:16 Speaker_03
There you go, that's more precise, yeah.

00:30:18 Speaker_02
Although we actually don't know the details of how he produced these estimates. It doesn't sound like they were incentivized, so it's not exactly like what I was talking about with peer prediction.

00:30:25 Speaker_02
But you're right, it's the same core idea that like using people's beliefs about the distribution can be much more effective than using their personal beliefs a lot of the time.

00:30:35 Speaker_01
So I would underline two things there. One, yeah, the market is a way of bringing all of this dispersed information and creating an aggregation, but it's not the only way. That's kind of what Scott is saying. Right.

00:30:47 Speaker_01
And understanding this, it is one of the first information aggregation mechanisms which we have studied and understood reasonably well. But there are other ones.

00:30:58 Speaker_01
And so you can think about prediction markets as being one example of a class of mechanisms which take dispersed information and out of that pull some knowledge which none of the people in the market, none of the people you polled, might individually be aware of.

00:31:15 Speaker_01
And yet somehow it is in the air, as it were. That's fantastic. There are also other ways of subsidizing these markets, which is something that corporations may be very interested in doing because corporations are interested in forecasting the future.

00:31:35 Speaker_01
And some of them in the past have created their own internal prediction markets. So one famous example of this is Hewlett-Packard.

00:31:45 Speaker_01
They were interested in forecasting how many printers are going to be sold in the next quarter and the next two quarters, three quarters, four quarters and so forth.

00:31:55 Speaker_01
So they created a market where, if you correctly predicted how many printers would be sold in which time period, you could earn money, and they subsidized that market.

00:32:06 Speaker_01
So everybody going in, which was just HP employees, got like a hundred dollars to play with. So that's a way of trying to get more people involved and interested in playing on these markets to elicit this information.
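
HP's exact mechanism isn't detailed here, but one textbook way to subsidize a market like this is an automated market maker such as Robin Hanson's logarithmic market scoring rule (LMSR); Hanson's prediction-market work is linked in the resources. A minimal sketch with invented numbers:

```python
# LMSR sketch: the sponsor's bounded worst-case loss (B * ln(#outcomes))
# is the subsidy that guarantees traders always have a counterparty.
import math

B = 100.0  # liquidity parameter; larger B = deeper market, bigger subsidy

def cost(q):
    """LMSR cost function C(q) = B * ln(sum_i exp(q_i / B))."""
    return B * math.log(sum(math.exp(x / B) for x in q))

def price(q, i):
    """Instantaneous price of outcome i -- the market's probability estimate."""
    exps = [math.exp(x / B) for x in q]
    return exps[i] / sum(exps)

q = [0.0, 0.0]  # shares sold so far for "yes" and "no"
print(price(q, 0))                           # 0.5 before any trades
pay = cost([q[0] + 50, q[1]]) - cost(q)      # a trader buys 50 "yes" shares
q[0] += 50
print(round(pay, 2), round(price(q, 0), 3))  # pays ~28.11; price rises to ~0.622
# Worst case for the sponsor here is B * ln(2), about $69.31.
```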

00:32:21 Speaker_03
That example is actually really interesting to me because when I was at Xerox PARC, we talked about that. And one of the things that came up is it's a very useful mechanism to your point, Alex, for getting certain things right.

00:32:32 Speaker_03
But it is not a useful mechanism for actually figuring out the future in terms of what to invent, because it doesn't address the case of you don't know what you don't know. You only know what you know. And this came up just yesterday.

00:32:42 Speaker_03
Trump announced his candidate for attorney general. And one of the examples

00:32:47 Speaker_03
Someone cited on Twitter was that it's the first time they've seen a Polymarket contract resolve to zero for all potential outcomes, because Gaetz wasn't even listed among the 12 potential nominees in that range of possible outcomes.

00:32:58 Speaker_03
So that's an example in that case where you have to have the right information itself in that prediction market.

00:33:05 Speaker_03
And maybe you guys can explain that a little bit more really quickly, because I think that HP example is super interesting on multiple levels.

00:33:13 Speaker_01
Yeah. So these markets are good when you figure people have got some knowledge and it's hard to aggregate that knowledge. The other thing they're good at: you know, people have run these markets for predicting when a project will be complete.

00:33:25 Speaker_01
Right. And it's a classic case. When you ask people, they're going to be, oh, no problem. It'll be ready. But, you know, in five weeks, well, you know, whatever. Right. They're very optimistic.

00:33:37 Speaker_01
And yet they tell the boss it's going to be ready in five weeks. Well, they go back and tell their friends, oh, my God, it's delayed. We've got all these problems. But if you let people bid anonymously in these markets, then the truth comes out.

00:33:50 Speaker_01
So this is a way the corporate leaders can learn information that their employees know but are not willing to tell them. But to your larger point, yeah, I mean, nothing is more difficult to predict than the future.

00:34:10 Speaker_01
And, you know, Trump is a chaos agent, right? Whatever he's going to do, like, it is hard to predict. And I agree. I don't think anybody predicted Matt Gaetz.

00:34:20 Speaker_02
Well, and indeed, actually, so this sort of highlights, you know, we were talking about what prediction markets are good at versus where you might want to use other sorts of information elicitation mechanisms.

00:34:28 Speaker_02
The two examples that Alex gave of within-company prediction markets: predicting sales or sales growth, or something that's like, you know, a metric that many people in the firm are tracking and have different windows of information into; and predicting when a product is going to launch.

00:34:42 Speaker_02
Where, like, you know, you might have product managers who know something, you might have engineers who know there's a hidden bug that they haven't even told the product managers about yet. Again, it's like these are contexts where

00:34:51 Speaker_02
Many of the people in the company have some information that only they have. And that the aggregate of all that information is a pretty good prediction of the truth.

00:35:01 Speaker_02
Because the actual outcome is the aggregate of all those people's information directly, right? It's like how many sales calls are you making that are succeeding? Or how is the coding for this specific feature going?

00:35:12 Speaker_02
By contrast, you mentioned with Xerox PARC, trying to predict whether a new sort of totally imagined product is gonna succeed. Well, that's really, really hard. And it doesn't rely on information in particular that the company has, right?

00:35:25 Speaker_02
Like, yes, the company has some idea of what products people might buy, but you might be like, you know, AT&T and invent the first picture phone or something of the sort.

00:35:32 Speaker_02
And like, you thought that was a great idea, but you don't actually know until you put in the market and see whether people are like interested in using it.

00:35:39 Speaker_02
And so, even with the aggregate of all the information in the company, there's a product they went through with, right? They concluded it was a good idea based on all the signal that everyone in the company could see, and it still flopped.

00:35:49 Speaker_02
the total information in the company wasn't high enough to actually like provide the right answer, even when aggregated. Right.

00:35:55 Speaker_02
But I do think there is a sort of subtle distinction between wisdom of a random crowd and wisdom of an informed crowd, right? Like, again, with our Hewlett Packard example,

00:36:07 Speaker_02
Hewlett-Packard sort of knows that if you're trying to figure out now whether a product is going to launch on time, a random person on the street has no information about this.

00:36:15 Speaker_02
You don't want to pull together a focus group of miscellaneous Hewlett-Packard customers and ask them, when do you think we're going to finish designing our new printers?

00:36:22 Speaker_02
I don't know, you released a printer last year, probably next year maybe, who knows? There is this question, are you learning things from the right crowd? You know, you could have the best incentivized information elicitation mechanism on the planet.

00:36:36 Speaker_02
And if you only survey people who don't know anything at all about the topic, you incentivize them, you'll learn what they believe truthfully, but you won't be able to do anything with it.

00:36:46 Speaker_03
Yeah. And then back to the future, like the whole idea of the best way to, you know, predict the future is to invent it.

00:36:51 Speaker_03
Like that goes just like Jobs and, you know, the phone: you can ask a million people, will they ever use a touch phone?

00:36:57 Speaker_03
People's behaviors can also evolve and change in ways that they themselves are not aware of, which is what that example shows.

00:37:02 Speaker_01
Yeah. A prediction market, it's like a candle in a dark room, right? I mean, it helps us see a little bit, but there are still areas where you can't see very far.

00:37:10 Speaker_03
Great. I'm going to ask a couple of quick follow up questions from you guys so far. So just to be super clear. So thin versus thick, you guys are talking about the depth of the market, like in terms of the number of participants.

00:37:20 Speaker_03
Thin is too few, thick is many. Is that correct? Or is there a better, more precise way of defining that?

00:37:26 Speaker_01
Yeah, so I mean, in the prediction market, a thin market is few people betting small amounts. And in fact, one of the problems we've had is that prediction markets are mostly illegal in the United States.

00:37:39 Speaker_01
So the biggest one in this past election was Polymarket, and it was illegal for US citizens to bet on that market. That's slowly changing, but we do have this kind of ridiculous situation, I think it's ridiculous anyway, that we have,

00:37:55 Speaker_01
Huge markets in sports betting, gambling, right? Huge, huge markets. And we allow that. And yet here we have a kind of gambling market, a prediction market, where the output is actually really quite useful.

00:38:08 Speaker_01
It's quite socially valuable and we don't allow it. So making these markets legal and open to more U.S. citizens would thicken those markets, make them more accurate, attract more dispersed information, and I think would be really quite useful.

00:38:25 Speaker_03
But to your bigger point, Alex, you're basically arguing that they can be a public good in the right context, informationally.

00:38:31 Speaker_01
Absolutely.

00:38:32 Speaker_03
And interestingly, if you think about some of these prediction markets that are getting served notices and whatnot, and we don't know why to be clear, but it's interesting because in some cases people might argue some people trying to get information is a manipulation of the market.

00:38:46 Speaker_03
But in fact, to your guys' entire point throughout this discussion, it's actually a way to provide more input of information into the market itself, too. So that's kind of an interesting point on the public interest side.

00:38:56 Speaker_01
Let me give you another example on this public good nature of these prediction markets. One of the most interesting, fascinating uses of these prediction markets is to predict which scientific papers will replicate. Oh, yeah.

00:39:09 Speaker_01
You know, we have this big replication crisis in the sciences, psychology and other fields as well of, you know, lots of research and it doesn't replicate.

00:39:18 Speaker_01
Well, what some people have done is, it's expensive to replicate a paper, but one thing people have done is to have a betting market, a prediction market in which papers will replicate, and that turns out to be very accurate, and then you only have to replicate a few of those papers.

00:39:35 Speaker_01
papers in order to have the markets pay off. And for the rest of them, you use the prediction market result as a pretty good estimate of whether it will replicate or not.

00:39:46 Speaker_01
So this is a way of improving science, making science better and quicker and more accurate.

00:39:51 Speaker_03
I love that. I ran a lot of op-eds when I was at Wired on open access and science and kind of like evolving, you know, peer review and replication crisis and the whole category and theme.

00:40:01 Speaker_03
So it's very exciting to me to hear that that's something that we can do to address that.

00:40:05 Speaker_03
It leads to a quick follow-up question, which actually happens to be on my list of follow-up questions for you in the lightning round of this, which is when you guys were talking earlier about this, just kind of tapping into this intuitive information,

00:40:17 Speaker_03
dispersed across many people into these prediction markets. One of the first questions that came to mind is, do you need domain experts or does that actually distort a market?

00:40:25 Speaker_03
And this actually comes up as a perfect segue from your point, Alex, that example of scientific papers, because that's a case where one would imagine that people in that industry or that domain or just other scientists who have the experience of analyzing research would be the best at predicting things.

00:40:40 Speaker_03
But is that necessarily true? And do we have any researcher data into domain expertise in these markets?

00:40:46 Speaker_02
I don't know the answer to that last part. Let me talk about the first part, because it also speaks to your thick versus thin. Great. Yeah, good, good.

00:40:52 Speaker_02
So when Alex said a thin market is small number of participants betting small dollar amounts, why is that a thin market? It's because the total information is small in two ways. One is that there are few people bringing their own individual estimates.

00:41:05 Speaker_02
You just have like a small number of people saying things. And second, Because they're betting small dollar amounts, it's sort of a signal that their information is not very strong signal or confident, at least relative to what it could be otherwise.

00:41:19 Speaker_02
If you are staking a very large amount of money on this, the market inference is that you have done the research, and indeed, you have the incentive to do the research. Why is the inference that you've done the research?

00:41:29 Speaker_02
It's because if you're staking a large amount of money, you should have done the research. Because otherwise, you're putting money at risk without full information.

00:41:37 Speaker_03
Like the French whale who did the neighbor poll to find out.

00:41:39 Speaker_02
Right, exactly. And one can argue about how good or bad that new poll was or whatever, like whether he should have trusted his information that much.

00:41:46 Speaker_02
But it's unambiguous that part of his confidence, and he said this, part of the confidence that he had to make that huge bet was that he thought he had a signal that was accurate and the market had missed.

00:41:57 Speaker_02
And so like thickness and thinness, like the proxy for it, the way we think about measuring it is how many people and how much are they staking? Like how much value are they putting behind their beliefs?

00:42:08 Speaker_02
Thickness and thinness is really in terms of the information. It's like, do we have a lot of different signals of information that are strong coming together and mixing to determine the price?

00:42:19 Speaker_02
Or is it really just like a very small number of pretty uninformed signals? That's this tension when Alex is saying,

00:42:26 Speaker_02
It's a problem that the biggest prediction market for the US election was not actually in the US and was not legal to participate in from the US. Well, yeah, a lot of the information, a lot of the real signal is in the United States.

00:42:37 Speaker_02
And so without those people being able to participate in the market, you miss at least sort of a lot of that to a first order, right? People internationally will be figuring out ways to aggregate and sort and try and use it.

00:42:47 Speaker_02
But you miss a lot of the people who have that information already at their fingertips. And so you ask about domain expertise, it's not exactly domain expertise versus not, but rather information richness.

00:43:00 Speaker_02
And for example, in predicting scientific replication success or failure, domain experts are especially well equipped to do that.

00:43:09 Speaker_02
Like a random person chosen off the street, you can tell them a scientific study and maybe they'll have an instinct one way or another whether they think they believe it.

00:43:16 Speaker_02
But a lot of the detail of figuring out whether something will replicate comes from knowing how to read the statistical analyses, trying to understand the setup of the experiment and the surrounding literature.

00:43:26 Speaker_02
And so there, domain experts have a particularly large amount of information.

00:43:30 Speaker_02
If you think about something like a political betting market, maybe domain experts who are focused in the world of politics and polls and so forth have like a big slice of information, they do.

00:43:39 Speaker_02
But there also might be other categories of people, like people who know that their neighborhood has like recently switched its political affiliation in a way that isn't yet captured in the national polls.

00:43:49 Speaker_02
Or our French whale who went and ran his own sort of poll using a custom chosen method. And so the context of the question the prediction market is trying to evaluate matters, and this is actually true for any information elicitation problem.

00:44:04 Speaker_02
This isn't just about prediction markets, right? The context of the type of information you're trying to learn tells you something about who has the most information to bring to the market, and thus who it's important to have there.

00:44:14 Speaker_01
Yeah, I agree with everything Scott said. One of the interesting things is you often don't know who the domain expert is until after the market has been run. So, of course, it's absolutely true that, you know, if you're going to be

00:44:29 Speaker_01
predicting political events, you want people who are interested in politics. If you're predicting scientific articles, people need to be able to read stats and things like that.

00:44:38 Speaker_01
But one of the guys in the scientific replication paper on markets, he made like $10,000, was just one of these super obsessive guys, right, who just really got into it and, you know, was running all kinds of regressions and was doing all kinds of things and stuff like that.

00:44:53 Speaker_01
And so when you say domain expert, I think one of the virtues of these prediction markets is that they're open to everyone and they don't try and say, oh no, only the experts, you know, get to have a voice, right?

00:45:08 Speaker_01
It's more only ex-post do we learn, hey, who really made some money in these markets? Absolutely.

00:45:15 Speaker_03
I'm so glad I asked you guys about the definition of thick versus thin, because you guys gave me so much interesting nuance to that.

00:45:21 Speaker_03
Because the people I think following this podcast definitely understood what you meant about thin versus thick early on, but you guys just took it to a new level.

00:45:28 Speaker_01
If you're so smart, why aren't you rich? Hey, I am rich.

00:45:31 Speaker_02
Yes, exactly. I made some money in this market. Well, and that again, that's about the incentives. We talk about like the dollar value staked, like the amount of money someone is staking on their prediction.

00:45:41 Speaker_02
In equilibrium, it should be a measure of their confidence, how confident they are in their own beliefs and how much effort they've put in to learn the information to be precise.

00:45:50 Speaker_02
And so exactly as Alex says, one person who might be really good at predicting a scientific replication failure is someone who works in that exact same area.

00:45:58 Speaker_02
Another one, it might be someone who just like enjoys doing this for fun and like has never had a real incentive to triple down on doing it, but now suddenly they can.

00:46:06 Speaker_03
Right, right. And by the way, Scott, does it have to be dollar and price incentives? I'm asking you this question specifically because you and I have done a lot of pieces in the past on reputation systems.

00:46:17 Speaker_03
And I almost wonder if the skin in the game can just be karma points and not even any money. Because I think from a pride perspective.

00:46:24 Speaker_02
So like Alex mentioned subsidy, right? Like one way that you can subsidize, I think he said Hewlett Packard subsidized by giving all their employees $100 and saying, spend it all on this market.

00:46:35 Speaker_02
You can subsidize people with cash, but you can also subsidize them with tokens or reputation or various other sources of value.

00:46:45 Speaker_02
And one of the advantages of using tokens is that that way you can deliver a subsidy that's sort of only useful in this market, right? If it's like a personal non-transferable token, but I give you a bucket of them,

00:46:57 Speaker_02
And the only thing you can do with it is use it to enter predictions and you just choose which prediction markets you choose to enter into and how much you spend in each one, right? And then you earn payoffs.

00:47:06 Speaker_02
Payoffs are also measured in tokens and maybe downstream you might get prizes for having large numbers of tokens or something.

00:47:10 Speaker_02
You get to join the elite predictor force or even just serves as a measurement of your reputation, how good you are at making predictions, which maybe you leverage into something else, right?

00:47:18 Speaker_02
Like people who win data science contests leverage that into data science jobs. Maybe you like leverage this into a forecasting job or something.

00:47:26 Speaker_02
All of that, so long as you find people who are willing to be incentivized by those types of outcomes, you can subsidize their participation in a unit that locks them into the market, right?

00:47:37 Speaker_02
That their one thing to do with it is to participate in the market and reinforces more and more participation among the people who are most successful and most engaged.
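To make the token-subsidy idea concrete, here is a minimal sketch in Python of an internal market that pays out in non-transferable prediction credits. All class names and numbers are hypothetical; a real system would need order books or an automated market maker. This only illustrates the subsidize-stake-resolve loop and how credit balances double as a reputation track record.

```python
# A minimal sketch (not any production system) of subsidizing participants
# with non-transferable "prediction credits" that can only be spent inside
# the market, with payoffs doubling as a reputation score.

class CreditPredictionMarket:
    def __init__(self, grant: float = 100.0):
        self.grant = grant           # subsidy each participant receives
        self.credits = {}            # user -> non-transferable credit balance
        self.stakes = []             # (user, outcome, amount) open positions

    def enroll(self, user: str):
        # The subsidy is delivered as credits usable only in this market.
        self.credits[user] = self.credits.get(user, 0.0) + self.grant

    def stake(self, user: str, outcome: str, amount: float):
        assert self.credits.get(user, 0.0) >= amount, "insufficient credits"
        self.credits[user] -= amount
        self.stakes.append((user, outcome, amount))

    def resolve(self, true_outcome: str):
        # Losing stakes fund winning stakes pro rata; credit balances then
        # serve as a public track record of forecasting skill.
        pool = sum(a for _, _, a in self.stakes)
        winners = [(u, a) for u, o, a in self.stakes if o == true_outcome]
        winning_total = sum(a for _, a in winners) or 1.0
        for user, amount in winners:
            self.credits[user] += pool * amount / winning_total
        self.stakes.clear()

market = CreditPredictionMarket()
for u in ("alice", "bob"):
    market.enroll(u)
market.stake("alice", "ship_on_time", 80.0)
market.stake("bob", "ship_late", 20.0)
market.resolve("ship_on_time")
print(market.credits)  # alice's balance now reflects a correct, confident call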

00:47:45 Speaker_03
That's super interesting, and I'm going to push back on you on that, actually, because I actually wonder if it necessarily needs to be crypto-based, and you can just do any kind of — Oh, yeah, no, it's any, like, internal marker.

00:47:54 Speaker_02
But for all the reasons we normally know, like, it's much better to do this in an open protocol form, because, for example, if the token is eventually going to be leveraged for reputation, you want anyone to be able to verify that you have it.

00:48:04 Speaker_03
Right. Audit it, see it, hence blockchains. Got it. Great. And we'll talk a little bit more about that. Just more lightning questions. Go for it. So where do super forecasters, like in Philip Tetlock's work, come into all of this?

00:48:14 Speaker_03
Like, are they especially good at prediction markets? Because that's a case where they're like generally better than the general public at, sort of, quote, forecasting and making predictions.

00:48:22 Speaker_03
Is there a place for them in this world or are they kind of the outliers here or does it not even matter here?

00:48:28 Speaker_01
I think there's two things. One, I think the basic lesson of Tetlock's work is most people, even the ones who are in the forecasting business, are terrible forecasters, right?

00:48:40 Speaker_01
I mean, he first started tracking so-called political experts and seeing what their forecasts were, you know, were they right 10 years later, or five years later? And they were completely wrong.

00:48:51 Speaker_01
So he then shifted into looking for, is anybody ever right? Are there super forecasters? And yes, he found that some people, you know, not typically the ones in the public eye, but some people can definitely forecast better than others.

00:49:06 Speaker_01
One of the things those people can do is then participate in these markets. And by their participation, they push the market price closer to their predicted probabilities. So forecasters have an incentive to be in these markets.

00:49:23 Speaker_01
And by being in these markets, they make the markets more accurate. Now, is the market always going to be more accurate than the super forecaster? No.

00:49:31 Speaker_01
I mean, Warren Buffett, you know, he has made a lot of money, even though markets are basically efficient. But Warren Buffett has shown that he, in many cases, is able to predict better than the market price itself and more power to him.

00:49:45 Speaker_01
And so there are going to be some super forecasters, but they're hard to find. They're rare. And a virtue of the price is that everyone can see it, right? It's public.

00:49:56 Speaker_03
So this actually gets at a bigger, maybe more obvious point to you guys, but a recurring theme I'm hearing is it's not that the prediction market is only taking in like guesses and people's intuitions and bets and opinions and any information it has, but theoretically done well, it's taking in all information.

00:50:15 Speaker_03
It could be super forecasters contributing to it. It could be Nate Silver taking his 80,000 simulations and feeding his inputs and adding that signal into it. It could be people who are pollsters putting their data and predictions.

00:50:28 Speaker_03
Basically, it doesn't even matter how people get at their intuition. All that matters is that they're pricing that information into that market, essentially.

00:50:35 Speaker_01
Do you know the famous Wall Street Bets everything-is-priced-in post?

00:50:39 Speaker_03
No, I don't actually.

00:50:40 Speaker_01
I don't know this one either. Let me read it just a little bit. It's a fantastic post. It's like five years ago. It's called Everything is Priced In and it says, the answer is yes, it's priced in. Think Amazon will beat the next earnings?

00:50:52 Speaker_01
That's already been priced in. You work at the drive-thru for Mickey D's and found out that the burgers are made of human meat? That's priced in. You think insiders don't already know that?

00:51:02 Speaker_01
The market is an all-powerful, all-encompassing being that knows the very inner workings of your subconscious.

00:51:08 Speaker_01
Your very existence was priced in decades ago when the market was valuing Standard Oil's expected future earnings based on population growth.

00:51:19 Speaker_03
That is so great. Okay, you have to send me that link, Alex, and I'll put it in the show notes. So you're basically agreeing that markets do price everything in.

00:51:28 Speaker_01
Yeah, I mean, that's an exaggeration. But yeah, I mean, anything is fair game.

00:51:33 Speaker_02
I want to push back a little bit here. Because anything is fair game, but you have to wonder who's going to show up to those markets and where their signals are coming from, right?

00:51:43 Speaker_02
Like if you're a super forecaster, maybe you work for like a super secretive hedge fund. And the last thing you want to do is directly leak what it is you believe.

00:51:52 Speaker_04
Yeah, yeah.

00:51:53 Speaker_02
And in fact, you would prefer that the market be confused by this public signal. We talked about manipulation.

00:51:58 Speaker_02
You might show up and tank the prediction in one direction or the other just to take advantage of that in the financial market off to the side.

00:52:06 Speaker_02
And so while in principle these things can be very comprehensive, you still have to think about who participates in which market where.

00:52:12 Speaker_02
And just like we see in other markets, where like some people trade in dark pools, some people trade in public exchanges, and that selection sort of affects what information the price is really aggregating where. That's fantastic, yeah.

00:52:24 Speaker_02
The other thing about public forecasters, super or otherwise, is that they're very salient to the average person. And so another thing we see in prediction markets is herd behavior, again, just like we see in other types of markets.

00:52:40 Speaker_02
If a lot of people are suddenly buying oil futures, does that mean that they all have knowledge that there's gonna be a conflict in the Middle East?

00:52:49 Speaker_02
Or does it mean they saw other people buying oil futures and are like, oh gosh, I'd better do this too. Or did they see one analyst report and they all saw the same analyst report?

00:52:59 Speaker_02
And as a result, they all went and bought oil futures because they believe the report. Or worse, did they see one analyst report that said, you know, like oil is going to be expensive next quarter.

00:53:10 Speaker_02
And they went and bought oil futures not because they believe the report. Maybe they even have information that it's not true, but they know everyone else is going to see the report. And so there will be purchasing pressure.

00:53:21 Speaker_04
Yes.

00:53:22 Speaker_02
There's this very famous paper by Morris and Shin in the American Economic Review called Social Value of Public Information. Okay, I'm going to put that in the show notes. It talks about information hurting, right?

00:53:31 Speaker_02
The idea is basically, if you have a market where everyone has private signals, and then there are some very salient public signals, and people have to coordinate, right? Are you going to run on a bank or not?

00:53:41 Speaker_02
Or what do you think is the probability of this thing happening? People might ignore their private signals if the public signal is strong enough that they think other people are going to follow it. Yes.

00:53:51 Speaker_02
And so when a very prominent forecaster makes a prediction, like as the sort of polls were coming in in the week leading up to the election, a new major poll would drop and then the prediction markets would judder around and sort of veer off at least briefly in the direction of that poll.

00:54:07 Speaker_02
And that's this like public information effect, right? This is salient. You expect a lot of market movement based on this information. And so the market actually moves even more.

00:54:16 Speaker_02
It incorporates not just the information, but also the fact that other people are incorporating the information too.

00:54:21 Speaker_03
And are there any market design implications for how to avoid that happening? Like if you're setting up the conditions of a perfect, great prediction market?

00:54:30 Speaker_02
Oh, gosh, that's a great question. I mean, first of all, it's not completely avoidable. You can't have a market where a sufficiently strong public signal doesn't generate some herd behavior, right? At that level, it's unavoidable.

00:54:43 Speaker_02
But you can try and do things to dampen the effect. Off the top of my head, I can think of two. There are probably others. One is you could basically like slow trading a little bit, right?

00:54:52 Speaker_02
You can sort of like limit people's abilities to enter or exit positions very, very quickly. So it sort of forces people to like average.

00:54:59 Speaker_03
Well, it's also kind of an example of slowing contagion, right? Yes. An infection spreading very fast.

00:55:05 Speaker_01
Totally.

00:55:05 Speaker_03
Kind of like the herding becoming viral.

00:55:07 Speaker_01
Yeah, contagion is a very good example of what Scott's talking about. You know, in stock markets, we have circuit breakers. Circuit breakers.

00:55:13 Speaker_03
Yes, yes, yes, exactly. Circuit breakers. There we go. So that's one of the ways.
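As a rough illustration of the circuit-breaker idea, here is a hypothetical sketch: trading is halted for a cooldown whenever the price moves more than a threshold within a short window, damping herd-driven cascades. The thresholds and class names are invented for illustration, not drawn from any particular exchange's rules.

```python
# A minimal, illustrative circuit breaker: halt trading when the price moves
# too far too fast. Parameters are made-up examples.

class CircuitBreaker:
    def __init__(self, max_move=0.10, window_s=60, halt_s=300):
        self.max_move = max_move      # largest allowed move within the window
        self.window_s = window_s      # lookback window in seconds
        self.halt_s = halt_s          # how long a halt lasts
        self.history = []             # recent (timestamp, price) observations
        self.halted_until = 0.0

    def allow_trade(self, price, now):
        if now < self.halted_until:
            return False              # market is in a halt
        # Keep only observations inside the lookback window.
        self.history = [(t, p) for t, p in self.history if now - t <= self.window_s]
        for _, past in self.history:
            if abs(price - past) / past > self.max_move:
                self.halted_until = now + self.halt_s
                return False          # trip the breaker
        self.history.append((now, price))
        return True

cb = CircuitBreaker()
print(cb.allow_trade(0.50, now=0))   # True: first observation
print(cb.allow_trade(0.62, now=10))  # False: a >10% jump in 10s trips the halt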

00:55:17 Speaker_02
Another thing you could do is try and refine your market contracts in a way that orthogonalizes, by which I mean it sort of extracts out the signal that is independent of that signal, right?

00:55:31 Speaker_02
So a prediction market contract somehow incorporates the information sort of like adjusted for whatever Nate Silver claims.

00:55:37 Speaker_01
Let me give you an example, because my colleague, Robin Hanson, is one of the founders of prediction markets. Right. Robin is usually many steps ahead. He has a very clever proposal for this, which I don't think anyone has ever implemented.

00:55:49 Speaker_01
But he says you have a prediction market, and then you have a second prediction market on whether that prediction market will revert in the future to something else.

00:55:57 Speaker_03
Yes. Oh, so genius.

00:55:59 Speaker_02
Yes, exactly. That's the way you do it. That's the way you orthogonalize. Perfect. That's way better than my example.
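A hypothetical concretization of Hanson's two-market idea: the sketch below defines one possible payoff rule for the second market, paying the "will revert" side if the main market retraces more than half of a price jump by some horizon. This payoff rule is invented for illustration, not Hanson's exact specification; the point is that the second market's price estimates how much of the first market's move is transient herding.

```python
# An illustrative payoff rule for a "will this market revert?" side market.
# Binary contract: "will revert" pays 1 if, by the horizon, the price has
# retraced more than half of the jump; otherwise "won't revert" pays 1.

def reversion_payout(price_before_jump: float,
                     price_after_jump: float,
                     price_at_horizon: float) -> float:
    jump = price_after_jump - price_before_jump
    retrace = price_after_jump - price_at_horizon
    reverted = jump != 0 and (retrace / jump) > 0.5
    return 1.0 if reverted else 0.0

# A salient poll knocks the market from 0.55 to 0.70; a week later it is 0.58,
# so the jump mostly unwound and the reversion contract pays out.
print(reversion_payout(0.55, 0.70, 0.58))  # 1.0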

00:56:04 Speaker_03
That's so great because I was actually going to guess something like combining the reputation thing. And this is essentially a way of combining reputation by having a parallel market that verifies and validates.

00:56:13 Speaker_02
Exactly. Totally.

00:56:14 Speaker_03
That's so interesting. By the way, that's not futarchy, right? His new thing?

00:56:18 Speaker_01
One of the criticisms of futarchy was precisely the point that Scott made. And then Robin's response to that is, well, the solution to a problem of futarchy is more futarchy.

00:56:29 Speaker_03
Ah, OK. And by the way, just quickly define futarchy for me.

00:56:31 Speaker_01
Yeah. So Robin Hanson's idea is let's take these decision markets. and apply them to government. Let's create a new form of government. You know, there aren't many new forms of government in the world. You have democracy, monarchy, you know.

00:56:46 Speaker_01
Futarchy is a new form of government. And the way it would work is that instead of having politicians decide what policies to have, politicians and voters would just decide on what our metric for success is going to be.

00:57:03 Speaker_01
So it might be something like GDP would be one metric of success, but you might want to adjust it for inequality or for environmental issues. So you're going to create some net statistic GDP plus.

00:57:17 Speaker_01
Then anytime you have a question, should we pass this health care policy? How should we change immigration rules? Should we have this new immigration rule? You have a market on whether GDP plus would go up or down if we pass this new law.

00:57:34 Speaker_01
And then you just choose which one. If GDP plus goes up, you say, OK, we're going to do that. And so people would just submit new ideas to the futarchy. Here's a proposal for immigration. Here's a proposal for health care. Here's one for science policy.

00:57:50 Speaker_01
And then you just run a prediction market. Would GDP plus go up with that or would it go down? And then you choose whichever comes out. So Robin expands this idea of decision markets to an entirely new form of government.
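A minimal sketch of the decision rule at the core of futarchy as described here: run a conditional market on the welfare metric for each proposal and adopt whichever conditional price is highest. The prices below are made-up numbers for illustration.

```python
# An illustrative decision-market rule: adopt the proposal whose conditional
# market forecasts the highest GDP-plus.

def choose_policy(conditional_prices):
    """conditional_prices maps proposal -> the market's expected GDP-plus
    conditional on that proposal (or the status quo) being adopted. In
    Hanson's design, trades in the markets for proposals that aren't chosen
    are typically unwound, so only the chosen market ever settles."""
    return max(conditional_prices, key=conditional_prices.get)

markets = {
    "status_quo":        102.1,  # expected GDP-plus with no change
    "immigration_bill":  103.4,  # expected GDP-plus if the bill passes
    "healthcare_reform": 101.7,
}
print(choose_policy(markets))  # -> "immigration_bill"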

00:58:05 Speaker_03
That's fascinating. And it relates so much to the work of one of our partners and collaborators, Andrew Hall at Stanford. He studies a lot of on-chain governance and kind of liquid democracies and more. That's very interesting.

00:58:16 Speaker_03
Thank you for explaining that, Alex, because I've actually never fully gotten what futarchy is. People toss it around and I'm like, but actually, what is it? I still don't get it. So that was very helpful.

00:58:24 Speaker_02
It also sounds like it could be the subject of like a Borges short story or something.

00:58:27 Speaker_03
Oh my God, yes, yes, absolutely. Oh gosh. What was the last one that we put in the last reading list, Scott, for the Founders Summit? Was it the Labyrinths short story collection?

00:58:35 Speaker_02
I think it was Labyrinths, right?

00:58:36 Speaker_03
Yeah, yeah, I think so. That's so funny. So a few more questions and I want to switch to crypto. So since we're talking actually about like kind of market theories and practice in this recent segment,

00:58:48 Speaker_03
Alex, did you want to say a little bit more about efficient markets?

00:58:50 Speaker_01
Sure, sure.

00:58:51 Speaker_01
So another fascinating example of how markets could leak information, which then could be used for other things, is if you've ever seen the movie Trading Places, you probably know that the main determinant of orange juice futures is what the weather is going to be in Florida.

00:59:09 Speaker_01
Of course. So Richard Roll, who is a finance economist, had this interesting question. Well, can we use orange juice futures to predict the weather?

00:59:18 Speaker_01
And what he found is that there was information in those market prices which could be used to improve weather forecasts in Florida. Kind of an amazing example, because no one, again, knew this.

00:59:30 Speaker_01
No one was even predicting this, but this was kind of a leakage of this amazing information. Fantastic.

00:59:37 Speaker_01
Another fascinating one is Richard Feynman famously demonstrated that it was the O-rings which were responsible for the Challenger disaster by dipping the O-ring in ice water at the congressional committee.

00:59:52 Speaker_01
However, economists went back and when they looked at the prices of the firms which were supplying inputs into NASA and to the Challenger, they found that the stock price of Morton Thiokol, which was the firm which produced the O-rings, that dropped much more quickly and a much larger amount than any of the other firms.

01:00:15 Speaker_01
So the stock market had already predicted and factored in that it was probably the O-rings which were the cause of the Challenger disaster even before Richard Feynman had figured this out.

01:00:27 Speaker_03
And by the way, it's another one that ties back to your HP example in a way, because if I recall, part of the backstory with the Challenger was also that it was a case of death by PowerPoint, because of the way they were communicating information internally and that the format and the structure kind of constrained how that information was presented.

01:00:43 Speaker_03
I think Tufte gives a famous case study of this in one of his many books.

01:00:47 Speaker_01
So another way of putting that actually, which is kind of disturbing, but I think you're right, is that the people on the ground, they knew this wasn't a good idea. They knew it was not a good idea to launch the Challenger on such a cold day.

01:01:05 Speaker_01
And if there had been a prediction market of what's going to happen or should we do this, then I think it is quite likely that that dispersed information, which no one was willing to tell their bosses,

01:01:17 Speaker_01
You know, nobody was willing to stand up and say, we should not do this. Instead, it got buried in PowerPoints. That dispersed information might have found its way to the top if there had been a prediction market on, is this launch going to go well?

01:01:30 Speaker_03
Exactly. Or said another way, per the earlier definition of a prediction market, it would have been another way for management to elicit better information from their employees, using just that as a mechanism for communication, essentially. Exactly.

01:01:42 Speaker_03
Yeah, the HP thing really kind of struck me because I just remembered that as like a communication no-no for how information is presented. And that's actually a good segue, by the way, to the crypto section.

01:01:53 Speaker_03
Because I want to ask you guys, and this is going to help me break some, you know, I love doing a good taxonomy of definitions on any podcast, because one of the things we talk about in crypto is the ethos of decentralization.

01:02:03 Speaker_03
Sometimes the information is public on a public blockchain. It's often open source, distributed. It can be real time.

01:02:11 Speaker_03
I don't know if it's necessarily accurate information, but the information can be corrected very quickly, which then makes it more likely to be accurate because of the speed of revision, which, by the way, we also saw in the recent election, I think, compared to media.

01:02:23 Speaker_03
One of the observations people made is that media didn't move fast enough to correct, or even want to because of biases, their polls and predictions, whereas the prediction markets were faster and self-correcting.

01:02:33 Speaker_03
So one question I have for you guys to kind of kick off this section about the underlying technology and how it works is first, let's tease apart all those words. I just gave you like a big buzzword bingo soup of words.

01:02:44 Speaker_03
What are the words that actually matter when it comes to this context of eliciting better information and aggregating that information in a market? Like what are the key qualities that we should start with?

01:02:54 Speaker_03
And then we can talk about the technologies underlying that.

01:02:57 Speaker_01
One way of answering that question might be: the largest prediction market was the Polymarket crypto prediction market. And the question is, is crypto a necessary part of this? And I think the answer is probably no.

01:03:12 Speaker_01
I think why was the crypto market particularly successful? Well, because it was open to anybody in the world barring US citizens, right? Yes. And the market, because of that, was much thicker than the other markets.

01:03:23 Speaker_01
So there are some prediction markets which limit people's bets to $1,000. And the crypto whale was betting millions of dollars on these markets. So that's why the crypto market I think as a kind of regulatory arbitrage became very important.

01:03:39 Speaker_01
And, you know, now the FBI is kind of looking at this. The French are looking at this. Was it legal? Is it violating some laws? But I think the crypto part of it was not actually necessary.

01:03:51 Speaker_03
Yeah, I'm glad you pointed that out to Alex, because I think people have been kind of hyping and over inflating the crypto part of it. And I actually agree with you completely.

01:03:59 Speaker_03
Like, I don't know if crypto was at the heart of the way that that market works, except in those qualities you mentioned. Scott, any thoughts on that point?

01:04:07 Speaker_02
So I totally agree with all of that. One thing that crypto does very well on top of being open and interoperable and transparent is it enables commitment, right? You can write a piece of software that is going to run in the exact specified way.

01:04:21 Speaker_02
It can be audited by all of the users, and then they can be convinced that it's going to run correctly. And some ways we do information elicitation have challenges with commitment.

01:04:31 Speaker_02
If you're going to survey people and pay them six months from now based on whether their survey estimate was accurate or not, they might be worried that you're not going to show up and pay them.

01:04:40 Speaker_02
And so long as whatever the information is can also exist on chain, right, the resolution of the uncertainty can somehow be visible on chain either through an oracle or if it were like an on-chain function to begin with, like just what is the price of this asset or something.

01:04:54 Speaker_02
You can commit in a way that you can't necessarily or you can't do easily without complicated contracts. You can just commit that it's going to run as expected.
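Here is an illustrative (off-chain, plain Python) sketch of what that commitment property buys: the resolution rule is fixed when the market is created, and nothing in the contract allows amending it afterward, so participants can audit exactly how settlement will work before they buy in. This is a toy model, not an actual on-chain contract.

```python
# A minimal sketch of "commitment" in the smart-contract sense: the resolution
# rule is frozen at creation time and the contract offers no method to change
# it later.

class CommittedMarket:
    def __init__(self, question: str, resolution_rule):
        self.question = question
        self._rule = resolution_rule   # fixed forever at deployment
        self.bets = []                 # (user, predicted_outcome, amount)

    def bet(self, user, outcome, amount):
        self.bets.append((user, outcome, amount))

    def settle(self, observed_event) -> dict:
        # The ONLY path to payouts runs through the committed rule; there is
        # deliberately no setter to amend terms after people have bought in.
        outcome = self._rule(observed_event)
        pool = sum(a for _, _, a in self.bets)
        winners = [(u, a) for u, o, a in self.bets if o == outcome]
        total = sum(a for _, a in winners) or 1.0
        return {u: pool * a / total for u, a in winners}

m = CommittedMarket("Does candidate A win?", lambda ev: ev["winner"] == "A")
m.bet("alice", True, 60)
m.bet("bob", False, 40)
print(m.settle({"winner": "A"}))  # {'alice': 100.0}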

01:05:04 Speaker_02
In order for that to work, your information elicitation mechanism has to be fairly robustly committed and often also decentralized.

01:05:11 Speaker_02
Like Polymarket, by contrast, famously changed the terms of a couple of their resolutions because something happened that didn't quite make sense in the context of the way they'd said they were going to evaluate the outcome.

01:05:23 Speaker_02
And so they post hoc, this is after people have already bought in under the original terms of resolution, changed the terms of resolution.

01:05:31 Speaker_02
And so that's like a lack of commitment that's actually hard for markets to form when people don't trust that they're gonna be resolved as described.

01:05:39 Speaker_03
Right. I mean, isn't that the most basic rule of markets? Like you can't just suddenly change the rules under you. Isn't that why we always talk about why we don't trust governments that don't enforce property rights and whatnot?

01:05:48 Speaker_03
Like you just can't mess around.

01:05:49 Speaker_02
No, you're exactly right.

01:05:50 Speaker_02
And the same way that blockchains create a form of property right that you can trust even without sort of a very trustworthy entity having established it because, you know, the property right itself lives in this immutable ledger.

01:06:04 Speaker_02
Same thing here, you can, at least in principle, set up resolution contracts that are trustable and immutable and therefore expand the scope of the set of marketplaces we can configure, right?

01:06:15 Speaker_02
It's not just the set of tools we had when you have to be able to trust the market organizer, but actually now this sort of commitment enables you to go further.

01:06:23 Speaker_03
Just to break this down a little bit more, because I think you said some really important things in there, and I want to pause and make sure we flesh it out for our audience.

01:06:30 Speaker_03
So first of all, based on what Alex said earlier in the case of polymarket, one of the key points was public and the information being out there. That's one. I mentioned earlier the example of it being updated quickly. as compared to media, at least.

01:06:43 Speaker_03
You just mentioned the importance of credible commitments. And we've often described blockchains as a technology, that blockchains are computers that make commitments. So that's a third or fourth.

01:06:54 Speaker_03
I don't know the number count, but I'll just keep listing the features. And then you also mentioned potentially decentralized, but I couldn't tell if it really needed to be decentralized or not.

01:07:02 Speaker_03
Can you give me more bottom line on decentralization where you stand there?

01:07:05 Speaker_02
Yeah, it's a great question. And actually, maybe we should have started here. The necessity of all of these different features moves around with the type of market.

01:07:12 Speaker_02
The more complicated your information elicitation mechanism is, and this is especially important for the context where sort of pure information markets don't work, the more complicated your information elicitation mechanism is, the more likely it is that you want something that looks like crypto rails.

01:07:26 Speaker_03
Ah, that's actually good to know. Okay.

01:07:28 Speaker_02
So like if Hewlett-Packard is running an internal prediction market, first of all, it doesn't have to be open to the entire world because you're only trying to learn information from your employees, right?

01:07:37 Speaker_02
So openness is important within the firm, right? Maybe there's someone in the mailroom who knows something that you don't know they know. And so you actually want that market of people to be able to participate.

01:07:45 Speaker_02
But Hewlett-Packard does not necessarily care what a person on the street thinks about printer sales, and certainly doesn't need to build the architecture to bring in like random people's estimates of printer sales, right?

01:07:55 Speaker_02
And so, you know, you need some amount of transparency because you need people to be able to see what the current price is and like see whether they agree or disagree and they can sort of move the price around.

01:08:03 Speaker_02
But in other types of elicitation mechanisms, maybe you don't need transparency, right? If you're just going to pay someone based on the accuracy of their forecast down the line, you don't need them to be able to see what else is happening.

01:08:14 Speaker_02
You just need them to believe that you have committed and that the final accuracy is going to be transparent.

01:08:19 Speaker_00
Right.

01:08:19 Speaker_02
That they can verify that you didn't just, you know, stiff them by, like, the thing they predicted happened exactly, but then you said, no, it didn't, and then you don't pay them.

01:08:27 Speaker_02
And so transparency is important only there with respect to the resolution, not with respect to the interim states.

01:08:33 Speaker_04
Yes, yes.

01:08:33 Speaker_02
But by contrast, like, commitment is incredibly essential and needs to be believed or else the user won't even participate.

01:08:40 Speaker_03
Right. By the way, great that you gave the example of the transparency.

01:08:43 Speaker_03
And I'll let you finish your example in a second, but I'm just jumping in because it reminds me of how we talk about the things that can be done on-chain and off-chain when it comes to scaling blockchains and like provers versus verifiers when it comes to zero knowledge or whatnot.

01:08:55 Speaker_03
And it's really interesting you pointed that out because

01:08:57 Speaker_03
I want to make sure the people listening who are builders catch that, because that means you can do certain things on chain in order to meet whatever your goals of the design are, and then put other things off chain.

01:09:07 Speaker_03
Like you don't have to have this purist view of how truth must be transparent. It's very smart to point that out. Anyway, keep going with your other example.

01:09:15 Speaker_02
Yeah, and I completely agree, by the way. I mean, like one of the things when I talk to teams, I'm constantly trying to get them to think about which features of the marketplace are the most essential for market function.

01:09:29 Speaker_02
And it varies by market context. And even if eventually you're planning on having all of these features, right? As you're deciding, which thing do we build first? Or as we're progressively decentralizing, what do we prioritize?

01:09:40 Speaker_02
You actually have to understand the market context you're working in.

01:09:43 Speaker_03
That's so smart because it's basically another way to hit product market fit too, because then you're not like overbuilding and over featuring something.

01:09:49 Speaker_02
Anyway, yeah, but keep going with your other side of that. Totally. So to get to the question of like, when does decentralization matter? Decentralization has lots of different components that might make it matter.

01:10:00 Speaker_02
One of them is just the ability to like make these commitments even more enforceable. Like it makes it possible to be confident in function and liveness and so forth.

01:10:09 Speaker_02
All of those things are important for a market because if your prediction market goes down the night before the election, you know, first of all, you lose the information signal from it.

01:10:18 Speaker_02
Second of all, you lose the ability for people to participate in the market, which would sort of adjust the price and move the signal around.

01:10:23 Speaker_02
Similarly, if you lose the ability to like resolve the truth, then maybe you can't finally resolve the market. And you have all of these bets that are sitting in limbo because the market doesn't know what happened.

01:10:34 Speaker_02
The key is everyone is bringing in their own information, but in order to finally resolve the contract and determine who gets the payout for the bet, you have to have the chain have a way to know what actually happened.

01:10:44 Speaker_02
Another place decentralization is sometimes very important is in that resolution function. Like, you know, if the market is on chain, you somehow have to get what actually happened onto the chain.

01:10:53 Speaker_02
And maybe the biggest bettor happens to also control the one resolution function. And so they can now sort of rob the prediction market by just lying about the resolution of the event.

01:11:04 Speaker_02
They tell the system like, you know, candidate A won when actually candidate B won. And then by the time people realize that this wasn't correct, they might not have a way to fix it. But even if so, that person might just be gone.

01:11:15 Speaker_02
So decentralization and resolution, just like we think about decentralized Oracle sort of mechanisms, this is basically an Oracle, right? You have to bring off-chain information on-chain in a lot of these contexts to resolve the contract.

01:11:26 Speaker_02
Or if you're doing this in a centralized platform, the users have to trust the centralized platform to resolve the contract correctly. By contrast, if the information does not need to be brought in through an oracle, right?

01:11:36 Speaker_02
If it already lives in a system that's verified and the resolution is provably gonna do what it's claimed, then you don't actually care about decentralization, say, in the discovery of the resolution.

01:11:45 Speaker_02
You're actually just reading information and your commitment contract takes care of everything else.

01:11:50 Speaker_03
And just really quick, Scott, you've said Oracle a few times. Can you actually properly define what you mean by Oracle in this context? I know we talk about a lot in crypto.

01:11:57 Speaker_02
Yeah, and indeed, Oracle is not a completely uniformly well-defined term. In this context, I'm talking about Oracles as like a truthful source of information about what the actual resolution of the event was.

01:12:09 Speaker_02
So if Trump won the election, the Oracle tells us Trump won the election. And if Harris won the election, the Oracle tells us Harris won the election. And the reason we're using that is because

01:12:18 Speaker_02
The election is, of course, not being conducted on a blockchain, or at least maybe in the future, we can dream. But in 2024, the US presidential election was very much not conducted on a blockchain.

01:12:26 Speaker_02
And so if you're going to have an on-chain prediction market, you somehow need the chain to be able to learn the information of what actually happened in the off-chain election. And so the oracle is like basically the source of that information.

01:12:40 Speaker_01
The key with the oracle, as Scott says, is to take what's off chain and bring it on chain. I mean, the thing about off chain is that people can look at the New York Times, right?

01:12:51 Speaker_01
And so the New York Times is often considered an oracle, in that you go by whatever's printed in the New York Times. That would be a way of resolving a lot of bets. Like did the New York Times report that Trump won?

01:13:02 Speaker_01
That might be one way of resolving these bets. Yeah. Great. But the key problem is to bring that off-chain knowledge on-chain in a way in which the information is not distorted in the transmission.

01:13:17 Speaker_01
And the reason why you're worried about that transmission being distorted is precisely because it's the revelation where all the money is, right? So there are big incentives to distort the transmission of that information.

01:13:30 Speaker_01
In fact, a lot of the crypto hacks which have happened have happened because people found a way of distorting the oracle and then using that on the crypto market. So, you know, the market resolved in one way. And if you can

01:13:45 Speaker_01
change the Oracle, then you can make a huge amount of profit out of doing that. So there's a big incentive to mess with the Oracle. That's why it's really difficult.
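One standard way to make an oracle harder to distort, sketched hypothetically below, is to aggregate reports from many independent reporters and take the median, penalizing outliers, so no single reporter can move the resolution. The tolerance parameter and reporter names are invented for illustration.

```python
# A minimal sketch of a median-of-reporters oracle: no single reporter can
# move the resolution, and outliers are flagged for stake slashing.

from statistics import median

def resolve(reports, tolerance=0.05):
    """reports maps reporter -> claimed value (e.g. 1.0 = 'candidate A won').
    Returns the median answer plus the reporters whose stake would be slashed
    for deviating from it by more than `tolerance`."""
    answer = median(reports.values())
    slashed = [r for r, v in reports.items() if abs(v - answer) > tolerance]
    return answer, slashed

reports = {"node1": 1.0, "node2": 1.0, "node3": 1.0, "node4": 0.0}  # one liar
print(resolve(reports))  # (1.0, ['node4']): the lone distorter is outvoted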

01:13:54 Speaker_02
And we can stick with the New York Times example, right? A lot of people are going to make their morning trading decisions based on what they see in the New York Times and on the Bloomberg Terminal and so forth.

01:14:03 Speaker_02
And so if you could, in a coordinated way, feed the wrong information to that, it would change many, many people's behavior and you could trade against that because you knew that they were going to get the wrong information.

01:14:13 Speaker_01
Exactly. So this can happen in the off-chain world. And indeed, we saw there was one tweet, right, that the SEC is going to approve, you know, Bitcoin ETF contracts. It looked like, you know, it was an official ruling and it turned out to be a hack.

01:14:27 Speaker_01
Turned out to be correct, but that wasn't revealed until days later. But yeah, so if you can distort an oracle, you can make money.

01:14:34 Speaker_02
Totally. Or, I mean, if we're talking about the New York Times, it would be remiss for us to not have the like, Dewey defeats Truman, right? You know, famous front page, like huge text headline that just turns out to be inaccurate.

01:14:46 Speaker_03
Right. That's a famous case of what we did in media at Wired, too. It's called the pre-write, and then you accidentally print it sooner and you get it wrong.

01:14:52 Speaker_03
There actually have been cases where someone's obituary is written months or years in advance, and it goes out and says they're dead.

01:14:58 Speaker_03
Okay, you conflated two things earlier, and I agree they're generally connected and similar, but there are some nuances between decentralized and distributed.

01:15:06 Speaker_03
Like distributed can just be redundant systems that have multiple nodes, like the example you were giving of the system going down the night before.

01:15:13 Speaker_03
That's a case where being distributed matters, but it doesn't have to be decentralized necessarily. Correct. Like i.e. there could be distributed nodes managed by a centralized entity, for instance.

01:15:21 Speaker_02
Absolutely.

01:15:21 Speaker_03
I just want to make sure we're very clear about the distinction between decentralized and distributed as well. Totally.

01:15:25 Speaker_02
Whereas by contrast with the oracles, for example, you might really care about being decentralized, right? You might care that no individual entity can sort of unilaterally change how the contracts resolve. Exactly. Just one other point.

01:15:37 Speaker_02
Another advantage of doing all this stuff on blockchains is that it's composable. It's not that we're just like intrinsically interested in some of these questions, like maybe so, right?

01:15:46 Speaker_02
Some people are just like, you know, intellectually curious, like who's going to win the presidency in a month. But rather, lots of other stuff depends on it, right?

01:15:53 Speaker_02
If you're making decisions about which supplies to order in advance, you need to have beliefs about the likelihood that tariffs are imposed under the next administration.

01:16:02 Speaker_02
And so having these things live on open composable architectures is useful because they can be wrapped with other information and other processes.

01:16:11 Speaker_02
You can tie your corporate operations in a very direct way into these sort of information aggregation mechanism signals.

01:16:18 Speaker_03
Yeah, to put it even a more basic way just because I don't know if everyone necessarily knows composable in the way that we talk about it.

01:16:24 Speaker_03
It's like the Lego building blocks, the markets on chain or the information on chain is a platform that people can build around, build with, bring in pieces of information, combine it with other tools, etc.

01:16:36 Speaker_03
And you can create like different things, and that's composability. And I'll put a link in the show notes to a post explaining composability as well. And then the other quick one is open source.

01:16:45 Speaker_03
Does the code itself have to be open source, auditable, public good?

01:16:52 Speaker_02
Again, it depends how much you trust the market creator.

01:16:55 Speaker_04
Yeah.

01:16:55 Speaker_02
And again, this is true across the board for applications that can be run on blockchains or not. You're always making tradeoffs between trust through reputational incentives and institutions and trust through code.

01:17:08 Speaker_02
For example, in actual commodities markets, there's a lot of trust through institutions and legal contracts, but there's an architecture in place to establish that trust, via the institutions and the enforceability of the contracts, so those contracts are real enough that people believe in them enough to pay money for them and to have all of these market features.

01:17:29 Speaker_02
Blockchains enable these sorts of trusted activities in lots of contexts where the institutions are not strong enough or present enough to do it for you.

01:17:39 Speaker_02
If you're having like $5 bets, like small-money bets on some incredibly minor question like, will the horse that wins the Kentucky Derby have a prime number of letters in its name, or something like this, right?

01:17:51 Speaker_02
You're not going to have necessarily an institution that is even able to evaluate and like set up that contract in a way that is worth doing at the amount of money it's going to raise.

01:18:00 Speaker_01
I like how Scott changes the Kentucky Derby into something he would be interested in. Oh, well, if it involved prime numbers. Horses, forget horses, but prime numbers.

01:18:10 Speaker_04
That's so funny.

01:18:14 Speaker_02
I love how well you know me. I will have you know, the Kentucky Derby is also interesting because it has all sorts of cool statistical questions going on. And cool hats. Oh, and fascinating hats. Absolutely fascinating hats. Pun definitely intended.

01:18:26 Speaker_02
I love it. So it's like substituting code as the source of trust for these very unusual, or sort of micro, or international cases where there's not a clear jurisdiction, right?

01:18:36 Speaker_02
All of these contexts sort of push you more into security via code rather than security via institution.

01:18:41 Speaker_01
Let me add one more point on the blockchain. So I think generally speaking, as I said, the blockchain is not necessary. However, as we're looking towards the future, it may become more and more useful to have these, you know, very decentralized rails.

01:18:56 Speaker_01
So Vitalik Buterin recently wrote a post on InfoFinance talking about prediction markets.

01:19:02 Speaker_03
And he credited you at the top as one of the people who reviewed it. But yeah, keep going.

01:19:07 Speaker_01
Exactly. And so one of the interesting points which he made is that AIs may become very prominent predictors. They may become very prominent participants in these prediction markets.

01:19:19 Speaker_01
Because if you can have a lot of AIs trying to predict things, well, that lowers the cost tremendously, and that opens up the space of possibilities of what you can use prediction markets for.

01:19:31 Speaker_01
And so the blockchain, you know, is very good for, you know, nobody knows you're an AI on the blockchain.

01:19:37 Speaker_03
Right, right, right.

01:19:38 Speaker_01
And so if we're going to have a lot of AIs interacting and acting as participants in markets, then the blockchain is very good for that.

01:19:46 Speaker_03
That's absolutely right.

01:19:47 Speaker_03
And we have a lot of content that's already on this topic, which actually gets at the intersection of crypto and AI and where they're a match made in heaven, in fact, not only because of AI's centralizing tendencies and crypto's decentralizing tendencies, but because of concepts like proof of personhood, being able to, in privacy-preserving ways, even on a public blockchain, find ways of

01:20:10 Speaker_03
adding attribution, and there's just so much more that you can do with crypto. I agree, Alex, and I'm so glad you brought that up.

01:20:15 Speaker_03
It's funny because when you were saying earlier, in the early definition of a prediction market as this way to kind of elicit information that's dispersed across many people, I immediately went to like, oh, that's the original AGI.

01:20:28 Speaker_03
If you think about artificial intelligence, let's just talk about human intelligence at scale. Like, that's what a prediction market can be. I do want to make sure we also touch on other applications a little bit on the future.

01:20:38 Speaker_03
One quick thing, though, before we do that. So now we've summarized some of the key features. We've talked about the election. We've talked about some of the underlying market foundations and some of the nuances.

01:20:47 Speaker_03
We've talked about what does and doesn't make prediction markets work. And also mentioned earlier that they're part of a class of mechanisms that can aggregate information.

01:20:56 Speaker_03
So I want to really quickly, before we talk about applications in the future, near future,

01:21:01 Speaker_03
I want to quickly summarize what are some of those other mechanisms that could get at this kind of information aggregation that aren't necessarily prediction markets.

01:21:09 Speaker_02
Awesome. So first of all, again, just to think about what is this class of information aggregation mechanisms, and Alex defined it earlier.

01:21:17 Speaker_02
These are mechanisms that bring together lots of dispersed information to produce an aggregate statistic or set of statistics that combine the information of many different sources. Ideally, that aggregate is informative.

01:21:30 Speaker_02
Now there are lots of ways to do that, right? Like some of the simplest ones we actually talked about earlier are just to like ask people for their predictions and later pay them based on whether they're correct, right?

01:21:39 Speaker_02
And you can do that with random people, wisdom of the crowd style, or you could do that with experts, right? And so like very simple types of information aggregation backends just incentivize people to tell you what they know.

01:21:51 Speaker_02
Or even just go and survey them, right? Surveying people like in an unincentivized context, but where people have no incentive to lie and just like have an opinion, right?

01:21:59 Speaker_02
They don't have to do any research or like invest any effort to know their version of the answer. You just run a survey.

01:22:04 Speaker_02
But then, you know, sort of there's this whole menagerie maybe of incentivized elicitation mechanisms that are designed around different elicitation challenges. So I mentioned earlier peer prediction mechanisms.

01:22:16 Speaker_02
These are the mechanisms where you ask people for their beliefs and their beliefs about other people's beliefs.

01:22:22 Speaker_02
And then you use people's estimate of the population beliefs to infer like whether they were lying to you about what they believe and or like how informed they were in aggregate.

01:22:31 Speaker_02
So you can use that to figure out where the person fits in the distribution. And peer prediction is like an incentivized version of that, right?

01:22:36 Speaker_02
So you're gonna actually like pay people based on how accurate they are, but you're not paying them based on how accurate they are about what actually happens in the future.

01:22:44 Speaker_02
Rather, you're paying them based on how accurate they are about the population estimate.

01:22:49 Speaker_04
Right.

01:22:49 Speaker_02
And so that enables you to pay people up front immediately. These are used for like, you know, subjective information or sort of like information that's dispersed among small populations.

01:22:57 Speaker_02
Maybe it's not big enough to have a thick prediction market, but people are informed enough that if you can directly incentivize them to tell you the truth, then you can actually like aggregate the information usefully.
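A simplified illustration of the peer-prediction flavor described here, not any specific published mechanism: each respondent reports an answer plus a forecast of how peers will answer, and is paid immediately using a proper (quadratic) score against the realized distribution of everyone else's answers.

```python
# A minimal peer-prediction-style sketch: pay people for how well their
# forecast of the population matches the realized distribution of peer
# answers, which allows immediate payment with no ground truth needed.

def quadratic_score(forecast: float, realized: float) -> float:
    # Proper scoring: truthfully reporting your belief maximizes expected pay.
    return 1.0 - (forecast - realized) ** 2

def pay_participants(responses):
    """responses: list of (answer, forecast_of_share_answering_yes)."""
    payments = []
    for i, (_, forecast) in enumerate(responses):
        others = [ans for j, (ans, _) in enumerate(responses) if j != i]
        realized_share = sum(others) / len(others)  # share of 'yes' among peers
        payments.append(quadratic_score(forecast, realized_share))
    return payments

# Three respondents: (their yes/no answer, their predicted 'yes' share)
print(pay_participants([(1, 0.6), (1, 0.7), (0, 0.9)]))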

01:23:08 Speaker_02
A couple of my colleagues at HBS, Reshmaan Hussam, Natalia Rigol, and Ben Roth,

01:23:12 Speaker_02
have this beautiful paper where they use these peer prediction mechanisms in the field in developing country context where they ask people who in their community is likely to be the most successful micro entrepreneur.

01:23:23 Speaker_02
And then they allocate you know sort of funding according to these predictions. And it turns out that like the predictions are actually quite accurate. So like the incentivized peer prediction mechanism

01:23:32 Speaker_02
sort of produces answers that line up with, like, who actually ends up being successful in these businesses down the line in a way that is more effective, say, than just asking people and telling them, oh, we're going to allocate the money according to whatever you said, because then people will lie and say, oh, my neighbor or my friend is, like, you know, the best.

01:23:47 Speaker_03
I'll put that paper in the show notes, too.

01:23:49 Speaker_02
Yeah, it's a great paper. Super fun to read. Very readable, too.

01:23:52 Speaker_01
So one way in which the wisdom of the crowds doesn't work, of course, is when the crowd thinks they know the answer to a problem, but they actually don't. Oh, okay. Of course. Yeah.

01:24:03 Speaker_01
So there's this great paper by Prelec, Seung, and McCoy, and they give the example of, suppose you ask people, what's the capital of Pennsylvania? And most people will think, oh, well, it's probably Philadelphia, right?

01:24:16 Speaker_01
It's the biggest city, popular city, you know, American heritage, Liberty Bell, all that kind of stuff. But it actually is the wrong answer. So if you go just by the wisdom of the crowds, you're going to get Philadelphia, and that's wrong.

01:24:28 Speaker_01
The correct answer is actually Harrisburg, which most people don't know. However, a small minority of people do know the correct answer. So, how do you elicit this?

01:24:39 Speaker_01
So, their mechanism for doing this is what they call the surprisingly popular mechanism. And what you do is what Scott said: you ask people not only what do they think is the correct answer, but what do they think other people will say.

01:24:54 Speaker_01
And most people, of course, will think, well, I think the correct answer is Philadelphia. Other people will say Philadelphia. But then you're going to see a bump, right, of Harrisburg. It's going to be very surprising.

01:25:04 Speaker_01
There's going to be a substantial number of people who will say Harrisburg. And that will be quite different than what people expect. And if you choose that, the authors show that this can improve on the wisdom of the crowds.

01:25:19 Speaker_01
So the surprisingly popular answer, the answer which a minority chooses, in contrast to the majority, that can actually get you more information out.

01:25:28 Speaker_01
So depending upon the question, there are these clever ways of pulling this inchoate information out of the crowd and eliciting the truth, even when most people in the crowd don't know the truth.
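A minimal sketch of the surprisingly popular rule, with made-up numbers mimicking the Pennsylvania example: collect both votes and predicted vote shares, then select the answer whose actual share most exceeds its predicted share.

```python
# The "surprisingly popular" rule from Prelec, Seung, and McCoy: pick the
# answer whose actual vote share most exceeds its predicted vote share.

from collections import Counter

def surprisingly_popular(votes, predictions):
    """votes: list of answers; predictions: list of dicts, each person's
    predicted share for every answer. Returns the most 'surprising' answer."""
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    predicted = {a: sum(p.get(a, 0.0) for p in predictions) / n for a in actual}
    # The truth tends to be the answer that beats the crowd's expectation of it.
    return max(actual, key=lambda a: actual[a] - predicted[a])

votes = ["Philadelphia"] * 65 + ["Harrisburg"] * 35
# Nearly everyone, including those who know the truth, expects most people
# to say Philadelphia:
predictions = [{"Philadelphia": 0.85, "Harrisburg": 0.15}] * 100
print(surprisingly_popular(votes, predictions))  # -> "Harrisburg"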

01:25:42 Speaker_03
That's fantastic. I'm obviously going to include all these things we're referencing in our show notes, but that one is really interesting. Right. That's wild.

01:25:50 Speaker_02
And then maybe one other piece in the menagerie, of course, that listeners to this podcast will be very familiar with are simple auctions, right? Auctions are information aggregation mechanisms too.

01:26:00 Speaker_02
We talk about price discovery in an ordinary, very liquid market as being an information aggregation source. But some markets aren't big and liquid all the time. They don't have lots of flow transactions. Maybe it's a super rare piece of art.

01:26:13 Speaker_02
But an auction is still exactly useful for figuring out what the art is worth in the eyes of the market. And you can often discover things, right?

01:26:22 Speaker_02
Like there's some artist that was not popular to the best of your knowledge and then they have a piece with like a major sale and people's estimates of the values of all of their other works change accordingly because of the information that's been revealed about people's change in taste or whatever from this one sale.

01:26:36 Speaker_02
While we're listing things for the show notes, there's an incredible book.

01:26:39 Speaker_02
Oh, actually, I think this is my very first a16z crypto book list contribution, called Auctions: The Social Construction of Value by Charles Smith, which talks about auctions from a sociological perspective as a way of establishing an understanding of value in a bunch of different contexts.

01:26:55 Speaker_03
That's great. And by the way, I do want to plug the episode you, me, and Tim Roughgarden did where we literally dug into auction design for web3 for hours.

01:27:03 Speaker_02
That was so much fun.

01:27:04 Speaker_02
So as we've been arcing through these different types of mechanisms, it's a really good reminder that the type of question you're asking and the type of market participants you have, as we've been saying, shape your decisions about how to structure your market mechanism.

01:27:18 Speaker_02
It also shapes your decisions about what type of market mechanism to use, right?

01:27:22 Speaker_02
Like if you think that the population is not super informed on average, but like informed at the second order level, then this mechanism Alex was describing is like perfect. Exactly.

01:27:32 Speaker_02
Because the information's there, it's just not immediately apparent.

01:27:36 Speaker_03
Right. I love what you guys are talking about, and we can now segue into a quick discussion of some applications and the future, and then we can wrap up.

01:27:43 Speaker_03
We've been talking about implications for design throughout this podcast, but I think it is very interesting because you've been saying throughout, both of you, that it really depends on the context and your goals, and then you can design accordingly.

01:27:53 Speaker_03
And that's actually what incentive mechanism design is all about, as I've learned from you and Tim Roughgarden and seen over and over and over again. But two quick things, just lightning round style that I want to make sure I touch on.

01:28:03 Speaker_03
One, multiple times you both have alluded to this payout feedback loop. I'm inferring from what you've said that the payouts have to be quick, that you get an almost instant feedback loop on your outcomes.

01:28:14 Speaker_03
Because you gave an example earlier where if it's delayed by two weeks or so, it may be less effective.

01:28:19 Speaker_02
Is that necessarily true? It depends on trust and attention, right? Some people have said that one of their concerns about prediction markets is that people like betting on sports because it's happening in real time.

01:28:30 Speaker_02
You know the answer within a couple of hours or in the case of a horse race within minutes. Whereas these prediction markets often take months to resolve the final answer or the time of resolution might not even be known, right?

01:28:41 Speaker_02
It might be, sort of, who will be appointed to this position. So there's a possibility that speed is relevant for who chooses to participate in some contexts, and whether they find it fun. The other context we're talking about is when time matters for trust.

01:28:53 Speaker_02
If you're in the developing world trying to figure out how to allocate grants, people might not trust or even just have the infrastructure support to participate in a mechanism where they're going to be paid six months out based on the resolution of some confusing outcome.

01:29:06 Speaker_02
Whereas if you can pay them today, they'll participate today. Hence why they experimented with peer prediction mechanisms in that context in the first place.

01:29:13 Speaker_02
It was sort of a setting where you could, in principle, pay people based on the outcome, like how successful their neighbor was at being an entrepreneur with whatever grant they'd received.

01:29:22 Speaker_02
But a lot of complexity goes into actually doing that in practice because you have to track down the people again and all of that.

01:29:29 Speaker_03
Ah, yeah.

01:29:29 Speaker_03
One other quick buildery thing that came up, and it again probably seems so obvious to you guys: prediction markets and such systems seem to work best when there is a discrete event, like an election or something, to be resolved.

01:29:42 Speaker_03
It probably wouldn't work for some ongoing, loosely defined, non-discrete event, or would it?

01:29:47 Speaker_02
So the prediction market mechanism, sort of like the canonical prediction market as we've described it, is a mechanism where you're buying like an asset that has a payout as a function of a discrete event.

01:30:01 Speaker_02
But that is, of course, not even the average case of markets, right? Like, you know, when you're buying oil futures or something, most of the transactions in many of these markets are actually sort of in the interim.

01:30:13 Speaker_02
It's based on changes in people's estimates.

01:30:15 Speaker_02
And so if you have a market where it's possible to sort of continually update and trade as estimates change, then you can still gather a lot of information even if the value attained is in a flow or in stages or something of the sort.

01:30:29 Speaker_02
It doesn't have to be sort of a single cutoff date.

01:30:31 Speaker_01
I think you can design them in different ways. They do have to resolve at a point in time, but the way that they resolve could be based upon a stock price or something like that.

01:30:41 Speaker_02
Yeah. And you can have like dividends or something, right? Yeah. You can have things that pay out over time based on sort of interim steps.

01:30:47 Speaker_02
Like, you know, lots of things have continuous payouts based on like the growth of a company or something of the sort. And so you could imagine like prediction securities that are kind of like that.

01:30:55 Speaker_03
I.E. the stock market.

01:30:57 Speaker_02
I.E. exactly. I.E. the stock market.

01:30:58 Speaker_01
Right. The HP example I gave earlier divided the time into two-month periods, right? So is it May to June, or is it July to August? Is it September to October?

01:31:08 Speaker_01
So, you know, you can always take a continuous event and chunk it into five or six discrete periods.
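As a rough illustration of that chunking, here's a tiny sketch in Python; the bucket boundaries and the $1 binary payout are made-up assumptions for illustration, not HP's actual contract design.

```python
import datetime as dt

# "When does the event land?" chunked into two-month buckets, each its
# own binary contract paying $1 if the event resolves inside it.
BUCKETS = [
    ("May-Jun", dt.date(2025, 5, 1), dt.date(2025, 7, 1)),
    ("Jul-Aug", dt.date(2025, 7, 1), dt.date(2025, 9, 1)),
    ("Sep-Oct", dt.date(2025, 9, 1), dt.date(2025, 11, 1)),
]

def settle(event_date):
    """Pay $1 on the one bucket contract containing the event date."""
    return {name: (1.0 if start <= event_date < end else 0.0)
            for name, start, end in BUCKETS}

print(settle(dt.date(2025, 8, 15)))  # only the Jul-Aug contract pays
```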

01:31:16 Speaker_03
Yeah, yeah. Even if somewhat arbitrary, that makes so much sense.

01:31:19 Speaker_01
So, so far these prediction markets have been used just for what we've been saying, for predicting something, but you can also create, and here I'm going to riff off Robin Hanson again, my colleague on these questions, and he says we can also create these conditional markets.

01:31:36 Speaker_01
So, the question would be something like, as I said earlier with futarchy, what would happen to GDP if we put together this science policy? Now, we might not want to jump all the way from democracy into futarchy in one go.

01:31:51 Speaker_01
We're probably not ready for that. We're not ready for the full Hanson.

01:31:54 Speaker_02
Not quite ready for primetime, I think.

01:31:56 Speaker_01
Yeah. But here's a fascinating idea of Robin's, which I think we are ready for and which we should use. And that is: what would happen if we fired the CEO? This is a huge question that companies want the answer to.

01:32:12 Speaker_01
You know, we saw a few years ago, it was kind of remarkable when Steve Ballmer left Microsoft and the stock price went way up, you know, suggesting that the market thought that Ballmer was not a great CEO.

01:32:26 Speaker_01
Or we just saw it, you know, with Brian Niccol, who moved to Starbucks from Chipotle. He'd been extremely successful at Chipotle.

01:32:34 Speaker_01
On the day that Starbucks announced that they were hiring Brian Niccol as CEO, the price of Starbucks jumped up. Why, however, do we need to wait? How about creating a continuous market, which says:

01:32:51 Speaker_01
At any given time, would the price of Starbucks be higher if they fired the CEO? And so you can create these decision markets, prediction markets.

01:33:01 Speaker_01
You create a prediction market on: would the stock price be higher if we kept the same CEO, or would the stock price be higher if we fired the CEO? Now, that's an incredibly useful piece of information.

01:33:14 Speaker_01
You know, for companies, billions of dollars every single day ride on exactly this question. And that's a question which I think decision markets, prediction markets, would be really good at answering. We already have the stock market.

01:33:29 Speaker_01
People are already investing billions of dollars in exactly this question. And we can make it more precise and more detailed and more usable.
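For builders, here's a minimal sketch of how such a conditional decision market might settle, in the spirit of Hanson's proposal. The void-and-refund rule for the unrealized condition is the standard decision-market idea; the function name, the implied 50-cent contract price, and the 2x payout are illustrative assumptions.

```python
def settle_conditional(trades, ceo_fired, stock_up):
    """Settle binary conditional contracts: 'stock will be up a year
    from now, GIVEN the board fires the CEO' and the mirror-image
    'GIVEN the board keeps the CEO'.

    If a contract's condition never occurs, its trades are voided and
    stakes refunded, so each market's price reads as P(stock up | action).

    trades: (condition, side, stake) tuples, condition in {"fire", "keep"},
            side True for YES. Assumes YES/NO both traded at 50 cents,
            so the correct side doubles its stake.
    """
    realized = "fire" if ceo_fired else "keep"
    payouts = []
    for condition, side, stake in trades:
        if condition != realized:
            payouts.append(stake)        # condition voided: full refund
        elif side == stock_up:
            payouts.append(2 * stake)    # correct side wins the pot
        else:
            payouts.append(0.0)          # wrong side loses its stake
    return payouts

# Board keeps the CEO; the stock ends the year down.
print(settle_conditional(
    [("fire", True, 10), ("keep", True, 10), ("keep", False, 10)],
    ceo_fired=False, stock_up=False))  # -> [10, 0.0, 20]
```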

01:33:38 Speaker_02
What I really like about that application is it leverages a type of information that people are already developing, right? Like people are spending a lot of time reasoning about what's gonna change the stock price of Starbucks.

01:33:49 Speaker_02
And they have a lot of different refined ways of doing it, but it uses it to address a question that's like useful sort of as a practical hypothetical. As Alex said, it brings the information forward in time.

01:34:01 Speaker_02
Normally, in a current market context, we can only learn what happens if Starbucks replaces the CEO when they actually replace the CEO. But that's the least useful time to learn it. We actually want to know it earlier.

01:34:12 Speaker_02
Like when they're deciding, should they replace the CEO?

01:34:14 Speaker_03
Yeah, exactly. You want to know it before. Yeah.

01:34:16 Speaker_02
And so being able to harness that same effort that people are putting into understanding what affects the stock price of Starbucks, and, you know, which companies are well run and which aren't, and pushing it towards this question

01:34:28 Speaker_02
can reveal important information at a time when it's more useful, leveraging things people are already good at predicting. Exactly.

01:34:36 Speaker_03
That's such an interesting and useful thing, and it's extremely real and possible right now. We're not just being crazy futuristic, like 10, 15, 20 years from now. That's so great.

01:34:46 Speaker_01
Can I be crazy futuristic, push it a little bit more? Yeah, yeah, we actually want a little of that. Go for it. You're absolutely right. The "should we fire the CEO" market could be implemented right now, and it would be extremely useful.

01:34:58 Speaker_01
And it's the first step towards making more decisions by like DAOs, by a blockchain consensus. Right. I mean, so if you can make a decision about should we fire the CEO? Should we expand into Argentina or into China?

01:35:13 Speaker_01
Should we have a new model this year? Right. You can start asking the market lots of these types of questions. So let's start with "should we fire the CEO," one of the biggest, most important, most salient of these questions.

01:35:28 Speaker_01
Where, as Scott says, it's an information-rich environment: people are already collecting lots of information on exactly this question. And once we've got some experience in this market, we can start applying it to further markets down the line.

01:35:42 Speaker_02
Footnote: okay, I love that application too. And that ties into what we talked about earlier, the importance of maybe running these markets in an internal currency.

01:35:50 Speaker_02
You know, an advantage there is you can use it to put everyone on the same footing at the outset, right?

01:35:55 Speaker_02
Like, you know, the Starbucks CEO question, there are many different sort of like very high value and high ability to trade entities that already are like participating in this style of question.

01:36:05 Speaker_02
Whereas for a DAO, you actually might have tremendous inequality in wealth of the participants, but you can make them wealthy in proportion to their reputation or something in the internal token, which can then be used to have them all participate equitably at the entrance to these decisions.

01:36:20 Speaker_03
I love this. And this is where I'm very proud that we have published a deep body of research across many people, not just our own team, into DAOs, what makes them work, what doesn't work, what's effective, governance mechanisms.

01:36:30 Speaker_03
I'm going to link to that in the show notes. Because also we're arguing that sometimes you can do a lot of these things, not just in the crypto world, but you can apply them to other decentralized communities.

01:36:38 Speaker_03
And I want people to remember that that's a useful use of DAOs, which are just decentralized autonomous organizations. Are there any other pet applications, either current or futuristic, that either of you have?

01:36:48 Speaker_03
I have one, but I'm going to wait till you guys are done.

01:36:51 Speaker_02
I mean, two other very quick hits. You know, we haven't touched directly yet in the podcast on the idea of markets for private data, right?

01:37:00 Speaker_02
For like, you know, another form of information aggregation is, you know, maybe a lot of people have information that will be useful in designing a new pharmaceutical or medical treatment. And they have their own private information of this form.

01:37:14 Speaker_02
And we'd like to be able to elicit it from them in a way that also fairly compensates them for their participation or something of the sort. And we have some mechanisms for this already.

01:37:22 Speaker_02
Like you might have surveys managed by a health center and they pay you sort of a show up fee for participating in the survey or whatever.

01:37:28 Speaker_02
But there's a possibility for much richer markets of that form that leverage sort of like individual data ownership and like permissioning and so forth.

01:37:37 Speaker_03
Yeah, one concrete example, by the way, is the DeSci movement, decentralized science, where people are contributing their information, like medical data, using blockchains to bring more ownership, transparency, and consent, which they don't have today.

01:37:49 Speaker_03
That's just one example. What's the other one you had, Scott?

01:37:52 Speaker_02
The other one, you know, is getting incentivized subjective beliefs.

01:37:58 Speaker_02
We've talked a lot about predictions of things that have an objective truth, but another big frontier for information aggregation is getting really good estimates of things that people believe that are fundamentally subjective.

01:38:12 Speaker_02
If you're trying to do market research for your product, do people want this? You know, one of the advantages of crowdfunding, for example, is that it's a better information elicitation mechanism, right?

01:38:20 Speaker_02
You could go and ask 10,000 people, do you want to buy this? And some of them might say yes, but unless you're actually taking money from them, you don't know whether that's like a truthful representation.

01:38:29 Speaker_04
Yeah.

01:38:30 Speaker_02
And so crowdfunding lets you learn about the total market for your sort of initial version of the product in a way that's incentivized. More broadly, I think like subjective elicitation is like a really important direction to go into.

01:38:42 Speaker_03
Can you quickly maybe give a very short definition in the uniquely crypto blockchain context of a Bayesian truth serum here? Because isn't this where Bayesian truth serums apply?

01:38:51 Speaker_02
Sure. I mean, the Bayesian truth serum is actually an example of those peer prediction mechanisms we described, and there are many different versions of it. But loosely, the idea is: if I ask you your opinion on something, did you like this movie?

01:39:03 Speaker_02
And then I ask you, what's the likelihood that, you know, another person I ask will say that they liked the movie? You might have a reason to lie to me about whether you like the movie or not.

01:39:12 Speaker_02
You might say, oh, I really liked it, because, you know, you produced it. What am I going to do? But if you actually hated it, your estimates of everybody else's beliefs will be sort of tilted in the direction of them mostly disliking it.

01:39:22 Speaker_02
So long as I'm going to reward you proportionally to your accuracy, well, you know that you disliked it, and so you expect everyone else probably did, too, because you're a Bayesian. And so,

01:39:31 Speaker_02
looking at everybody else's responses, I can detect whether you told me a distribution of other people's beliefs that's consistent with what you said your own belief is.
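Here's a stripped-down sketch of that scoring logic for a yes/no question, not Prelec's exact formula: each respondent earns an information score when their own answer turns out to be more common than the crowd predicted, plus a prediction score for forecasting the crowd well. Names and numbers are illustrative.

```python
import math

def bts_scores(answers, predictions):
    """Simplified Bayesian-truth-serum-style scores.

    answers: each respondent's own answer (True = 'liked the movie').
    predictions: each respondent's predicted fraction of 'yes' answers.
    """
    n = len(answers)
    clamp = lambda p: min(max(p, 1e-6), 1 - 1e-6)
    actual = clamp(sum(answers) / n)  # actual 'yes' share
    # Geometric mean of predicted 'yes' shares = the crowd's forecast.
    geo = clamp(math.exp(sum(math.log(clamp(p)) for p in predictions) / n))

    scores = []
    for ans, pred in zip(answers, predictions):
        p = clamp(pred)
        # Information score: is your answer surprisingly common?
        info = math.log(actual / geo) if ans else math.log((1 - actual) / (1 - geo))
        # Prediction score: negative KL divergence, 0 when pred == actual.
        pred_score = -(actual * math.log(actual / p)
                       + (1 - actual) * math.log((1 - actual) / (1 - p)))
        scores.append(info + pred_score)
    return scores

# The honest 'no' voter who correctly foresaw a mostly-'yes' crowd
# scores highest here, even though 'yes' wins the raw vote.
print(bts_scores([True, True, False], [0.9, 0.9, 0.7]))
```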

01:39:40 Speaker_03
Great. One of my quick applications, and kind of an obvious one, but I want to just call it out because I find it very boring when people say the same thing like, oh, media, da, da, da, whatever.

01:39:49 Speaker_03
People often talk a lot about having mechanisms for, quote, finding truth, but sometimes I find that to be very pedantic and moralistic, and equally as grating as the very people they're trying to bring down.

01:40:03 Speaker_03
And so it's a pet peeve of mine when I'm on the Twitter discourse like, oh God, I'm so bored by this. But

01:40:08 Speaker_03
I do find it very interesting that some of the commentary surfaced that prediction markets were basically resolving more accurately and faster than mainstream media, without some of the same filtering by partisan interests.

01:40:18 Speaker_03
I mean, although this might be different with certain communities of DAOs, if you do predictions limited to certain DAOs.

01:40:23 Speaker_02
Yeah, again, it depends who's in your market.

01:40:25 Speaker_03
Yeah, exactly. This goes back to your point about thick and thin.

01:40:28 Speaker_03
But it's also interesting because it's a way to put a little bit more skin in the game. One of the biggest drawbacks in current media is that the people writing don't have skin in the game, which is why I've always been a believer in not having third-party voices: having the experts write their own posts, and then editing them, is more interesting to me.

01:40:43 Speaker_03
So I do think it's very interesting to think about this use case of reinventing news media using prediction markets.

01:40:48 Speaker_03
And Vitalik's post actually had a great headline, which is: think of a prediction market as a betting site for participants and a news site for everyone else. Yep. That'd be my application.

01:40:59 Speaker_01
So I think more generally, it is odd how we do quite a bit of journalism.

01:41:03 Speaker_01
So, for example, it's totally standard practice, right, for it to be against company policy for a financial journalist to invest in the companies which they are recommending. Right.

01:41:17 Speaker_01
And as an economist, I kind of think, wait a second, don't you want the exact opposite? Right.

01:41:23 Speaker_03
You want more skin in the game. Exactly.

01:41:25 Speaker_01
More skin in the game. Right. So, you know, I say that a bet is a tax on bullshit. Right.

01:41:30 Speaker_03
I like that line. That's a great line. I love it.

01:41:33 Speaker_01
So, you know, how about you have to be upfront about it. You have to be honest about it, transparent about it. But maybe journalists should say, this is what I think will happen. And these are the bets which I've made. And you can see my bets on chain.

01:41:45 Speaker_01
Right. Yeah. And let's see what their past track record is. Right. Like, it's kind of amazing that we do not have any track record of opinion editorialists whatsoever.

01:41:57 Speaker_01
Only Tetlock, you know, started to create that, and found that they were terrible, right? But how about we create a series of bets on-chain, and this would, you know, change

01:42:08 Speaker_01
the types of people who become editorialists, who get these jobs in the first place, right? So let's start making sure you bet your beliefs, and then let's promote people whose bets turn out to be accurate.

01:42:21 Speaker_01
And that's going to change journalism entirely if we were to change the metrics by which journalists are evaluated.

01:42:27 Speaker_03
I agree. And Annie Duke talks a lot about this too. It's not just bets in a binary, true-false way, but bets that are weighted in terms of likelihood, probability of being accurate. You don't have to make it binary, like it will be this or that.

01:42:40 Speaker_03
Absolutely. I believe 80% that X will happen. And that is also another way to kind of assess in a more nuanced way. And that gives a lot of room for the nuances that are often true when it comes to guessing the truth.
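One standard way to build exactly that kind of track record from probabilistic bets is a proper scoring rule on stated probabilities; a Brier score is the simplest. A minimal sketch, with a made-up forecast history:

```python
def brier_score(forecasts):
    """Mean squared error of probabilistic predictions.
    forecasts: (stated_probability, outcome) pairs, outcome 1 or 0.
    0.0 is perfect; always saying 50% scores 0.25; lower is better.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical on-chain record of one editorialist's bets:
pundit = [(0.8, 1), (0.9, 1), (0.2, 0), (0.7, 0)]
print(round(brier_score(pundit), 3))  # 0.145 -- beats the 0.25 baseline
```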

01:42:52 Speaker_01
Absolutely. Exactly. There's a big incentive to say, this is never going to happen, this is impossible. Right. But then if you ask them, well, if it's never going to happen, are you willing to bet ten dollars that it won't happen? Exactly.

01:43:05 Speaker_01
They should all be willing to. Of course, I'm willing. They're never willing to make those bets.

01:43:08 Speaker_03
That's right. Even journalists who hate Elon Musk will then start saying, well, actually, I'm going to bet on that guy to build X, because I saw that, you know, shuttle launch.

01:43:17 Speaker_03
And now I'm thinking, OK, maybe I'll increase that from 10 to 20 percent or whatever.

01:43:21 Speaker_01
Yeah, exactly. So betting could reduce the hyperbole. Yeah, that's exactly right.

01:43:25 Speaker_02
Yeah, totally. By the way, this borders on some other really critical information elicitation mechanisms that use a different version of this sort of cross-examining of some people's beliefs against others'.

01:43:35 Speaker_02
If you think about community notes on Twitter, that's an information aggregation mechanism, right? It's like getting a lot of people's opinions and then only deciding that they're correct if you have agreement from people who usually disagree.
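A toy sketch of that idea: only count a note as helpful if raters from both of two usually-disagreeing clusters endorse it at a high rate. The real Community Notes algorithm infers viewpoint clusters via matrix factorization; the explicit cluster labels and thresholds here are assumptions for illustration.

```python
def note_is_helpful(ratings, min_per_cluster=2, threshold=0.7):
    """ratings: (cluster, helpful) pairs with cluster in {"A", "B"}
    and helpful as 1 or 0. Requires agreement from BOTH clusters."""
    for cluster in ("A", "B"):
        votes = [h for c, h in ratings if c == cluster]
        if len(votes) < min_per_cluster:
            return False  # not enough cross-cluster signal yet
        if sum(votes) / len(votes) < threshold:
            return False  # one side doesn't find it helpful
    return True

print(note_is_helpful([("A", 1), ("A", 1), ("B", 1), ("B", 1)]))  # True
print(note_is_helpful([("A", 1), ("A", 1), ("B", 0), ("B", 1)]))  # False
```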

01:43:48 Speaker_03
Yes, exactly, because that's where Wikipedia failed when they had the cabal of expert reviewers. They didn't have that kind of check and balance mechanism. Yeah, totally.

01:43:57 Speaker_01
Community notes is a great one.

01:43:59 Speaker_03
I have one last question for you guys because we don't have enough time to go into policy. You know, in general, like some of these became popular because they're offering contracts that were banned from the market.

01:44:07 Speaker_03
So a big question is whether the offshore crypto markets will follow the rules or not. So how do you sort of create like innovation, obviously, in that environment?

01:44:15 Speaker_03
To me, the core question here is what's the difference between gambling and speculation? Is there a difference? I'm curious if you guys have a thought as a parting note on this.

01:44:23 Speaker_02
I mean, so one very important thing to remember is that depending on the context, like you may be in a different point on a continuum, right?

01:44:32 Speaker_02
Like part of what makes sporting events like exciting and suspenseful is that there's a lot of stochasticity and like, you know, sort of the amount of information that any individual has is reasonably small, even if they put a lot of effort into figuring it out.

01:44:44 Speaker_02
But there might be some amount of like, you know, sort of informed betting in sporting events. And then as you move towards things where there's a lot of information to be had and a lot of like value also to knowing the answer, right?

01:44:56 Speaker_02
A lot of market value to actually figuring it out, right? Like how we allocate goods in markets, right?

01:45:01 Speaker_02
Going back to the very beginning when we were talking about like the role of markets and, you know, determining the value of something and clearing supply and demand, right? Like there, there is value generated through the process of people engaging.

01:45:13 Speaker_02
Now, there's one really important caveat about speculation. We talk about this like a lot in crypto land, right? There is speculation of the form

01:45:22 Speaker_02
I have beliefs, and I'm investing to support a product that I think will exist and that I want to exist, and that I think other people will want.

01:45:31 Speaker_02
And then there's also speculation on speculation, where you're actually not so much betting based on your own beliefs, you're betting on what you think other people will choose to bet on, like we talked earlier about herding.

01:45:41 Speaker_02
You might place bets because you think other people are gonna place bets in a given direction, not because you actually have any information about what's gonna happen, just because you have information about how the market might move.

01:45:51 Speaker_03
That's right. That's speculating on speculation.

01:45:53 Speaker_02
Exactly. That's speculating on speculation.

01:45:54 Speaker_02
So there's this sort of like valuable type of speculation, which is people moving resources around in a way that reflects their beliefs and sort of like can help us make markets work better and achieve better outcomes.

01:46:07 Speaker_02
Like that's sort of in this mid space between the randomness where moving the money around has no impact on outcomes, right? You're just betting on coin flips, like, you know, your money does nothing.

01:46:16 Speaker_02
And this other edge where moving the money around becomes sort of its own project that is independent of outcomes. And so again, like sort of doesn't provide information. Right.

01:46:25 Speaker_02
Like these prediction markets are particularly well architected again, at least in the cases where they're very large and thick and all the things we've talked about that you need to make them work.

01:46:34 Speaker_02
They're particularly well architected to try and be in that mid space where the information provided is valuable and comes out of like real knowledge and activity. in a way that actually sort of means the market does something valuable.

01:46:47 Speaker_03
Yeah. And by the way, on the earlier example, we talk about it a lot; the obvious example where it plays out is the Carlota Perez framework, where a speculative installation phase is followed by a deployment phase.

01:46:57 Speaker_03
It's like a driver of technology cycles. There's also the example of Byrne Hobart, who wrote a piece for me a few years ago on how bubbles are actually a good thing when they have a certain type of quality.

01:47:08 Speaker_03
And he also wrote a new book about it recently for Stripe Press with Tobias Huber, in which they go into greater detail about that. Oh, I should read that.

01:47:15 Speaker_03
It's basically an example of quote, I don't want to put moralistic terms on it necessarily, but useful speculation that kind of leads to other things as an outcome versus speculating for the sake of speculating, which is partly the distinction you're pointing out.

01:47:29 Speaker_01
Well, I think people, you know, in Las Vegas who are at the slot machines, they're gambling, because they have no way of influencing or improving their predictions of what the slot machine is going to show. Right. It's just pure random chance.

01:47:46 Speaker_01
On the other hand, there are many, many areas in which we are trying to predict the future and in which investing can help us improve our predictions.

01:47:54 Speaker_01
And this is why I think prediction markets should be completely legal, should be legalized, because of all the forms of gambling, of all the forms of speculation, this is one of the most useful forms.

01:48:06 Speaker_01
So we want to incentivize the type of speculation or gambling which, as a side product, produces these useful public goods, which is trying to predict the future. This is incredibly important.

01:48:18 Speaker_01
You think about all of the questions that we have, you know, what is happening with climate change? Which of these scientific predictions are accurate? Who is the best candidate for the presidency?

01:48:30 Speaker_01
All of these questions we have, prediction markets can help us answer these questions in a way which is more objective, more accurate and more open to everyone. So I think the case for legalizing these is very, very strong.

01:48:45 Speaker_03
That's amazing. I'm going to give you the last word on that, Alex. You guys, thank you so much for joining this episode. That was so fun.

01:48:51 Speaker_01
Thanks, Sonal. Thanks, Scott. It's been fantastic being here.

01:48:54 Speaker_02
Thanks so much. Really fun conversation and QED.

01:48:57 Speaker_03
Bye, QED. Thank you for listening to web3 with a16z. You can find show notes with links to resources, books, or papers discussed, transcripts, and more at a16zcrypto.com. This episode was produced and edited by Sonal Chokshi. That's me.

01:49:18 Speaker_03
The episode was technically edited by our audio editor, Justin Golden. Credit also to Moonshot Design for the art, and all thanks to support from a16z crypto.

01:49:27 Speaker_03
To follow more of our work and get updates and resources from us and from others, be sure to subscribe to our web3 weekly newsletter. You can find it on our website at a16zcrypto.com. Thank you for listening and for subscribing. Let's go.