Prediction Markets Have an Elections Problem

Jeremiah Johnson

Weeks after it was clear that Donald Trump lost the 2020 election, you could still make pennies on the dollar betting Joe Biden would win. Why doesn’t smart money drive out dumb money in election markets?

Prediction markets are having a moment. The mainstream media is increasingly profiling them — see, for example, Bloomberg’s Matt Levine linking to a market on why Sam Altman was fired from OpenAI — and paying attention to what they have to say. Advocates claim that by providing a marketplace for bets on uncertain events, prediction markets give predictors a financial incentive to be correct. If you want a good estimate of what will happen, prediction markets filter signal from noise and cut past pundits with no skin in the game. Money talks and bullshit walks, and prediction markets force people to put that money where their mouth is.

Some of the largest and most notable prediction markets to date have been around elections. The only problem? Prediction markets simply aren’t very good at political predictions. Markets for major U.S. elections are some of the deepest prediction markets anywhere: billions of dollars bet, millions of daily trades, and huge amounts of press. In theory, the larger the market, the more accurate the predictions. But in the markets with the biggest spotlight, we see a lot of strange stuff. Predictions that don’t line up with common sense. Odds that seem to defy reality. Obviously noncredible market movements. To figure out why, we’ll have to explore the underlying mechanisms that make markets work, and why the typical user of political prediction markets may not behave in the ways we expect.


How do we know prediction markets struggle with politics in the first place? The best evidence comes from the 2020 election. Despite a deep pool of participants, markets for the presidential election showed clear signs of irrationality and biased outcomes.

The election was held on November 3, 2020. By November 7, almost every major media and poll-watching organization had declared Joe Biden the winner in enough states to be elected president. And yet deep into December, prediction market contracts in states like Georgia, Michigan, Arizona, and Pennsylvania listed Joe Biden at only 90 cents (or a 90% chance to win) 1  — long after those states had formally certified him as the winner. In a more normal scenario those markets would have shot to 99% or 100% once every media outlet called the relevant states for Biden, and then closed once those states formally certified the result. That wasn’t the case.
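To get a rough sense of the scale of that mispricing, here is a toy calculation. It assumes — as virtually every credible observer did by late November — that Biden's win in those states was certain, and the six-week holding period is a hypothetical figure for illustration:

```python
# Toy calculation: the return from buying Biden at 90 cents in a state
# he had already been certified to win. The six-week holding period is
# an assumed figure for illustration, not from the article.
price = 0.90    # dollars paid per share
payout = 1.00   # dollars received when the contract resolves "yes"

simple_return = (payout - price) / price        # ~11.1% over the period
weeks_held = 6                                  # assumption: time to resolution
annualized = (1 + simple_return) ** (52 / weeks_held) - 1

print(f"{simple_return:.1%} over {weeks_held} weeks ≈ {annualized:.0%} annualized")
```

A near-riskless return on the order of 150% annualized should not sit unclaimed in an efficient market — which is exactly the puzzle.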

Why did those particular prediction markets refuse to go to 100%? Were they dominated by irrational behavior, or was there some sensible explanation? In the aftermath of Biden’s victory, Donald Trump quickly declared that large-scale election fraud had taken place. He pursued a variety of legal challenges to the official results, and, if polls are accurate, convinced something like a third of the country that the official results were fraudulent. Although there was never any real evidence or rational case for mass-scale election fraud, perhaps the market believed that Trump’s legal challenges had a ~10% chance of succeeding and was pricing in that outcome. Call this the “rational chance of overturning” theory.


Unfortunately, that theory falls flat in the face of other evidence. Consider the “Electoral College Margin of Victory” market for the presidential election on PredictIt, a prominent prediction market. This market was designed to predict the margin of victory in the Electoral College — Biden wins by 30–59 electors, 2 Trump wins by 10–29 electors, etc. A rational theory pricing in a 10% chance that Trump’s legal challenges would succeed would see that 10% distributed among the likely outcomes, such as a narrow Trump victory by a small number of electoral votes. But instead, the majority of the bettors against Biden bet on the astonishing outcome of “Trump wins by 280+ electoral votes.” That, it should be clear, was impossible — it would require results to be overturned not just in states with tight races but also in Democratic bellwethers like California. The Trump campaign did not have active legal challenges in enough states to even consider this a possibility. And yet the bulk of prediction market betting on Trump was betting on a 280+ margin, with prices as high as 9 cents late in November.

Other signs of irrationality also existed in the 2020 election prediction markets. Sports betting sites are essentially a more legal form of real-money prediction markets, and sometimes venture into political betting. The sportsbook MyBookie listed Trump as a slight favorite to win the presidency overall, giving him implied odds of 53% to win re-election. But on that same site, Biden was a significant favorite in every swing state — 69% to win Wisconsin, 66% to win Pennsylvania, 60% to win Arizona. In a rational market, the national odds should reflect the aggregation of state-level odds, or else there would be free arbitrage available. But that arbitrage persisted — state odds were continuously out of tune with national odds.
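To make the inconsistency concrete, here is a toy aggregation of those three quoted state prices. It leans on two strong simplifying assumptions — that state outcomes are independent, and a hypothetical electoral map in which carrying any two of these three swing states wins the presidency — neither of which holds exactly, but the gap against the national price is far too large for them to explain:

```python
from itertools import product

# MyBookie's quoted state-level probabilities for Biden (from the text)
state_p = {"WI": 0.69, "PA": 0.66, "AZ": 0.60}

def national_prob(probs, needed=2):
    """P(Biden wins) under two toy assumptions: states are independent,
    and winning any `needed` of them wins the presidency."""
    total = 0.0
    for outcome in product([True, False], repeat=len(probs)):
        p = 1.0
        for won, pr in zip(outcome, probs.values()):
            p *= pr if won else 1.0 - pr
        if sum(outcome) >= needed:
            total += p
    return total

# ~72% for Biden -- far from the 47% implied by the national market
print(f"{national_prob(state_p):.1%}")
```

Whatever the exact electoral math, state prices that heavily favor Biden everywhere that matters cannot rationally coexist with a national price favoring Trump.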


While the 2020 election is one of the clearest examples of irrational behavior in political prediction markets, it’s far from the only example. Academic interest in prediction markets has increased since 2016, and that research has repeatedly found evidence of inefficiency and irrationality in political prediction markets. One study found that betting markets for the 2016 EU referendum in the U.K. were inefficient in absorbing new information. 3 Other researchers 4 have found that both the 2016 and 2020 U.S. presidential elections had large and persistent arbitrage opportunities in their prediction markets — a classic sign of inefficient markets. Prediction markets as a whole have a tendency to overstate the odds of low-probability events, a tendency known as “small-odds bias.” But analysis shows that tendency is even more extreme in political prediction markets. 5

Beyond the academic research on elections from 2016 and 2020, there’s also evidence from more recent elections that political prediction markets struggle to forecast events accurately when compared to expert forecasters. A proponent of prediction markets might be inclined to dismiss research for 2016, as prediction markets were still in their relative infancy. They may also equivocate about the strange 2020 markets, since Donald Trump’s claims of mass election fraud were a confounding factor and difficult to account for. But even in the 2022 U.S. midterm elections, with no Trump and fully modern prediction markets, prediction markets fared worse than expert forecasters. 

Figure 1. Source: First Sigma

In Figure 1, a variety of election prediction sites were graded on their accuracy based on a log-odds scoring method, where a higher score means a more accurate forecast. 6 The election forecasts from FiveThirtyEight’s famed election model were more accurate than prediction markets from Manifold Markets, Polymarket, Election Betting Odds, and PredictIt. The only site to surpass 7 FiveThirtyEight was Metaculus, which is not a prediction market — it aggregates expert predictions but without the buying or selling of shares that are the signature feature of markets. The two best predictors of the 2022 midterm results were the two sites least reliant on betting and market mechanisms and most reliant on specialist expertise. The loss to FiveThirtyEight is particularly embarrassing for prediction markets because FiveThirtyEight’s predictions were public and widely shared for months before the election. Prediction markets could have simply copied those odds, since they were proven to be well-calibrated. 8 In deviating from the public predictions from FiveThirtyEight, prediction markets added negative value.
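For intuition on what log-based scoring rewards and punishes, here is a minimal sketch of a log scoring rule in the spirit of Figure 1 (the exact rule used there may differ in its details). The two forecasters and their numbers are invented for illustration:

```python
import math

def log_score(forecasts):
    """Mean log score over (probability_assigned, event_happened) pairs.
    A forecast earns ln(p) when the event happens and ln(1 - p) when it
    doesn't. Higher (closer to zero) is better; a single confident miss
    is punished severely."""
    return sum(math.log(p if happened else 1.0 - p)
               for p, happened in forecasts) / len(forecasts)

# Two hypothetical forecasters on the same three races:
calibrated    = [(0.90, True), (0.70, True), (0.60, False)]
overconfident = [(0.99, True), (0.95, True), (0.99, False)]

print(log_score(calibrated))     # ≈ -0.46
print(log_score(overconfident))  # ≈ -1.56  (the one confident miss dominates)
```

This is why the scoring method favors well-calibrated forecasters over merely confident ones: piling probability on an outcome that fails costs far more than modest confidence ever earns.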


By now, we’ve seen quite a lot of evidence that prediction markets struggle with major political predictions. But if prediction markets are inefficient and irrational when predicting political outcomes, why?

First, we need to talk about the technical factors that can cause prediction markets to malfunction. Even the most successful prediction markets today often suffer from these factors. They may have a low volume of betting or a limited pool of bettors. They may have high fees, or a cap on how much can be bet. For long-run predictions like political outcomes, the time value of money can distort odds. 9 Questions about the legality of some prediction markets may scare away bettors; some markets allay that fear by using only fake money, but that introduces a new concern since fake money may not incentivize bettors as strongly as real money. And the markets that have the fewest troublesome technical factors — crypto markets and overseas sports betting sites — are the most logistically and legally challenging markets for everyday U.S.-based bettors to reach.

There’s some evidence that these technical factors are slightly more impactful on political markets than other markets. We know that small-odds bias is larger in political markets. There’s good reason to think that legal issues that prevent higher volumes of trading are more severe in political markets. Even as sports betting has seen a wave of legalization in the U.S., the Commodity Futures Trading Commission, which regulates derivative markets (under which event-based contract markets fall), denied prediction market Kalshi permission to offer real-money bets on political races. 10 And some political prediction markets have higher fees to participate than more general betting sites.

But these technical factors alone can’t fully explain all the odd predictions and idiosyncratic behavior we see in political prediction markets, or why they’re worse than other types of prediction markets. Instead we’ll need to turn to psychology to understand why people place these bets in the first place.

Politics is one of the strongest sources of identity in modern society. Hundreds of millions of Americans strongly identify with political parties or labels like conservative, liberal, socialist, libertarian, Republican, or Democrat. They form groups and communities with others who keep the same beliefs. They discriminate against opposing party members in favor of their own. 11 They have a common language with shared phrases, references, and inside jokes. They have beloved in-group heroes and hated out-group villains. It’s not a stretch to compare political loyalties to a religion — political identities can also have holy texts, promote saintlike figures, construct complex moral and ethical codes to live by, and enforce orthodoxy with the punishment of expulsion from the group. For millions of people, a political stance is not an abstract idea they have rationally considered and think is correct. It’s who they are as a person. And when you mix this sort of potent identity creation with prediction markets, outcomes get weird.

There’s an interesting piece of research from Temple University 12 that tested sports fans on their ability to predict the outcome of games, while also noting which teams the participants were fans of and how strongly they considered themselves fans. They found that participants had lower prediction accuracy when predicting their favorite team’s games, because they strongly overestimated their team’s chances of winning. This effect was larger the more strongly the participant identified as a fan of a particular team. 

In short, the more strongly someone identified as a fan of a team, the worse their predictions were. Fans in this context often either misevaluate their teams or just fail to evaluate their teams at all. Betting can be seen as a form of loyalty, as an expression of allegiance to the team rather than a rational attempt to maximize expected value. Other researchers have found that fans are often reluctant to bet against their teams even if the bet is free — their identity as a fan overwhelms any rational profit-seeking motive. 13 If this effect can happen for teams in sports, it certainly can happen for the “teams” in politics, where group-identity forces are even stronger. 

How can we tell that political loyalists on prediction markets are exhibiting this “betting as loyalty” behavior? This may sound like a flippant suggestion, but a single glance at the comments section for a major PredictIt political market should tell you all you need to know. You’re unlikely to find any sort of reasoned analysis, but you’ll find plenty of bitter partisan fights, memes, and culture-war yelling around which party and which candidates are currently ruining America. 

To state the problem bluntly, there is an enormous amount of dumb money that surges into political prediction markets for major elections. The 2020 presidential election alone saw more than a billion dollars wagered at European sportsbooks, and major state and national political markets on PredictIt reached millions of bets per day leading up to election day. We know that this tidal wave of money is dumb money because we can see intense tribal behavior on these sites and research tells us that the more one identifies with a “team,” the worse one’s predictions for that team are. People will bet for candidates and parties not because they have an evidence-based analysis supporting their bet, but as an expression of identity.

Once you look at these bets as expressions of identity rather than rational bets, many of the irrational and puzzling behaviors we described earlier make more sense. This reasoning explains why the strongest overperformance comes from candidates with highly online, energetic fan bases such as Donald Trump, Andrew Yang, and Vivek Ramaswamy. It explains why the 2020 Trump betting clustered at a margin of victory of 280+ electoral votes rather than a realistic scenario like Trump winning by 10–29 electoral votes. The bettors involved weren’t doing any sort of detailed analysis or investigating the facts. They were expressing how much they believed in Trump, how much they supported him, and how loyal they were to him. Betting on an impossible outcome is how you show the most loyalty!

This also explains why state-level prediction markets disconnected from national-level prediction markets. Most state-level markets received an order of magnitude less attention and betting volume than national markets. Identity-based betting congregated in those highly visible national markets and tilted them heavily towards Donald Trump. Meanwhile, far fewer bettors filtered down to the state markets, allowing savvier bettors to dominate. And these state-level predictions, when aggregated, showed Joe Biden as the favorite.


Whenever we observe instances of markets failing, it’s useful to remember why markets usually work in the first place. We know that prediction markets offer a monetary incentive to make correct predictions. But too many people jump from that starting point directly to the conclusion that prediction markets must therefore arrive at correct and efficient outcomes.

Real-world markets are more complicated. They rely on all sorts of collective norms, enforcement mechanisms, and other characteristics in order to function. They can be thrown out of whack by frictions like fees or taxes, incomplete information, barriers to entry, psychological or cultural factors, and more. To simplify a complex topic potentially more than we should: Markets only work when smart money can drive out dumb money. Dumb money will always exist in some form, but the financial incentives for sharp traders to bet against less savvy traders are usually enough to allow markets to function well and arrive at efficient or near-efficient outcomes. 

But smart money driving out dumb money isn’t an automatic process. There are a variety of reasons why it might not happen, including technical barriers to market entry, regulation, or cultural factors. Even in extremely deep, liquid, and mainstream financial markets the process isn’t automatic. The rise of meme stocks such as GameStop (GME) and Bed Bath & Beyond (BBBY) shows that if dumb money is large enough and enthusiastic enough, it’s difficult for smart money to overwhelm it and get a stock back to its “correct” price. BBBY stock maintained a market cap of around $60 million for months after the company filed for bankruptcy. Dumb-money demand for BBBY stock was so strong that BBBY actually got special permission to sell more stock — of an already bankrupt company! — and sold millions of dollars of it to willing buyers. The market cap remained at $62 million until the literal day the stock was delisted, as its bankruptcy was finalized with zero value going to stockholders.

This process of smart money driving out dumb money is even harder in prediction markets. There’s strong evidence that prediction markets have a huge number of unsophisticated, identity-based bettors. And there’s very little evidence of the opposite — financial juggernauts throwing their weight around. Where are the hedge funds making an easy buck? Billionaire financiers routinely make massive bets on stocks, bonds, and currencies — where are the billionaires taking huge positions on prediction markets? That kind of institutional smart money doesn’t exist in political prediction markets the way it does in traditional financial markets. With so much dumb money and so little smart money, is it any surprise that we get outcomes dictated by trolls instead of professional traders?

Prediction markets as a tool are still in their relative infancy. They show great promise, but the results from political prediction markets should give us pause. These markets are demonstrably inefficient, biased in predictable ways based on political identities, and can’t outperform expert analysis even when they have public access to that expert analysis. This isn’t a reason to abandon prediction markets. Like any tool, prediction markets can be used, abused, or misused, and one failure doesn’t doom the concept. 

But it does show that we need to take seriously the structures that make markets work when designing prediction markets. We can see the same dynamics from previous political markets playing out in the markets for the 2024 presidential election — candidates like Robert Kennedy Jr., Vivek Ramaswamy, and Gavin Newsom (who hasn’t even declared he’s running) all have odds that seem unrealistically high. Prediction markets that deal with other highly charged subjects — for instance, a market on the outcome of a war, or a market on urban crime statistics — will likely be subject to some of the same factors that distort political prediction markets. Unless we make sure that market structures are providing the right incentives in the right way, we shouldn’t be surprised if prediction markets continue to struggle.

  1. A popular practice among prediction markets is to express outcomes as $1 — buy an outcome at 85 cents, and if the outcome comes true then get paid $1 with a profit of 15 cents. Therefore percentages and “cents” will be used interchangeably in this article, where one cent equals one percentage point.
  2. Biden by 60–99 electors was the actual final outcome.
  3. Tom Auld and Oliver Linton, “The Behaviour of Betting and Currency Markets on the Night of the EU Referendum,” International Journal of Forecasting 35, no. 1 (January–March 2019): 371–389. The efficient market hypothesis (EMH) is a financial theory stating that asset prices reflect all available information. This paper found violations of both the strong and semi-strong versions of the EMH.
  4. Andrew Stershic and Kritee Gujral, “Arbitrage in Political Betting Markets,” The Journal of Prediction Markets 14, no. 1 (September 2020).
  5. Lionel Page and Robert T. Clemen, “Do Prediction Markets Produce Well-Calibrated Probability Forecasts?,” The Economic Journal 123, no. 568 (May 2013): 491–513.
  6. Log-odds scoring rewards not just correct predictions but high-confidence predictions — but it also penalizes highly confident incorrect predictions.
  7. FiveThirtyEight (now 538) typically releases three election models with varying degrees of complexity — Deluxe, Classic, and Light. The score for FiveThirtyEight shown in this graphic is from their Deluxe model. If the Classic model had been chosen instead, FiveThirtyEight would have outperformed even Metaculus. Light was nearly identical in score to Deluxe.
  8. How Good Are FiveThirtyEight Forecasts?
  9. For example — if a prediction market is mispriced by 5%, but the market won’t resolve for more than a year, bettors may prefer to simply invest their dollars elsewhere (bonds, stocks, etc.) where they can make a greater return, leaving the prediction market at an incorrect price.
  10. Kalshi is now suing the CFTC to overturn this decision.
  11. Sean J. Westwood et al., “The Tie That Divides: Cross-National Evidence of the Primacy of Partyism,” European Journal of Political Research 57, no. 2 (May 2018): 333–354.
  12. Sangwon Na, Yiran Su, and Thilo Kunkel, “Do Not Bet on Your Favourite Football Team: The Influence of Fan Identity-Based Biases and Sport Context Knowledge on Game Prediction Accuracy,” European Sport Management Quarterly 19, no. 4 (October 2018): 1–23.
  13. Carey K. Morewedge, S. Tang, and R.P. Larrick, “Betting Your Favorite to Win: Costly Reluctance to Hedge Desired Outcomes,” Management Science 64, no. 3 (2018): 997–1014.

Jeremiah Johnson is the founder of the Center for New Liberalism and host of The New Liberal Podcast. He writes at Infinite Scroll.

Published February 2024

