I participated, as a superforecaster, in the Forecasting Research Institute (FRI) Forecasting the Economic Effects of AI survey. They’ve published their results in this 224-page paper.
My prior experience in the Existential Risk Persuasion Tournament led me to expect that the average participant would predict less AI impact than I did, but I was still shocked by the extent of the disagreement.
AI Progress by 2030
The survey classifies AI progress into three scenarios, defined by AI abilities at the end of 2030:
- In the “slow” scenario, AI is a capable assisting technology for humans: writing literature reviews at the level of a capable PhD student, handling half of all freelance software-engineering jobs that would take an experienced human a day to complete, topping up your online grocery cart, and physically being able to unload dishwashers in some homes.
- In the “moderate” scenario, AI is an effective collaborator across domains: autonomous lab systems can make rapid advances in solar-cell technology; almost all freelance software-engineering jobs requiring 5 days of effort from an experienced human are automatable; robots can do dishes as quickly as humans; robo-taxis can drive anywhere that humans can.
- In the “rapid” scenario, AI systems surpass humans in most cognitive and physical tasks. Autonomous researchers can collapse years-long research timelines into months or even days. AI systems can surpass all freelance software engineers, customer service agents, paralegals, and clerical workers. Models can write 2025-Pulitzer-caliber books—and negotiate the resulting book contract. Robots can assist in an arbitrary home or factory anywhere in the world.
I don’t have a record of my answers to this survey, but I do have the answers that I gave to equivalent questions on another FRI survey (their LEAP panel) that they ran in early February 2026, so I’ll use those in this post.
I predicted a 3% chance of slow progress, a 48% chance of moderate progress, and a 49% chance of rapid progress.
Superforecasters as a group gave a 45% chance of slow progress, and other groups gave a 38% to 41% chance.
Figure 3 shows that I was the most extreme superforecaster in giving a high probability of rapid progress. One AI expert, and a few of the general public, were more confident than me in predicting rapid progress.
Impact on GDP Growth
The most interesting forecasts were of median annualized US GDP growth rate from the start of 2030 to the start of 2050.
I predicted 32%/year growth (10th percentile: 11%, 90th percentile: 85%). (Caveat: that LEAP forecast was for 2030 to 2050, but the paper is reporting forecasts for 2045 to 2050. My forecast for 2045-2050 ought to have been higher than my 2030-2050 forecast, since I consider increasing growth rates to be more plausible than decreasing growth rates. See Robin Hanson’s Long-Term Growth As A Sequence of Exponential Modes for one model that I’m using.)
Superforecasters and economists as groups each predicted a median of 2.5% growth and a 90th percentile of 5% growth (for their unconditional forecasts).
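To see how large this disagreement is in cumulative terms, here’s a quick compounding calculation (my own arithmetic, using the two medians above; nothing here comes from the paper itself):

```python
# Compounding a constant annual growth rate g over n years multiplies
# GDP by (1 + g) ** n. The two rates below are the medians discussed
# above: my 32%/year forecast vs. the 2.5%/year group median.

def cumulative_multiple(annual_rate: float, years: int) -> float:
    """GDP multiple after compounding annual_rate for the given years."""
    return (1 + annual_rate) ** years

years = 20  # start of 2030 to start of 2050

my_median = cumulative_multiple(0.32, years)          # 32%/year
consensus_median = cumulative_multiple(0.025, years)  # 2.5%/year

print(f"32%/year for {years} years:  ~{my_median:.0f}x GDP")
print(f"2.5%/year for {years} years: ~{consensus_median:.2f}x GDP")
```

The gap between roughly 1.6x and roughly 260x cumulative growth is a better measure of how far apart the worldviews are than the annualized rates alone suggest.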
What caused disagreement?
From FRI’s paper:
Debate on the future economic impacts of AI can largely be reduced to two questions: 1. Will AI capabilities progress meaningfully, such that AI systems are capable of completing a large quantity of economically meaningful work? 2. If this progress in capabilities occurs, what will happen to important economic indicators?
Lastly, we now consider how this analysis relates to the question of whether forecasters disagree on capabilities progress or outcomes conditional on capabilities progress. 1. If forecasters disagree noticeably on capabilities progress, we expect between-scenario, between-forecaster variance to be large. 2. If forecasters disagree noticeably on outcomes conditional on capabilities progress, we expect within-scenario, between-forecaster variance to be large. For GDP growth, the first component (0.3%) is small in absolute terms and relative to the second component (16.1%). This suggests that disagreement about outcomes conditional on various levels of progress is a more important driver of total variance in 2030 GDP growth forecasts than disagreement on capabilities progress per se.
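The decomposition FRI describes is the law of total variance. A minimal sketch, with invented forecast numbers (the scenario labels follow the survey, but the data are purely illustrative):

```python
from statistics import mean, pvariance

# Hypothetical GDP-growth forecasts (%), grouped by the scenario each
# forecaster considers most likely. These numbers are made up for
# illustration; only the decomposition method follows the paper.
forecasts = {
    "slow":     [1.5, 2.0, 2.5],
    "moderate": [2.0, 3.0, 5.0],
    "rapid":    [3.0, 8.0, 20.0],
}

all_values = [x for xs in forecasts.values() for x in xs]
n = len(all_values)
grand_mean = mean(all_values)

# Between-scenario component: spread of the scenario means around the
# grand mean, weighted by the number of forecasters in each scenario.
between = sum(len(xs) * (mean(xs) - grand_mean) ** 2
              for xs in forecasts.values()) / n

# Within-scenario component: average spread of forecasters around
# their own scenario's mean.
within = sum(len(xs) * pvariance(xs) for xs in forecasts.values()) / n

# Law of total variance: the two components sum to the total variance.
assert abs(between + within - pvariance(all_values)) < 1e-9

print(f"between-scenario variance: {between:.2f}")
print(f"within-scenario variance:  {within:.2f}")
```

Even in this toy data, the within-scenario component dominates, mirroring the paper’s qualitative finding: most of the disagreement lives inside each scenario, not between them.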
I find it plausible that most of my disagreement with the median participant about 2030 results from disagreement about rates at which AI diffuses throughout the economy. Early 2030 is about my median forecast for when macroeconomic effects of AI start to become obvious.
Respondents were asked to select which scenario, in sum, best represented their views, and were advised that progress might be uneven across domains. Therefore, two respondents who both selected the rapid scenario may have held meaningfully different assumptions about the specific capability profile underlying that label.
I suspect that the differences in assumptions about early 2030 were small enough to not explain much disagreement. But it seems obvious that the differences in assumptions grew pretty big when applied to more distant years.
For me, the differences between moderate progress and rapid progress up to 2030 made medium sized differences in what year I expected AI and robots to replace most human labor, but in either case I expected that replacement to happen in the 2030s. A delay of a couple of years didn’t much change my prediction that 2050 would be unrecognizably different from 2029. So my interpretation of the scenarios was likely a fairly poor match for what FRI aimed to produce.
Another weakness in the paper’s framing is that it assumes there’s a clear distinction between assumptions about AI capabilities and assumptions about diffusion rates. That distinction may be appropriate for forecasts of the next few years, but for 10+ years out it feels wrong. The usual models of diffusion seem drastically less relevant if AI is capable enough to create new companies that are mostly run by AIs.
My main puzzle is why the consensus for 2030-2050 is far from what I consider reasonable.
Hypotheses
People seem eager to predict that AI progress will hit a wall. Maybe most participants came up with a model in which, even when AI makes rapid progress up to the start of 2030, it halts just in time to prevent them from needing to make unusual predictions further out.
Here’s a rationale from an economist that partially supports this hypothesis (but I’m confused by what they think happens in the 2030s):
The specific timeline of capabilities and especially adoption matters significantly. AI’s impact on economic growth will be most significant during the adoption phase. At some point industries/sectors will be effectively saturated with the technologies, and the impact on growth rate will fall as opportunities for low and mid-hanging gains disappear. If the high-growth phase happens 2040-2045 and reaches general saturation at the end of that period, growth 2045-2050 would likely be much lower than if the high-growth adoption phase happens 2045-2050. So on and so forth.
I find it implausible that growth will return to 20th century normal if AI saturates, given that I expect AI to mostly eliminate labor supply constraints on growth. Population growth causes GDP growth, and human-level AI enables the equivalent of rapid population growth. I don’t see signs that other participants imagined this model.
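My model here can be sketched as a toy calculation (the growth rates below are my own illustrative assumptions, not estimates from the survey): if effective labor grows at the rate robots can be manufactured rather than at human population growth rates, the labor input to production compounds dramatically faster.

```python
# Toy illustration: in a simple model where output is proportional to
# effective labor, GDP growth tracks labor-force growth. Human
# populations grow around 1%/year; a robot fleet that doubles annually
# grows 100%/year. Both rates are illustrative assumptions.

def growth_path(initial_labor: float, growth_rate: float, years: int):
    """Effective labor in each year under constant exponential growth."""
    return [initial_labor * (1 + growth_rate) ** t for t in range(years + 1)]

human_labor = growth_path(1.0, 0.01, 10)  # ~1%/year population growth
robot_labor = growth_path(1.0, 1.00, 10)  # fleet doubling annually

print(f"Human-labor multiple after 10 years: {human_labor[-1]:.2f}x")
print(f"Robot-labor multiple after 10 years: {robot_labor[-1]:.0f}x")
```

The point is not the specific numbers but the qualitative shift: once labor supply stops being tied to human demographics, the historical link between population growth and GDP growth argues for much higher growth, not a return to 20th century rates.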
Economists might be modeling a future where essentially all human desires are satisfied, so there’s little demand for additional production. E.g., any desire by Elon Musk to colonize other solar systems has little impact. I can see how regulations might enforce such a future, but I don’t see why people would model that as the default outcome.
Here’s a rationale from an economist:
By 2050, my tails become “weird”. … On the other hand, best case scenario is material utopia for all, which I have entered as 20 per cent growth rate, recognizing that these numbers become somewhat meaningless at this point.
This person mostly recognizes that what I’m forecasting is possible, yet balks at putting a useful number on it.
Maybe 20% makes sense if the world is dominated by a back-to-nature movement which outlaws many things, but that doesn’t fit my idea of material utopia. Or maybe most growth moves off-planet, and they don’t want to count off-planet growth as part of GDP?
But I see a near-default scenario in which growth is limited by something like the rate at which robots can be built. Manufacturing output can generally grow significantly faster than 20% per year. (I think that economist is almost right in one respect: people will stop caring about GDP numbers by 2050. But those numbers will still say something important about how 2050 differs from 2025.)
Selection Bias?
Who were the AI experts that took the survey? It sounds like many of them came from leading AI companies. Yet:
This finding stands in marked contrast to warnings, raised by some prominent voices in the AI industry, about rapid economic transformation. Our sample partially captured this view, especially in the AI expert group, which forecast a GDP growth rate of 3.7% in 2030 and 5.3% in 2050 under the rapid scenario. Their 90th percentile forecast is 6.5% in 2030 meaning that they assign a 10% probability to growth equal to or larger than this.
My stereotype of an AI expert would forecast 5x or 10x higher GDP growth for 2050.
Maybe only the AI company employees who saw the least AI impact were willing to take time away from their work to participate in the survey? The survey appears to have done a good job of figuring out which AI companies to invite employees from, and probably a decent job of inviting most of the appropriate employees at those companies. But the low response rate leads me to doubt that they got a random sample of the invitees. They say they generated a list of 2209 AI industry professionals (before deduplication) and only got 30 of those to participate, a response rate of roughly 1.4%.
Concluding Thoughts
It looks likely that many of the forecasts in this survey will be too cautious by a factor of 10 or so.
There are some disagreements about how quickly AI will diffuse throughout the economy. It feels to me like those have to be at least partly based on disagreements about capabilities. I expect AI to alter the diffusion rate, partly by enabling rapid creation of new companies that render human-dominated companies obsolete. When most of a company’s employees are AIs, the usual hiring constraints become minor. I suspect that most participants aren’t trying to model how AI of the rapid progress scenario will affect diffusion.
There also seems to be a problem of anchoring too heavily on the most salient historical base rates.
I’m still left with some feelings of confusion about the extent of the disagreements.
I’m fairly confident that other participants didn’t see important considerations that I’ve overlooked, so I’m reluctant to defer to their judgment. We’re all extrapolating trends that look important, and disagreeing with how to resolve conflicts between trends.