Robin Hanson has been suggesting recently that we’ve been experiencing an AI boom that’s not too different from prior booms.
At the recent Foresight Vision Weekend, he predicted [not exactly – see the comments] a 20% decline in the number of DeepMind employees over the next year (Foresight asked all speakers to make a 1-year prediction).
I want to partly agree and partly disagree.
I expect many companies are cluelessly looking into using AI in their current business, without having a strategy for doing much more than what simple statistics can do. I’m guessing that that mini-bubble has peaked.
I expect that hype directed at laymen has peaked, and will be lower for at least a year. That indicates a bubble of some sort, but not necessarily anything of much relevance to AI progress. It could reflect increased secrecy among the leading AI developers. I suspect that some of the recent publicity reflected a desire among both employers and employees to show off their competence, in order to attract each other.
I suspect that the top companies have established their reputations well enough that they can cut back a bit on this publicity, and focus more on generating real-world value, where secrecy has somewhat clearer benefits than it does for Go programs. OpenAI's attitude toward disclosing GPT-2 weakly hints at such a trend.
I’ve written before that the shift in AI research from academia to industry is worth some attention. I expect that industry feedback mechanisms are faster and reflect somewhat more wisdom than academic feedback mechanisms. So I’m guessing that any future AI boom/bust cycles will be shorter than prior cycles have been, and activity levels will remain a bit closer to the longer-term trend.
VC funding often has boom/bust cycles. I saw a brief indication of a bubble there (Rocket AI) three years ago. But as far as I can tell, VC AI funding kept growing for two years, then stabilized, with very little of the hype and excitement that I’d expect from a bubble. I’m puzzled about where VC AI funding is heading.
Then there's Microsoft's $1 billion investment in OpenAI. That's well outside of patterns that I'm familiar with. Something is happening here that leaves me confused.
AI conference attendance shows patterns that look fairly consistent with boom and bust cycles. I’m guessing that many attendees will figure out that they don’t have the aptitude to do more than routine AI-related work, and even if they continue in AI-related careers, they won’t get much out of regular conference attendance.
A much more important measure is the trend in compute used by AI researchers. Over the past 7 years, that compute grew at a rate that was obviously unsustainable. In some sense, that almost guarantees slower progress in AI over the next few years. But it doesn't tell us whether the next few years will see moderate growth in compute used, or a decline. I'm willing to bet that AI researchers will spend more on compute in 2024 than in 2019.
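To make "obviously unsustainable" a bit more concrete, here's a rough back-of-the-envelope sketch. It assumes the roughly 3.4-month doubling time that OpenAI's "AI and Compute" analysis reported for the largest training runs, plus a hypothetical $10 million flagship training run in 2019; neither number comes from anything specific in this post, and spending need not track compute exactly.

```python
# Back-of-the-envelope sketch (assumed numbers, not data from this post):
# OpenAI's "AI and Compute" analysis estimated that compute used in the
# largest training runs doubled roughly every 3.4 months. If spending on
# a flagship run scaled with compute, the cost compounds like this:

DOUBLING_MONTHS = 3.4                       # assumed doubling time
annual_factor = 2 ** (12 / DOUBLING_MONTHS)
print(f"implied growth per year: ~{annual_factor:.1f}x")   # ~11.6x

cost = 10e6                                 # hypothetical $10M run in 2019
for year in range(2019, 2025):
    print(f"{year}: ~${cost:,.0f}")
    cost *= annual_factor
# By the mid-2020s the implied cost of a single run would exceed the entire
# annual R&D budget of any tech company, which is why the trend can't
# continue at that pace, whatever happens to total spending.
```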
I invested in NVIDIA in 2017-18 for AI-related reasons, until its price/earnings ratio got high enough to scare me away. It gets 24% of its revenues from data centers, and a nontrivial fraction of that seems to be AI-related. NVIDIA experienced a moderate bubble that ended in 2018, followed by a slight decline in revenues. Oddly, that boom and decline were driven by both gaming and data center revenues, and I don't see what would synchronize market cycles between those two markets.
What about DeepMind? It looks like one of the most competent AI companies, and its money comes from a relatively competent parent. I'd be surprised if the leading companies in an industry experienced anywhere near as dramatic a bust as those with below-average competence do. So I'll predict slowing growth, but not decline, for DeepMind.
The robocar industry is an important example of AI progress that doesn’t look much like a bubble.
This is not really a central example of AI, but clearly depends on having something AI-like. The software is much more general purpose than AlphaGo, or anything from prior AI booms.
Where are we in the robocar boom? Close to a takeoff. Waymo has already put a few cars on the road with no driver. Some of Tesla's customers are already acting as if Tesla's software were safe enough for driverless operation. In an ideal world, the excessive(?) confidence of those drivers would not be the right path to driverless cars, but if other approaches are slow, consumer demand for Tesla's software will drive the robocar transition.
I expect robocars to produce sustained increases in AI-related revenues over the next decade, but maybe that’s not relevant to further development of AI, except by generating a modest increase in investor confidence.
Some specific companies in this area might end up looking like bubbles, but I can’t identify them with enough confidence to sell them short. Uber, Lyft, and Tesla might all be bubbles, but when I try to guess whether that’s the case, I pay approximately zero attention to AI issues, and a good deal of attention to mundane issues such as risk of robocar lawsuits, or competitors adopting a better business strategy.
I wish I saw a good way to bet directly that the robocar industry will take off dramatically within a few years, but the good bets that I see are only weakly related, mostly along the lines I mentioned in Peak Fossil Fuel. I’m considering a few more bets that are a bit more directly about robocars, such as shorting auto insurance stocks, but the time doesn’t yet look quite right for that.
Finally, it would be valuable to know whether there’s a bubble in AI safety funding. This area has poor feedback mechanisms compared to revenue-driven software, so I find it easier to imagine a longer-lasting decline in funding here.
MIRI seems somewhat at risk of reduced funding over the next few years. I see some risk from its dependence on cryptocurrency and startup sources of donor wealth. And I don't see the kinds of discussions of AI risk that would energize more MIRI-oriented donors than we've seen in the past few years.
I’m less clear on FHI’s funding. I’m guessing there’s more institutional inertia among FHI’s funders, but I can easily imagine that they’re sufficiently influenced by intellectual and/or social fads that FHI will have some trouble several years from now.
And then there are the safety researchers at places like DeepMind and OpenAI. I'm puzzled as to how much of that is PR-driven, and how much is driven by genuine concerns about the risks, so I'm not making any prediction here.
Note that there are more AI safety institutions, most of which I know less about. I’ll guess that their funding falls within the same patterns as the ones I did mention.
In sum, I expect some AI-related growth to continue over the next 5 years or so, with a decent chance that enough AI-related areas will experience a bust to complicate the picture. I’ll say a 50% chance of an ambiguous answer to the question of whether AI overall experiences a decline, a 40% chance of a clear no, and a 10% chance of a clear yes.
To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Rather, it means that this question seemed an unusual combination of verifiable within a year and relevant to the chances on the other topics I talked about.