Some people wonder whether AI companies can make enough money selling chatbot subscriptions to justify their planned $100+ billion spending on datacenters.
Current trends suggest that there’s enough competition between AI companies to drive chatbot prices down to somewhere near the cost of compute.
I expect they won’t rely on these subscriptions as a primary revenue source, and that they’re at least somewhat aware of other business models they may shift to in a few years. This post will outline one possible model.
The Plan
I envision AI companies becoming something more like startup incubators. Instead of releasing their most powerful models to the public, I expect them to use those models as employees of the startups that they create.
We can see a crude prototype of a startup run entirely by AIs by looking at the AI village. Possibly the AIs will hire an occasional human for tasks that require more dexterity than the available robots, or tasks that are regulated to require human involvement. But mostly I mean startups where all the important roles normally performed by humans are performed by AIs.
I’m imagining that sometime in the 2028-2033 time frame, companies run mostly or entirely by AIs will begin to outcompete human-run companies in many industries. By then, AIs will generally be faster, cheaper, more reliable, and able to respond to more information than humans.
Management quality is likely to remain the most important determinant of success. That means startups would pay a lot to have the latest and greatest AIs as their managers.
Let’s compare two different approaches for charging for these AIs:
- The subscription model, where independent startups buy AI services from the AI companies.
- The incubator model, where AI companies own the startups, and don’t offer the AI services to companies outside of their control.
While recent trends can give the impression that the industry is on track for the subscription model, I predict that the incubator model will better describe where the industry ends up.
So why isn’t this incubator model more common already? Traditional startup incubators, which provide funding and mentorship to new companies, have a major bottleneck: finding and supporting talented human founders. An incubator’s success is limited by how many visionary people it can find. With rare exceptions such as Elon Musk, founders can only run one company at a time.
Advantages
In contrast, when AIs do almost all the labor at the incubated startups, the labor supply is constrained mostly by the availability of compute, or maybe of training data. When a startup needs more talent, it mostly just spins up another copy of an existing AI.
I see several other potentially important reasons for shifting to the incubator approach.
Less need for safety testing.
If access to bleeding edge AIs is limited to trusted employees, the AI companies face much less risk of careless/malicious users causing the AI to do something harmful.
I’m reminded of the accident that killed Uber’s self-driving car project. That’s not an ideal example, due to Uber’s unusual recklessness. But I suspect some AI companies are eager enough to deliver value to their users that they’re on track for a subtler version of that mistake.
Maybe a better example would be the possibility of an AI organizing a harmful cult. That’s the kind of problem that’s likely to show up in a mass-released chatbot, but not in an AI used purely for business purposes within the company that created it. That’s one reason why I expect AI companies to become slower to release their best AIs.
Trade secrets.
If competitors get access to an AI via a chatbot subscription, they can more easily replicate its capabilities. I’m unsure how important this will be.
Venture capital profits.
Currently, a nontrivial fraction of the value of a startup goes to rewarding VCs for identifying the most promising founders. When the founders are instead created and trained by the AI companies, the AI companies will be better at evaluating the founders than the VCs will be. The incubator approach cuts out the (newly obsolete) middleman.
Bubble?
I’ll guess that in the 2028-2033 time frame, a leading AI company will be able to produce somewhere between 1 and 10 successful startups per year that get valuations in the $1 billion to $100 billion range. The lower end of those ranges would be bad news for AI companies that seem likely to have invested $1 trillion each in datacenters. The upper end of those ranges would represent a return on investment of around 100% per year.
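To make the arithmetic behind that claim explicit, here’s a minimal back-of-envelope sketch. The numbers are just the guesses above, not data: at the low end, one $1 billion startup per year barely dents a $1 trillion datacenter bill, while at the high end, ten $100 billion startups per year roughly match the full investment, i.e. about a 100% annual return.

```python
# Back-of-envelope ROI for the incubator model. All inputs are the post's
# guesses (assumptions), not measured data.

DATACENTER_INVESTMENT = 1e12  # assumed total datacenter spend, in dollars

scenarios = {
    "pessimistic": {"startups_per_year": 1,  "valuation_each": 1e9},
    "optimistic":  {"startups_per_year": 10, "valuation_each": 100e9},
}

for name, s in scenarios.items():
    # Startup value created per year, compared against the sunk investment.
    value_per_year = s["startups_per_year"] * s["valuation_each"]
    annual_roi = value_per_year / DATACENTER_INVESTMENT
    print(f"{name}: ~${value_per_year / 1e9:,.0f}B/year of startup value, "
          f"~{annual_roi:.1%} annual return on $1T")

# Output:
# pessimistic: ~$1B/year of startup value, ~0.1% annual return on $1T
# optimistic: ~$1,000B/year of startup value, ~100.0% annual return on $1T
```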
Some of my uncertainty here reflects uncertainty as to how much competition there will be between startups.
Will the various AIs all think similarly enough that they all pursue the same business plans in each incubator?
Is each AI company going to create a startup that researches the same treatment for aging as the other AI companies? I think that for some such treatments, the patent system will stop competition at an early stage. I expect startups like this to be produced a bit less than once per year in the 2028-2033 time frame, but the recent example of Eli Lilly suggests that one might be worth a trillion dollars.
Will each AI company create nanotech startups with different approaches to achieving Drexler’s vision, with one approach succeeding much earlier than the others? Or will the world end up with several different nanotech companies that compete fairly intensely with each other? I’ll guess that a company with a monopoly on Drexlerian nanotech would be worth trillions. But I predict there will be a fair amount of competition.
It seems likely that some AI-run startups will be hedge funds. The simplest approaches would likely produce multiple hedge funds using nearly identical strategies and making mostly the same trades (due to being trained on mostly the same data), which means their competition with each other would wipe out most of their advantage. I expect AI companies will improve on this somewhat, maybe with a combination of creative prompting plus some extra randomness. They might manage to create a new hedge fund each month, each one worth a few billion dollars.
I conclude that those trillion dollar datacenters will be reasonable gambles, but I’ll still focus most of my investment on companies that help build the datacenters rather than on those that own the AIs. And the AI companies can hedge their own risk by pursuing some other business plans with their less advanced AIs.
P.S. SemiAnalysis (paywalled) suggests a different business model that is more feasible today: roughly an ad-based strategy, plausibly appropriate for the 2026 to 2028 time frame.