One of the weakest claims in The Age of Em was that AI progress has not been accelerating.
J Storrs Hall (aka Josh) has a hypothesis that AI progress accelerated about a decade ago due to a shift from academia to industry. (I’m puzzled why the title describes it as a coming change, when it appears to have already happened).
I find it quite likely that something important happened then, including an acceleration in the rate at which AI affects people.
I find it less clear whether that indicates a change in how fast AI is approaching human intelligence levels.
Josh points to airplanes as an example of a phase change being important.
I tried to compare AI progress to other industries which might have experienced a similar phase change driven by hardware progress. But I was deterred by the difficulty of estimating progress in those industries during the periods when they were driven by academia.
One industry I tried to compare to was photovoltaics, which seemed to be hyped for a long time before becoming commercially important (10-20 years ago?). But I see only weak signs of a phase change around 2007, from looking at Swanson’s Law. It’s unclear whether photovoltaic progress was ever dominated by academia enough for a phase change to be important.
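(For concreteness, here is a minimal sketch of what "looking at Swanson's Law" could amount to: fit log price against log cumulative shipments separately before and after a candidate break year, and compare the implied learning rates. This is only an illustration, not the analysis behind the claim above; the function name and inputs are hypothetical, and the caller would have to supply the price and shipment series.)

import numpy as np

def learning_rates(years, cum_shipments, prices, break_year):
    # Swanson's Law: module prices fall by a roughly constant fraction for
    # every doubling of cumulative shipments, i.e. log2(price) is roughly
    # linear in log2(cumulative shipments).
    years = np.asarray(years)
    logq = np.log2(np.asarray(cum_shipments, dtype=float))
    logp = np.log2(np.asarray(prices, dtype=float))

    def rate(mask):
        slope, _ = np.polyfit(logq[mask], logp[mask], 1)
        # slope = change in log2(price) per doubling of shipments;
        # the learning rate is the fractional price drop per doubling.
        return 1.0 - 2.0 ** slope

    return rate(years < break_year), rate(years >= break_year)

A genuine phase change around 2007 would show up as a clearly larger learning rate in the later segment; roughly equal rates in the two segments would match the weak signs I mentioned.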
Hypertext is a domain where a clear phase change happened in the early 1990s. It experienced a nearly foom-like rate of adoption when internet availability altered the problem from one that required a big company to finance the hardware and marketing to one that could be solved by simply giving away a small amount of code. But this change in adoption was not accompanied by a change in the power of hypertext software (beyond changes due to network effects). So this seems like weak evidence against accelerating progress toward human-level AI.
What other industries should I look at?
In the late nineties, the change in power of hypertext was immense. Apple's HyperCard is an example of what was out there, as were various laserdisks full of linked text. The first versions of HTML and HTTP didn't have much more ability to link text, but the network effects were astronomical. It seems mildly possible (definitely worth investigating anyway) that the network effects (or other causes of exponential take-off) of AI may be similar.
At Xerox PARC, we definitely noticed a phase change after the introduction of the PC (and later the Macintosh). Before that, it was cost-effective for a private lab to develop its own hardware in order to be ahead of the COTS marketplace and investigate what would be possible in the future, when people had access to more computation. After the PC, it was silly to develop purpose-built hardware: if it wasn't going to ship in commercial quantities, it wouldn't be competitive after the first release. I think the same was true of graphics hardware when gaming took over. SGI was able to stay ahead of the curve for a long time, but once hardware graphics accelerators started taking off for gaming, it was never able to keep up.
I think cameras are right at the edge. Apparently there are enough camera buffs that it still makes sense to develop hardware for them, but my understanding is that the cameras included with smartphones have caught up, and the phone manufacturers have such volume, and enough drive to compete on camera hardware, that the standalone camera form factor only holds on because of that niche market.
Chris,
Those PC and camera phase changes might be relevant if AI is taking off due to economies of scale. But I take Josh’s most important point to be that AI research incentives have shifted. Industry has more incentive than academia to focus on questions that matter. For computers and cameras, if their progress was ever dominated by academic research, the shift to research with better incentives happened well before we were born.