AI 2027 portrays two well-thought-out scenarios for how AI is likely to impact the world toward the end of this decade.

I expect those scenarios will prove to be moderately wrong, but close enough to be scary. I also expect that few people will manage to make forecasts that are significantly more accurate.

Here are some scattered thoughts that came to mind while I read AI 2027.

The authors are fairly pessimistic. I see four key areas where their assumptions seem to lead them to see more danger than do more mainstream experts. They see:

  • a relatively small capabilities lead being enough for a group to conquer the world
  • alignment being more difficult
  • deception being more difficult to detect
  • AI companies being less careful than is necessary

I expect that the authors are appropriately concerned about roughly two of these assumptions, and a bit too pessimistic about the others. I’m hesitant to bet on which assumptions belong in which category.

They don’t focus much on justifying those assumptions. That’s likely wise, since prior debates on those topics have not been very productive. Instead, they’ve focused more on when various changes will happen.

This post will focus on aspects of the first two assumptions for which I expect further analysis to be relatively valuable.

Continue Reading

I have an idea for a fairly cheap strategy for slightly reducing the risk of catastrophic harm from AI.

This post was inspired by a recent LessWrong post How AI Takeover Might Happen in 2 Years.

That post describes a scenario in which most humans die, mainly because the first AI powerful enough to attempt world conquest sees a risk that other AIs, imperfectly aligned with it, will soon become dangerous. That perceived risk pushes it toward a few violent pivotal processes, rather than a combination of persuasion and negotiation.

The specific scenario in that post seems fairly unrealistic. I see less than a 5% chance that a leading AI within the next 10 years will want to attempt that kind of world conquest. But it’s close enough to being realistic that I want to analyze a class of scenarios that are similar to it.

I want to nudge AIs in the direction of being more willing to cooperate with each other, in order to reduce the risk of such a conquest. Specifically, I want them to negotiate a peaceful outcome instead of using pre-emptive destruction.

Continue Reading

Trump has become seriously overconfident. Republicans in Congress now face a dilemma similar to what Democrats faced last summer when it became apparent how senile Biden was.

The Senate vote against Canadian tariffs is just the beginning of congressional resistance.

Voters in swing states a month or two from now will put more pressure on Republican politicians to change course. As with Biden, leaders will be slow to speak out against Trump until after most of them have privately decided on some sort of break with him.

It seems very likely that there will be some sort of dramatic confrontation between Trump and Congress and/or the courts. The outcome may be quite messy, but I don’t see how Trump can avoid being widely recognized as a pathetic loser. If the Republican party doesn’t distance itself enough from Trump soon, it will become a minority party for quite some time.

I doubt that a recession could be avoided merely by Trump conceding to reduce tariffs. The uncertainty that he creates is roughly half the problem. Economic recovery depends on a clear signal that the US is a safe place for business, such as Trump leaving office or being restrained by another branch of government.

What does this mean for the stock market?

The historical analogies that I can find aren’t very close, but weak analogies are better than nothing.

My main analogy is the Carter Administration’s credit controls (March 1980), which caused a short recession. The market dropped 7% in two weeks after they were announced. It took nearly 2.5 years for a sustained recovery to begin. But the harm from the credit controls, which were clearly intended as a temporary measure, probably ended within a couple of months.

My second analogy is 9/11. The market dropped nearly 12% in 10 days, then recovered enough that the decline in 2002 was likely unrelated. If the US unites against tariffs as clearly as it united against Al Qaeda, then I’d be buying now. But Trump would likely need to make another mistake to produce that much unity.

Tariffs are likely doing somewhat more harm than those two events did, but it’s fairly plausible that markets have already mostly priced in the effects.

AI stocks look cheap. I expect to buy more sometime over the next few months. But the uncertainties will likely delay the next bull market long enough that it’s wise to wait for more news before buying.

I participated in the TRIIM-X trial, a phase 2 test by Intervene Immune, intended to regrow the thymus. Regrowing the thymus likely delays age-related declines in health.

I’m also an investor in Intervene Immune.

Here’s a video presentation of some results of the trial. It confirms the moderately impressive evidence from the original TRIIM trial.

The main ingredients of the treatment are human growth hormone, metformin, and DHEA.

Continue Reading

I’ve been creating prediction markets on Manifold in order to better predict AI strategies. Please trade in them.

If I get a bit more trading in these markets, I will create more AI-related markets. Stay tuned here, or follow me on Manifold.

In 2015, I posted some investing advice for people who only spend a few hours per year on investing.

I intended to review it after five years, but a pandemic distracted me. It looks like this whole decade will be too busy for me to write everything that I want to write. But I’ve recently become able to write faster, maybe due to a feeling of urgency about AI transforming the world soon. So I’m getting a few old ideas for blog posts off my to-do list, so that I can devote most of my attention to AI when the world becomes wild.

My advice worked poorly enough that I’m too discouraged to quantify the results.

Continue Reading

I’ve been using Modere’s Curb, a supplement intended to produce healthy GLP-1 levels.

I started taking it in late October, hoping to lose enough weight (3 to 5 pounds?) that I could stop taking Rauwolfia to handle my blood pressure.

My weight dropped 2 pounds in late November, to 149. Since then my weight has been more stable than before. Any remaining trend has been too small to measure. I suspect that the timing of my weight loss is due to getting more exercise than usual during the last week of November.

Continue Reading

Book review: Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World, by Darren McKee.

This is by far the best introduction to AI risk for people who know little about AI. It’s appropriate for a broader range of readers than most layman-oriented books are.

It was published 14 months ago. In this rapidly changing field, most AI books say something that gets discredited by the time they’re that old. I found no clear example of such obsolescence in Uncontrollable (but read on for a set of controversial examples).

Nearly everything in the book was familiar to me, yet the book prompted me to reflect better, thereby changing my mind modestly – mostly re-examining issues that I’ve been neglecting for the past few years, in light of new evidence.

The rest of this review will focus on complaints, mostly about McKee’s overconfidence. The features that I complain about reduce the value of the book by maybe 10% compared to an ideal book. But that ideal book doesn’t exist, and I’m not wise enough to write it.

Continue Reading

The standout announcement from the recent Foresight Vision Weekend came from Openwater, who presented a novel cancer treatment.

I’ve been a bit slow to write about it, in part because my initial reaction was that it’s too good to be true, and most big claims of medical advances are not true.

TL;DR: They’ve developed a cheap ultrasound device that can selectively kill cancer cells by exploiting their unique resonant frequencies, similar to how an opera singer can shatter a wine glass.

Continue Reading