Context: looking for an alternative to a pause on AI development.

There’s some popular desire for software to be explainable when it’s used for decisions such as whether to grant someone a loan. That desire alone is not sufficient reason to risk crippling AI progress. But in combination with other concerns about AI, it seems promising.

Much of this popular desire likely comes from people who have been (or expect to be) denied loans, and who want to scapegoat someone or something to avoid admitting that they look unsafe to lend to because they’ve made poor decisions. I normally want to avoid regulations that are supported by such motives.

Yet an explainability requirement shows some promise at reducing the risks from rogue AIs.

Continue Reading

Book review: Outlive: The Science and Art of Longevity, by Peter Attia.

This year’s book on aging focuses mostly on healthspan rather than lifespan, in an effort to combat the tendency of people in the developed world to have a wasted decade around age 80.

Attia calls his approach Medicine 3.0. He wants people to pay a lot more attention to their lifestyle starting a couple of decades before problems such as diabetes and Alzheimer’s create obvious impacts.

He complains that Medicine 2.0 (i.e. mainstream medicine) treats disease as a binary phenomenon, whereas there’s lots of evidence that age-related diseases develop slowly over periods of more than a decade.

He’s not aiming to cure aging. He aims to enjoy life until age 100 or 120.

Continue Reading

Partly in response to calls for a pause in the development of AGI, Robin Hanson suggests liability rules for risks related to AGI rapidly becoming powerful.

My intuitive reaction was to classify foom liability as equivalent to a near-total ban on AGI.

Now that I’ve found time to think more carefully about it, I want to advocate foom liability as a modest improvement over any likely pause or ban on AGI research. In particular, I want the most ambitious AI labs worldwide to be required to have insurance against something like $10 billion to $100 billion worth of damages.

Continue Reading

Book review: How the World Became Rich: The Historical Origins of Economic Growth, by Mark Koyama and Jared Rubin.

This is a well-written review of why countries differ in wealth, which means it’s mostly about the industrial revolution.

The authors predominantly adopt an economist’s perspective and somewhat neglect the perspective of historians, but they manage to present most major viewpoints fairly.

Continue Reading

I previously said:

I see little hope of a good agreement to pause AI development unless leading AI researchers agree that a pause is needed, and help write the rules. Even with that kind of expert help, there’s a large risk that the rules will be ineffective and cause arbitrary collateral damage.

Yoshua Bengio has a reputation that makes him one of the best people to turn to for such guidance. He has now suggested restrictions on AI development that are targeted specifically at agenty AI.

If turned into a clear guideline, that suggestion would be a much more desirable way of slowing the development of dangerous AI. Alas, Bengio seems to admit that he isn’t yet able to provide that clarity.

Continue Reading

Book review: Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre.

Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.

The title’s battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes. But it seems odd to label them battlegrounds. Wouldn’t “resources” be a better description?

Scharre knows enough about the US military that I didn’t detect flaws in his expertise there. He has learned enough about AI to avoid embarrassing mistakes, i.e. he avoided making claims that an AI falsified in the time it took to publish the book.

Scharre has clear political biases. E.g.:

Conservative politicians have claimed for years – without evidence – that US tech firms have an anti-conservative bias.

(Reminder: The Phrase “No Evidence” Is A Red Flag For Bad Science Communication.) But he keeps those biases separate enough from his military analysis that they aren’t a reason to avoid reading the book.

Continue Reading