Artificial Intelligence

[I mostly wrote this to clarify my thoughts. I’m unclear whether this will be valuable for readers.]

I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.

I feel confused about what this implies for the kind of long-term planning and strategizing that would enable an AI to create large-scale harm if it is poorly aligned.

Is the ability to achieve long-term goals hard for an AI to develop?

Continue Reading

Disagreements about what we value seem to explain maybe 10% of the disagreements over AI safety. This post will try to explain how I think about which values I care about perpetuating into the distant future.

Robin Hanson helped to clarify the choices in Which Of Your Origins Are You?:

The key hard question here is this: what aspects of the causal influences that lead to you do you now embrace, and which do you instead reject as “random” errors that you want to cut out? Consider two extremes.
At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition.

At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to “win” at that game. … In this view, everything about you that won’t help your descendants be selected in the long run is a random error that you want to detect and reject.

In other words, the more idiosyncratic our criteria are for what we want to preserve into the distant future, the less we should expect to succeed.

Continue Reading

Book review: The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, by Mustafa Suleyman.

An author with substantial AI expertise has attempted to discuss AI in terms that the average book reader can understand.

The key message: AI is about to become possibly the most important event in human history.

Maybe 2% of readers will change their minds as a result of reading the book.

A large fraction of readers will come in expecting the book to be mostly hype. They won’t look closely enough to see why Suleyman is excited.

Continue Reading

Context: looking for an alternative to a pause on AI development.

There’s some popular desire for software to be explainable when it is used for decisions such as whether to grant someone a loan. That desire is not, by itself, sufficient reason to risk crippling AI progress. But in combination with other concerns about AI, it seems promising.

Much of this popular desire likely comes from people who have been (or expect to be) denied loans, and who want to scapegoat someone or something to avoid admitting that they look unsafe to lend to because they’ve made poor decisions. I normally want to avoid regulations that are supported by such motives.

Yet an explainability requirement shows some promise at reducing the risks from rogue AIs.

Continue Reading

Partly in response to calls for a pause in AGI development, Robin Hanson suggests liability rules for risks related to AGI rapidly becoming powerful (foom).

My intuitive reaction was to classify foom liability as equivalent to a near-total ban on AGI.

Now that I’ve found time to think more carefully about it, I want to advocate foom liability as a modest improvement over any likely pause or ban on AGI research. In particular, I want the most ambitious AI labs worldwide to be required to have insurance against something like $10 billion to $100 billion worth of damages.

Continue Reading

I previously said:

I see little hope of a good agreement to pause AI development unless leading AI researchers agree that a pause is needed, and help write the rules. Even with that kind of expert help, there’s a large risk that the rules will be ineffective and cause arbitrary collateral damage.

Yoshua Bengio’s reputation makes him one of the best people to turn to for such guidance. He has now suggested restrictions on AI development that are targeted specifically at agenty AI.

If turned into a clear guideline, that would be a much more desirable method of slowing the development of dangerous AI. Alas, Bengio seems to admit that he isn’t yet able to provide that clarity.

Continue Reading

Book review: Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre.

Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.

The title’s battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes. But it seems odd to label them as battlegrounds. Wouldn’t “resources” be a better description?

Scharre knows enough about the US military that I didn’t detect flaws in his expertise there. He has learned enough about AI to avoid embarrassing mistakes. I.e., he managed to avoid claims that were falsified by AI progress during the time it took to publish the book.

Scharre has clear political biases. E.g.:

Conservative politicians have claimed for years – without evidence – that US tech firms have an anti-conservative bias.

(Reminder: The Phrase “No Evidence” Is A Red Flag For Bad Science Communication.) But he keeps those biases separate enough from his military analysis that they aren’t a reason to avoid reading the book.

Continue Reading