All posts by Peter

Book review: How Social Science Got Better: Overcoming Bias with More Evidence, Diversity, and Self-Reflection, by Matt Grossmann.

It’s easy for me to become disenchanted with social science when so much of what I read about it is selected from the most pessimistic and controversial reports.

With this book, Grossmann helped me to correct my biased view of the field. While plenty of valid criticisms have been made about social science, many of the complaints lobbed against it are little more than straw men.

Grossmann offers a sweeping overview of the progress that the field has made over the past few decades. His tone is optimistic, hearkening back to Steven Pinker’s The Better Angels of Our Nature, while maintaining a rigorous (but dry) style akin to the less controversial sections of Robin Hanson’s The Age of Em. Throughout the book, Grossmann aims to outdo even Wikipedia in his use of a neutral point of view.


I’m having trouble keeping track of everything I’ve learned about AI and AI alignment in the past year or so. I’m writing this post in part to organize my thoughts, and to a lesser extent I’m hoping for feedback about what important new developments I’ve been neglecting. I’m sure that I haven’t noticed every development that I would consider important.

I’ve become a bit more optimistic about AI alignment in the past year or so.

I currently estimate a 7% chance AI will kill us all this century. That’s down from estimates that fluctuated from something like 10% to 40% over the past decade. (The extent to which those numbers fluctuate implies enough confusion that it only takes a little bit of evidence to move my estimate a lot.)

I’m also becoming more nervous about how close we are to human-level and transformative AGI. And I feel uncomfortable that I still don’t have a clear understanding of what I mean when I say human-level or transformative AGI.


There has been a fair amount of research suggesting that beyond some low threshold, additional money does little to increase a person’s happiness.
Here’s a research report (see also here) indicating that the effect of money has sometimes been underestimated because researchers use income as a measure of money, when wealth has a higher correlation with happiness.
There’s probably more than one reason for this. Wealth produces a sense of security that isn’t achieved by having a high income but spending that income quickly. Also, people with high savings rates may tend to be those who are easily satisfied with their status, whereas those who don’t save despite high incomes may have a strong need to show off their incomes in order to compete for status (and since competition for status is in some ways a zero-sum game, many of them will fail).

Szabo on Global Warming

Nick Szabo has a very good post on global warming.
I have one complaint: he says “acid rain in the 1970s and 80s was a scare almost as big as global warming is today”. I remember the acid rain concerns of the 1970s well enough to know that this is a considerable exaggeration. Acid rain alarmists said a lot about the potential harm to forests and statues, but to the extent they talked about measurable harm to people, they were a good deal vaguer than global warming alarmists are today. If their claims could have been quantified, they would probably have implied at least an order of magnitude less measurable harm to people than what mainstream academics suggest global warming could cause.

Mike Linksvayer has a fairly good argument that raising X dollars by running ads on Wikipedia won’t create more conflict of interest than raising X dollars some other way.
But the amount of money an organization handles has important effects on its behavior that are somewhat independent of the source of the money, and the purpose of ads seems to be to drastically increase the money that the organization raises.
No single example provides compelling evidence in isolation, but comparing a broad range of organizations with $100 million revenues to a broad range of organizations run by volunteers who raise only enough money to cover hardware costs, I see signs of big differences in the attitudes of the people in charge.
Wealthy organizations tend to attract people who want (or corrupt people into wanting) job security or revenue maximization, whereas low-budget volunteer organizations tend to be run by people motivated by reputation. If reputational motivations have worked rather well for an organization (as I suspect they have for Wikipedia), we should be cautious about replacing them with financial incentives.
It’s possible that the Wikimedia Foundation could spend hundreds of millions of dollars wisely on charity, but the track record of large foundations does not suggest that should be the default expectation.

Traffic Cops

More evidence that people strongly overestimate the need for government.
The latest issue of Reason magazine has a nice report (based on this report in Motoring) that Ukraine fired all of the country’s traffic cops, and preliminary evidence indicates that the predictions of increased traffic accidents were false.

Serenity

The reports I’d been reading about this movie led me to expect a must-see movie. I was somewhat disappointed, though it was probably worth seeing. It’s more intelligent but more violent than a Star Trek movie, and the rationalizations for the violence aren’t as contrived as in a typical violent movie. But being more intelligent than most big-budget movies is hardly enough to make a movie great. Maybe this is close to the best that we can expect from a movie aimed at a large audience. If so, that just makes me impatient for the day when animation becomes realistic and affordable enough that the artists who produce science fiction movies can target smaller (more sophisticated) markets and experiment with as much variety as the book market currently supports.