Book review: How Social Science Got Better: Overcoming Bias with More Evidence, Diversity, and Self-Reflection, by Matt Grossmann.

It’s easy for me to become disenchanted with social science when so much of what I read about it is selected from the most pessimistic and controversial reports.

With this book, Grossmann helped me to correct my biased view of the field. While plenty of valid criticisms have been made about social science, many of the complaints lobbed against it are little more than straw men.

Grossmann offers a sweeping overview of the progress that the field has made over the past few decades. His tone is optimistic and hearkens back to Steven Pinker’s The Better Angels of Our Nature, while maintaining a rigorous (but dry) style akin to the less controversial sections of Robin Hanson’s Age of Em. Throughout the book, Grossmann aims to outdo even Wikipedia in his use of a neutral point of view.

Continue Reading

I’m having trouble keeping track of everything I’ve learned about AI and AI alignment in the past year or so. I’m writing this post in part to organize my thoughts, and to a lesser extent I’m hoping for feedback about what important new developments I’ve been neglecting. I’m sure that I haven’t noticed every development that I would consider important.

I’ve become a bit more optimistic about AI alignment in the past year or so.

I currently estimate a 7% chance AI will kill us all this century. That’s down from estimates that fluctuated from something like 10% to 40% over the past decade. (The extent to which those numbers fluctuate implies enough confusion that it only takes a little bit of evidence to move my estimate a lot.)
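
To make that concrete, here’s a minimal sketch of a Bayes-rule update in odds form. The prior and likelihood ratio are hypothetical, chosen only to match the rough size of the moves described above:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    def update(prior, likelihood_ratio):
        """Return the posterior probability after one Bayesian update."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # A hypothetical 25% prior, combined with evidence weighed at 4:1
    # against doom, lands near 7.7%: a big move from modest evidence.
    print(update(0.25, 1 / 4))  # ~0.077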

I’m also becoming more nervous about how close we are to human-level and transformative AGI. Not to mention feeling uncomfortable that I still don’t have a clear understanding of what I mean when I say human-level or transformative AGI.

Continue Reading

I recently noticed similarities between how I decide what stock market evidence to look at, and how the legal system decides what lawyers are allowed to tell juries.

This post will elaborate on Eliezer’s Scientific Evidence, Legal Evidence, Rational Evidence. In particular, I’ll try to generalize about why there’s a large class of information that I actively avoid treating as Bayesian evidence.

Continue Reading

AI looks likely to cause major changes to society over the next decade.

Financial markets have mostly not reacted to this forecast yet. I expect it will be at least a few months, maybe even years, before markets have a large reaction to AI. I’d much rather buy too early than too late, so I’m trying to reposition my investments this winter to prepare for AI.

This post will focus on scenarios where AI reaches roughly human levels sometime around 2030 to 2035, and has effects that are at most 10 times as dramatic as the industrial revolution. I’m not confident that such scenarios are realistic. I’m only saying that they’re plausible enough to affect my investment strategies.

Continue Reading

BioVie

BioVie Inc recently reported some unusual results from a clinical trial for Alzheimer’s.

They report some mildly encouraging cognitive improvements, but the trial is only 3 months in and has no placebo group, so it’s easy to imagine they’re just seeing a placebo effect (Annovis’ results show a clear placebo effect, presumably one that influences the measurements rather than patients’ actual health).

What interested me is this:

Reduces Horvath DNA Methylation SkinBlood Clock by 3.3 years after 3 months of treatment.

Continue Reading

Book review: Investing Amid Low Expected Returns: Making the Most When Markets Offer the Least, by Antti Ilmanen.

This book is a follow-up to Ilmanen’s prior book, Expected Returns. Ilmanen has gotten nerdier in the decade between the two books. This book is for professional investors who want more extensive analysis than Expected Returns provided. This review is likewise written for professional investors; skip it if you don’t aspire to be one.

Continue Reading

Blog post review: LOVE in a simbox.

Jake Cannell has a very interesting post on LessWrong called LOVE in a simbox is all you need, with potentially important implications for AGI alignment. (LOVE stands for Learning Other’s Values or Empowerment.)

Alas, he organized it so that the most alignment-relevant ideas are near the end of a long-winded discussion of topics whose alignment relevance seems somewhat marginal. I suspect many people gave up before reaching the best sections.

I will summarize and review the post in roughly the opposite order, in hopes of appealing to a different audience. I’ll likely create a different set of misunderstandings from what Jake’s post has created. Hopefully this different perspective will help readers triangulate on some hypotheses that are worth further analysis.

Continue Reading

A conflict is brewing between China and the West.

Beijing is determined to reassert control over Taiwan. The US, and probably most of NATO, seems likely to respond by, among other things, boycotting China.

We should, of course, worry that this will lead to war between China and the US. I don’t have much insight into that risk. I’ll focus in this post on risks about which I have some insight, without meaning to imply that they’re the most important risks.

Such a boycott would be more costly than the current boycott of Russia, and the benefits would likely be smaller.

How can I predict whether the reaction to China’s action against Taiwan will be a rerun of the response to the recent Russian attack on Ukraine?

I’ll start by trying to guess the main forces that led to the boycott of Russia.

Continue Reading

I previously sounded vaguely optimistic about the Baze blood test technology. They shut down their blood test service this spring, “for the foreseeable future”. Their web site suggests that they plan to resume it someday. I don’t have much hope that they’ll resume selling it.

Shortly after I posted about Baze, they stopped reporting numbers for magnesium, vitamin D, and vitamin B12. I.e. they only told me results such as "low", "optimal", "normal", etc. This was apparently due to FDA regulations, although I’m unclear why.

I’d like to believe that Baze is working on getting permission to report results the way that companies such as Life Extension report a wide variety of tests that are conducted via LabCorp.

At roughly the same time, Thorne Research announced study results of a device that sounds very similar to the Baze device (maybe a bit more reliable?).

Thorne is partly a supplement company, but it already has enough of a focus on testing that I don’t expect it to use tests primarily for selling vitamins, the way Baze did.

I’m debating whether to invest in Thorne.

Book review: What We Owe the Future, by William MacAskill.

WWOTF is a mostly good book that can’t quite decide whether it’s part of an activist movement or aimed at a small niche of philosophy.

MacAskill wants to move us closer to utilitarianism, particularly in the sense of evaluating the effects of our actions on people who live in the distant future. Future people are real, and we have some sort of obligation to them.

WWOTF describes humanity’s current behavior as reckless, like that of an imprudent teenager. MacAskill almost killed himself as a teen by taking a poorly thought-out risk. Humanity is taking similar thoughtless risks.

MacAskill carefully avoids endorsing the aspect of utilitarianism that says everyone must be valued equally. That saves him from a number of conclusions that make utilitarianism unpopular. E.g. it allows him to be uncertain about how much to care about animal welfare. It allows him to ignore the difficult arguments about the morally correct discount rate.
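
To illustrate why that argument matters so much (this example is mine, not MacAskill’s): under standard exponential discounting, a benefit t years in the future gets weight e^(-rt), so even a modest discount rate nearly erases the distant future:

    w(t) = e^(-rt);  at r = 1%/year, w(1000) = e^(-10) ≈ 4.5 × 10^(-5)

That is, a life 1,000 years from now would count roughly 22,000 times less than one today, while a zero rate counts them equally. Which rate is morally correct makes an enormous difference to longtermist conclusions.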

Continue Reading