MIRI has produced a potentially important result (called Garrabrant induction) for dealing with uncertainty about logical facts.
The paper is somewhat hard for non-mathematicians to read. This video provides an easier overview and more context.
It uses prediction markets! “It’s a financial solution to the computer science problem of metamathematics”.
It shows that we can evade disturbing conclusions such as Gödel incompleteness and the liar paradox by being merely very confident, rather than mathematically certain, about logically deducible facts. That's analogous to the difference between treating beliefs about empirical facts as probabilities rather than as boolean values.
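To make the prediction-market intuition concrete, here is a minimal Python sketch. It is not the paper's construction; it assumes a standard logarithmic market scoring rule (LMSR) market maker, and the class and parameter names are invented for illustration. The idea it shows: the market price of a "true" share on a logical sentence acts as a credence, and as traders (standing in for accumulating partial proofs) buy shares, that credence climbs toward 1 without ever reaching it.

```python
import math


class LMSRMarket:
    """Toy two-outcome (true/false) market on a single logical sentence.

    Illustrative only: the actual logical-induction algorithm runs
    polynomial-time traders over all sentences simultaneously.
    """

    def __init__(self, liquidity: float = 10.0):
        self.b = liquidity   # higher liquidity = prices move more slowly
        self.q_true = 0.0    # outstanding "true" shares
        self.q_false = 0.0   # outstanding "false" shares

    def credence(self) -> float:
        """Current price of a "true" share: always strictly between 0 and 1."""
        e_t = math.exp(self.q_true / self.b)
        e_f = math.exp(self.q_false / self.b)
        return e_t / (e_t + e_f)

    def _cost(self, q_true: float, q_false: float) -> float:
        """LMSR cost function: C(q) = b * ln(sum of exp(q_i / b))."""
        return self.b * math.log(math.exp(q_true / self.b) + math.exp(q_false / self.b))

    def buy_true(self, shares: float) -> float:
        """Buy "true" shares; returns the cost paid. Buying raises the credence."""
        old = self._cost(self.q_true, self.q_false)
        self.q_true += shares
        return self._cost(self.q_true, self.q_false) - old


market = LMSRMarket(liquidity=10.0)
# As evidence for the sentence accumulates, traders keep buying "true":
# the credence approaches, but never equals, certainty.
for step in range(1, 6):
    paid = market.buy_true(5.0)
    print(f"trade {step}: paid {paid:.3f}, credence now {market.credence():.4f}")
```

Running this prints a credence rising from roughly 0.62 after the first trade to roughly 0.92 after the fifth, which is the "very confident but never certain" behavior described above.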
I’m somewhat skeptical that it will have an important effect on AI safety, but my intuition says it will produce enough benefits elsewhere to become at least as famous as Pearl’s work on causality.