This is a tale of some differences between functional medicine and mainstream medicine in dealing with high cholesterol. Specifically, how I was treated at the California Center for Functional Medicine (CCFM) versus how I was treated at Kaiser.
Continue Reading

Book review: How the World Became Rich: The Historical Origins of Economic Growth, by Mark Koyama and Jared Rubin.
This is a well-written review of why different countries have different levels of wealth, which in practice means it's mostly about the industrial revolution.
The authors predominantly adopt an economist’s perspective, and somewhat neglect the perspective of historians, but manage to fairly present most major viewpoints.
Continue Reading

I see little hope of a good agreement to pause AI development unless leading AI researchers agree that a pause is needed, and help write the rules. Even with that kind of expert help, there's a large risk that the rules will be ineffective and cause arbitrary collateral damage.
Yoshua Bengio has a reputation that makes him one of the best people to turn to for such guidance. He has now suggested restrictions on AI development that are targeted specifically at agenty AI.
If turned into a clear guideline, that would be a much more desirable method of slowing the development of dangerous AI. Alas, Bengio seems to admit that he isn’t yet able to provide that clarity.
Continue Reading

Book review: Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre.
Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.
The title’s battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes. But it seems odd to label them as battlegrounds. Wouldn’t “resources” be a better description?
Scharre knows enough about the US military that I didn’t detect flaws in his expertise there. He has learned enough about AI to avoid embarrassing mistakes. I.e. he managed to avoid making claims that an AI falsified during the time it took to publish the book.
Scharre has clear political biases. E.g.:
“Conservative politicians have claimed for years – without evidence – that US tech firms have an anti-conservative bias.”
(Reminder: The Phrase “No Evidence” Is A Red Flag For Bad Science Communication.) But he keeps those biases separate enough from his military analysis that I don’t find those biases to be a reason for not reading the book.
Continue Reading

OpenAI has told us in some detail what they’ve done to make GPT-4 safe.
This post will complain about some misguided aspects of OpenAI’s goals.
Continue Reading

I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.
Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.
The tone you use in interacting with GPT will affect your conversations with it. I don’t want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won’t generate conversations that help you more.
I don’t know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there’s approximately no cost to treating GPTs as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against it.
Scott Alexander wrote a great post on how a psychiatrist’s personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).
Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.
Don’t give those journalists the attention they seek. They seek negative emotions. But don’t hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.
P.S. I refer to GPT as “it”. I intend that to nudge people toward thinking of “it” as a pronoun which implies respect.
This post was mostly inspired by something unrelated to Robin Hanson’s tweet about othering the AIs, but maybe there was some subconscious connection there. I don’t see anything inherently wrong with dehumanizing other entities. When I dehumanize an entity, that is not sufficient to tell you whether I’m respecting it more than I respect humans, or less.
Spock: Really, Captain, my modesty…
Kirk: Does not bear close examination, Mister Spock. I suspect you’re becoming more and more human all the time.
Spock: Captain, I see no reason to stand here and be insulted.
Some possible AIs deserve to be thought of as better than human. Some deserve to be thought of as worse. Emphasizing AI risk is, in part, a request to create the former earlier than we create the latter.
That’s a somewhat narrow disagreement with Robin. I mostly agree with his psychoanalysis in Most AI Fear Is Future Fear.
I like the basic idea of a pause in training increasingly powerful AIs. Yet I’m quite dissatisfied with any specific plan that I can think of.
AI research is proceeding at a reckless pace. There’s massive disagreement among intelligent people as to how dangerous this is.
Continue Reading

This week we saw two interesting bank collapses: Silvergate Capital Corporation, and SVB Financial Group.
This is a reminder that diversification is important.
The most basic problem in both cases is that the banks got their funding from a rather undiversified set of depositors, who experienced unusually large fluctuations in their deposits and withdrawals. The banks also made overly large bets on the safety of government bonds.
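To make the bond problem concrete, here is a minimal sketch with made-up numbers (not the banks' actual holdings) of how rising yields reduce the market value of a fixed-rate bond that was bought when rates were low:

```python
# Illustrative only: the bond terms and yields below are assumptions,
# not the actual positions of Silvergate or SVB.

def bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> float:
    """Present value of a bond's coupons plus principal at a given yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + yield_rate) ** years
    return pv_coupons + pv_principal

# A 10-year bond bought at par when yields were ~1.5%...
purchase_price = bond_price(1000, 0.015, 0.015, 10)   # ~1000
# ...is worth noticeably less once yields rise to ~4%.
current_price = bond_price(1000, 0.015, 0.04, 10)     # ~797

print(f"paper loss: {purchase_price - current_price:.0f} per 1000 of face value")
```

A bank that holds such bonds is fine if it can wait for them to mature, but a concentrated depositor base that withdraws all at once can force it to sell at the lower price.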
Continue Reading

Scott Alexander graded his predictions from 2018 and made new predictions for 2028.
I’m trying to compete with him. I’m grading myself as having done a bit worse than Scott.
Here’s a list of how I did (skipping a few where I agreed with Scott), followed by some predictions for 2028.
Continue Reading

Book review: How Social Science Got Better: Overcoming Bias with More Evidence, Diversity, and Self-Reflection, by Matt Grossmann.
It’s easy for me to become disenchanted with social science when so much of what I read about it is selected from the most pessimistic and controversial reports.
With this book, Grossmann helped me to correct my biased view of the field. While plenty of valid criticisms have been made about social science, many of the complaints lobbed against it are little more than straw men.
Grossmann offers a sweeping overview of the progress that the field has made over the past few decades. His tone is optimistic and hearkens back to Steven Pinker’s The Better Angels of Our Nature, while maintaining a rigorous (but dry) style akin to the less controversial sections of Robin Hanson’s The Age of Em. Throughout the book, Grossmann aims to outdo even Wikipedia in his use of a neutral point of view.
Continue Reading