
All posts by Peter

Buy the Yuan?

I’m curious how U.S. politicians rationalize their support for tariffs to punish China for keeping the yuan low (i.e. propping up the dollar). There’s an obvious alternative if the yuan is as clearly undervalued as most people claim: have the U.S. government buy as many yuan as it can. That would make it more expensive for China to hold the yuan down, and would probably do so at a profit to the U.S.
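The claimed profit opportunity comes down to simple arithmetic. The sketch below uses illustrative round numbers (the rough magnitudes claimed at the time, not figures from this post), and `profit_from_revaluation` is a hypothetical helper name:

```python
# Illustrative sketch of the claimed profit opportunity. The rates are
# hypothetical round numbers (yuan per dollar), not figures from this post.

def profit_from_revaluation(usd_spent, pegged_rate, market_rate):
    """USD profit from buying yuan at the pegged rate and selling after
    the rate moves to its market-clearing value.

    Rates are quoted in yuan per dollar, so a lower number means a
    stronger yuan."""
    yuan_bought = usd_spent * pegged_rate
    usd_after = yuan_bought / market_rate
    return usd_after - usd_spent

# If the peg holds the yuan at 8.28/USD while its market-clearing rate
# would be 6.9/USD (roughly the ~20% undervaluation commonly claimed),
# every $1M spent on yuan gains about $200k when the peg gives way.
gain = profit_from_revaluation(1_000_000, 8.28, 6.9)
```

The same arithmetic shows why the policy raises China’s cost of maintaining the peg: every dollar of yuan the U.S. buys is a dollar China must sell yuan against to keep the rate fixed.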
There may be some Chinese rules which pose obstacles to doing this, but I doubt U.S. politicians know enough about those rules to be sure the U.S. couldn’t get around them.
Note that I didn’t ask what their real motives are (tariffs benefit special interests more than buying the yuan would, which provides a strong hint). I’m interested in what excuses they would give.

I attended an interesting talk yesterday at SJSU by Dwight R. Lee on the topic “Misers vs. Philanthropists: And the Winner Is?” (part of a provocative lecture series sponsored by the Econ department).
His attempt to prove that misers help the world more than philanthropists do was only a partial success, mainly because his assumptions about how well philanthropists spend their money were somewhat arbitrary and unconvincing (probably too favorable to many philanthropists). He did a good job of explaining why philanthropists are overrated and misers underrated. The miser who hides money in his basement provides diffuse benefits to all other holders of money by driving up money’s value, so nobody has much incentive to notice or understand the effect. The beneficiaries of philanthropists are much more concentrated groups of people, who notice the benefits in the ways that public choice theory describes for comparable political handouts.
I’m a bit puzzled as to whether the benefits he attributes to misers are often offset by central bank policies.
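Lee’s point about the miser can be illustrated with a toy quantity-theory calculation (P = MV/Q). All the numbers and the `price_level` helper are invented for illustration, and (per the puzzle above) a central bank targeting the price level could offset the whole effect:

```python
# A toy quantity-theory illustration (P = M*V / Q) of the miser's diffuse
# gift: hoarded cash shrinks the circulating money supply, lowering the
# price level and raising the purchasing power of everyone else's money.
# All numbers are invented for illustration.

def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

M, V, Q = 1000.0, 2.0, 400.0
p_before = price_level(M, V, Q)            # 5.0

hoarded = 50.0                             # the miser buries 5% of all money
p_after = price_level(M - hoarded, V, Q)   # 4.75

# Everyone else's cash now buys about 5.3% more, a benefit spread so
# thinly across all money holders that no one has much reason to notice it.
purchasing_power_gain = p_before / p_after - 1
```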
The best part of his talk was a great analogy that conveys concisely, in a way an average person can understand, why the fact that workers get laid off isn’t an argument against free markets. He asked whether anyone in the audience liked pain. Nobody raised a hand. He then asked whether anyone would want to be completely without pain. Again nobody raised a hand, correctly anticipating his description of people who never feel pain (a condition known as CIPA, congenital insensitivity to pain with anhidrosis, which significantly reduces life expectancy). Likewise with markets: plant closures are a symptom of mistakes in resource allocation, and we can expect systems that suppress those symptoms to perpetuate the mistakes.

Temple Grandin’s latest book Animals in Translation has a couple of ideas that deserve some wider discussion. (The book as a whole is disappointing – see my reviews on Amazon for some of my complaints).
She reports that Con Slobodchikoff has shown that prairie dogs have a language that includes nouns, adjectives, and verbs, and they can apparently combine words to describe objects they haven’t seen before. This seems sufficiently inconsistent with what I’ve read about nonhuman languages (e.g. in Pinker’s books) that it deserves more attention than it has gotten. I can’t find enough about it on the web to decide whether to believe it, and it will take some time for me to get a paper version of Slobodchikoff’s descriptions of the research.
Grandin has an interesting idea about the coevolution of man and dogs. Domestication of animals causes their brains to become smaller, presumably because they come to rely on humans for some functions that they previously needed to handle themselves. It seems that human midbrains shrank about 10% around 10,000 years ago, about when dogs may have become domesticated. That is what we would expect if humans came to rely on dogs for many smelling tasks.

Book Review: FAB: The Coming Revolution on Your Desktop–From Personal Computers to Personal Fabrication by Neil Gershenfeld
This book brings welcome attention to the neglected field of personal, general-purpose manufacturing. He argues that the technology is at roughly the stage computing was at when minicomputers were the leading edge, that it is good enough to tell us something about how full-fledged assemblers as envisioned by Drexler will be used, and that the main obstacle to people using it to build what they want is ignorance of what can be accomplished.
The book presents interesting examples of people building things most would assume were beyond their ability. But it does not do a good job of explaining what can and can’t be accomplished. Too much of the book reads like a fund-raising appeal for a charity, describing a needy person who was helped rather than focusing on the technology or the design process. He is rather careless about choosing which technical details to provide. He gives examples of assembly language (something already widely documented, and hard enough to use that it will deter most of his target users from designs that need it), yet when he describes novel ideas, such as “printing” a kit that can be assembled into a house, he is too cryptic for me to guess whether that method would improve on standard ones.
I’ve tried thinking of things I might want to build, and I’m usually no closer to guessing whether it’s feasible than before I read the book. For example, it would be nice if I could make a prototype of a seastead several feet in diameter, but none of the examples the book gives appear to involve methods which could make sturdy cylinders or hemispheres that large.
The index leaves much to be desired – minicomputers are indexed under computers, and open source is indexed under software, when I expected to find them under m and o.
And despite the lip service he pays to open source software, the CAM software he wrote comes with a vague license that doesn’t meet the standard definition of open source.

In the latest issue of Econ Journal Watch, Bryan Caplan and Donald Wittman hold an inconclusive debate on whether democracy produces results that are sensibly related to voters’ interests. They come much closer than most such discussions to using the right criteria for answering that question.
But they fail because they implicitly assume that inaccuracies in voters’ beliefs are random mistakes. If that were the case, Wittman’s replies to Caplan would convince me that Caplan’s evidence of voter irrationality is as weak as the arguments that consumer irrationality prevents markets from working, and that Wittman’s proposed experiments might tell us a good deal about how well democracy works.
On the other hand, if you ask whether voters have incentives to hold beliefs that differ from the truth in nonrandom ways, you will see a fairly strong argument that voters’ inadequate incentive to hold accurate beliefs causes systematic problems with democracy.
Imagine that you live near a steel mill. Believing that steel import restrictions are bad then increases the risk that your acquaintances will dislike you (because your views endanger their jobs or their friends’ jobs), which will probably bias you toward supporting protectionism.
Or take the issue of how gun control affects crime rates. There are some obvious patterns in beliefs about this which the random-mistake hypothesis fails to predict, whereas the theory that people adopt beliefs in order to signal that they think like their friends and neighbors (combined with regional variations in gun ownership that created some bias before people started thinking about the issue) does a much better job of predicting the observed patterns of belief.
Because this seems to be a widespread problem with democracy, I’m fairly certain democracy works poorly compared to markets and compared to forms of government such as Futarchy which improve the incentives for policies to be based on accurate beliefs.

I’ve switched from b2evolution to WordPress for the software that runs this blog.
I had been planning to do this for several weeks because b2evolution was too spam-friendly, and WordPress seems to have a competent approach to comment spam (holding comments for moderation if they don’t come from someone who previously made an approved comment). I was forced to shut down the b2evolution software yesterday when the load it put on my system became excessive.
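The moderation rule just described boils down to a simple whitelist. Here is a minimal sketch of my reading of that policy, not WordPress’s actual code; the function names are hypothetical:

```python
# A sketch of the comment-moderation rule: publish immediately only for
# authors who already have an approved comment; hold everyone else.
# This is my reading of the policy, not WordPress's implementation.

approved_authors = set()   # authors with at least one approved comment

def handle_comment(author, publish, hold):
    """Publish immediately only if the author was approved before;
    otherwise hold the comment in the moderation queue."""
    if author in approved_authors:
        publish()
    else:
        hold()

def approve(author):
    """A moderator approving one comment whitelists the author's
    future comments."""
    approved_authors.add(author)
```

A first-time commenter is held; once a moderator approves one comment, that author’s later comments go straight through, which is what makes the scheme cheap for regulars and expensive for spammers.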
I think I’ve fixed all the permalinks to work as before.

Book Review: On Intelligence by Jeff Hawkins

This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.
I wouldn’t call this book a major breakthrough, but I expect that it will produce some nontrivial advances in the understanding of the human brain.
The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having given thought to any of the issues involved beyond deciding that an AI is unlikely to have human motives. That leaves a wide variety of other possible goal systems, many of which would be as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.
For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach seems approximately as powerful as NARS, but more likely to tempt designers into building in goals other than obedience.

Robin Hanson has another interesting paper on human attitudes toward truth and on how they might be improved.
See also some related threads on the extropy-chat list here and here.
One issue that Robin raises involves disputes between us and future generations over how much we ought to constrain our descendants to be similar to us. He is correct that some of this disagreement results from what he calls “moral arrogance” (i.e. at least one group of people overestimating their ability to know what is best). But even if we and our descendants were objective about analyzing the costs and benefits of the alternatives, I would expect some disagreement to remain, because different generations will want to maximize the interests of different groups of beings. Conflicting interests between two groups that exist at the same time can in principle be resolved by one group paying the other to change its position. But when one group exists only in the future, and its existence depends partly on which policy is adopted now, it’s difficult to see how such disagreements could be resolved in a way that all could agree upon.