Does accountability require knowing what people do when they are alone, or only when they are interacting with us? Brin makes a big deal about the need to spy on powerful people in order to hold them accountable.
If things like taking bribes or a conspiracy to assassinate a president are important factors in our lives, then Brin's answer may be right. But most of the corruption I'm aware of involves things like campaign contributions to reward politicians for, say, sugar import quotas. That kind of influence survives even when publicized (unless the society adopts a principled opposition to such influence, say by electing a Libertarian government) because the people who lose don't lose enough to alter their votes, while those who gain do. And it doesn't take anything more than a basic commitment to the rule of law rather than the rule of men to ensure that we can know (if we care enough) when politicians are harming us by means like this.
A good story usually depends on heroes and villains to make it entertaining, but this ensures that most successful novelists are biased against seeing the effects of broad social trends that result from decisions made by millions of people.
Brin demonstrates this bias more than most authors. He sees the biggest threats to freedom coming from plots by small groups of people who thwart the will of the masses. Yet he produces little evidence that this has been a common source of oppression in the past. There is no shortage of examples of oppression that was openly supported by significant fractions of the population (slavery, South African apartheid, witch burnings, many wars, etc.). The worst examples in recent memory, such as Nazi Germany, Pol Pot's Cambodia, and Stalin's Soviet Union, were sufficiently open about some of their crimes that it's hard to say that their supporters had been significantly deceived or that more exposure of their plans would have prevented the oppression.
All these examples relied at least in part on persuading a large and powerful part of society to openly target a weaker (and usually smaller) set of people. I therefore assume that helping minorities escape the effects of hostile majorities (through encryption) is more important than being able to uncover secret plots.
While increased openness among homosexuals has caused increased tolerance, that is not because openness necessarily causes tolerance (its main effect was to make people who were previously partially tolerant choose between full tolerance and open intolerance), but because increasing tolerance of sexual eccentricities in general had recently made it safer to test openness.
I agree with Brin's claim that, at least in most English-speaking parts of the world today, tolerance of eccentricities has become the default attitude, but there is still plenty of reason to worry about unjust intolerance.
Suppose there were a few children alive today who had been created by cloning. In how many parts of the U.S. would it be safe for them to admit that?
Suppose AI researchers create a program which they think has enough human-like qualities to deserve human rights, but the average worker fears its competition enough to use the program's nonhuman features as grounds for boycotting any business employing it. Wouldn't that program be safer if it could do business anonymously?
Suppose I am cryonically suspended, and then someone finds a way to convert the information in my frozen brain into software. The first time this happens, it is likely that the result will be as controversial as the first AI, especially if it is clear that the simulation is less than 100% identical to the original person. How can I be confident that people won't be intolerant of such software?
It is these kinds of concerns that lead me to worry more about the need to keep some secrets than about the need to expose secret plots.
Brin's belief in the value of reciprocity in openness has no obvious justification, and sounds absurd under many conditions. It only works when everyone involved has eccentricities of roughly the same importance, and that's rarely true in the cases that privacy advocates consider most important, such as the examples above.
I suspect that there will be no way around the conclusion that anything we say or do in a place accessible to the public should not be considered private.
"If you can't trust a nerd, who can you trust?"You can trust hundreds of nerds independently examining the public source code for flaws.
Brin seems to think that the commonness of software bugs can be taken as evidence that no software feature can ever be reliable. But most bugs result from programmers implementing systems more complex than can be debugged with the effort they are willing to put into them.
Simple and important algorithms such as a square root or a cosine function have apparently been written well enough that no bugs are ever found in them. Encryption is a simple enough goal that if enough smart, motivated programmers check a common implementation, they ought to have a high probability of finding all the flaws that will ever be found.
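As a rough illustration (my sketch, not anything from Brin's book): an integer square root has a one-line specification, so an implementation can be checked mechanically against that specification over millions of inputs, which is hopeless for a large system of interacting parts.

```python
def isqrt(n: int) -> int:
    """Largest integer r with r*r <= n, computed by Newton's method."""
    if n < 0:
        raise ValueError("negative input")
    if n == 0:
        return 0
    x = n
    y = (x + 1) // 2
    while y < x:  # the iterates decrease until they settle on the answer
        x = y
        y = (x + n // x) // 2
    return x

# The specification is simple enough that the test itself is obviously
# correct, and it can be run exhaustively over a large range.
for n in range(1_000_000):
    r = isqrt(n)
    assert r * r <= n < (r + 1) * (r + 1)
```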
It is true that, given sufficient time, it will be possible to crack most encryption. But the computing power needed to crack RSA-style encryption grows explosively with the key size (sub-exponentially under the best known factoring algorithms, but still far faster than the roughly cubic cost of encrypting), so it is easy enough to keep messages secure for arbitrarily long time periods (unless maybe quantum computing changes all the rules; I don't understand that well enough to determine its effects). Brin's concern over petaflop computers suggests he is just guessing about the risks.
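To put rough numbers on that asymmetry (a back-of-the-envelope sketch of my own, assuming the general number field sieve, the best published factoring algorithm, and schoolbook cubic-cost modular exponentiation):

```python
import math

def gnfs_ops(bits):
    """Heuristic cost of factoring a `bits`-bit RSA modulus with the general
    number field sieve: exp((64/9)**(1/3) * (ln n)**(1/3) * (ln ln n)**(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def encrypt_ops(bits):
    """Schoolbook modular exponentiation costs on the order of bits**3."""
    return bits ** 3

for bits in (512, 1024, 2048, 4096):
    print(f"{bits:4d}-bit key: encrypt ~{encrypt_ops(bits):.0e} ops, "
          f"factor ~{gnfs_ops(bits):.0e} ops")
```

Doubling the key size costs the sender about a factor of eight, while the attacker's workload jumps by many orders of magnitude, so no fixed hardware milestone (petaflop or otherwise) changes the balance.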
Short of quantum computing, the only real obstacles to reliable encryption are things like spying on people as they are doing the encryption or guessing poorly chosen passwords. Most failures of attempted privacy through encryption are due to this kind of negligence rather than flaws in the encryption itself.
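To make the weak-password failure concrete (again my illustration; the guessing rate is an assumption about a well-funded attacker), the attacker simply searches the password space instead of the key space:

```python
SECONDS_PER_YEAR = 3600 * 24 * 365
GUESSES_PER_SECOND = 1e9  # assumed attacker throughput

def years_to_exhaust(space):
    """Worst-case time to try every possibility at the assumed rate."""
    return space / GUESSES_PER_SECOND / SECONDS_PER_YEAR

password_space = 26 ** 8  # every 8-letter lowercase password
key_space = 2 ** 128      # every random 128-bit key

print(f"8 lowercase letters: {years_to_exhaust(password_space):.1e} years")
print(f"random 128-bit key:  {years_to_exhaust(key_space):.1e} years")
# minutes for the password; around 1e22 years for the key
```

The strength of the cipher is irrelevant once the key is derived from eight lowercase letters.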
But privacy advocates such as cypherpunks often collaborate on open software research. A world in which personal things (finances, sex life, etc.) are kept secret but in which most people demand that impersonal things like software are open to scrutiny appears at least as stable as any alternative Brin hints at.
While it's clearly unhealthy for a tyrant to permit himself as an individual to be subjected to more criticism than other government officials (which seems sufficient to explain much of the governmental desire for secrecy and diffuse responsibility), I doubt that a consistent pattern of criticizing government actions harms the government or the individuals running it; it probably even provides some security against unexpected revolts. The U.S. government has been subjected to more criticism than most governments, and I see no sign that many of the people running it would be more satisfied with their positions if they were running closed societies. For all their power, Hitler and Stalin were often at risk of sudden death if they overlooked an enemy, whereas heavily criticized leaders such as Nixon or Clinton have a better understanding of who their enemies are.
"Suppose there were a small but real cost or penalty to encryption - perhaps an encryption tax or tariff"I'd be really amazed if Brin could come up with a method of collecting such a tax that has a small cost. Dozens of different mail programs written in several different countries probably aren't going to be changed to handle the tax collection.
See here for more discussion of these topics.