Comments on David Brin's book The Transparent Society

by Peter McCluskey

Does accountability require knowing what people do when they are alone, or only when they are interacting with us? Brin makes a big deal about the need to spy on powerful people in order to hold them accountable.

If things like taking bribes or a conspiracy to assassinate a president are important factors in our lives, then Brin's answer may be right. But most of the corruption I'm aware of involves things like campaign contributions to reward politicians for, say, sugar import quotas. That kind of influence survives even when publicized (unless the society adopts a principled opposition to such influence, say by electing a Libertarian government) because the people who lose don't lose enough to alter their votes, while those who gain do. And it doesn't take anything more than a basic commitment to the rule of law rather than the rule of men to ensure that we can know (if we care enough) when politicians are harming us by means like this.


A good story usually depends on heroes and villains to make it entertaining, but this ensures that most successful novelists are biased against seeing the effects of broad social trends that result from decisions made by millions of people.

Brin demonstrates this bias more than most authors. He sees the biggest threats to freedom coming from plots by small groups of people who thwart the will of the masses. Yet he produces little evidence that this has been a common source of oppression in the past. There is no shortage of examples of oppression that was openly supported by significant fractions of the population (slavery, South African apartheid, witch burnings, many wars, etc.). The worst examples in recent memory, such as Nazi Germany, Pol Pot's Cambodia, and Stalin's Soviet Union, were sufficiently open about some of their crimes that it's hard to say that their supporters had been significantly deceived or that more exposure of their plans would have prevented the oppression.

All these examples relied at least in part on persuading a large and powerful part of society to openly target a weaker (and usually smaller) set of people. I therefore assume that helping minorities escape the effects of hostile majorities (through encryption) is more important than being able to uncover secret plots.


On page 109, Brin mentions a bunch of governments (1917 Russia, 1933 Germany, etc.) that were replaced by worse governments, and describes them as weak-blind, with vague insinuations that blinding them contributed to their downfall. I'm not aware of any correlation between conquest by tyrants and deliberate restrictions on information-collecting by governments (as opposed to governments which collapsed because they neglected signs of discontent or revolt), and Brin doesn't try to justify his claim that these governments were blinded more than similar ones which preserved freedom.


On pages 256 to 257 Brin says "Only through concealment, say the advocates of anonymity, can nonconformists find hope of safety from the mob. ... But that was there. That was then. Evidence from our own society refutes the image of relentless conformism, especially amid the waves of pro-eccentricity propaganda ... By coming boldly out of the closet, homosexuals are winning vastly greater acceptance than they ever had while cowering within."

While increased openness among homosexuals has caused increased tolerance, that is not because openness necessarily causes tolerance (its main effect was to make people who were previously partially tolerant choose between full tolerance and open intolerance), but because increasing tolerance of sexual eccentricities in general had recently made it safer to test openness.

I agree with Brin's claim that, at least in most English-speaking parts of the world today, tolerance of eccentricities has become the default attitude, but there is still plenty of reason to worry about unjust intolerance.

Suppose there were a few children alive today who had been created by cloning. In how many parts of the U.S. would it be safe for them to admit that?

Suppose AI researchers create a program which they think has enough human-like qualities to deserve human rights, but the average worker fears its competition enough to use the program's nonhuman features as grounds for boycotting any business employing it. Wouldn't that program be safer if it could do business anonymously?

Suppose I am cryonically suspended, and then someone finds a way to convert the information in my frozen brain into software. The first time this happens, it is likely that the result will be as controversial as the first AI, especially if it is clear that the simulation is less than 100% identical to the original person. How can I be confident that people won't be intolerant of such software?

It is these kinds of concerns that lead me to worry more about the need to keep some secrets than about the need to expose secret plots.


On page 257 Brin says "Even if that weren't true, the case for reciprocal transparency would still stand, because there is no greater weapon against the intolerant than exposing their peccadilloes."

Brin's belief in the value of reciprocity in openness has no obvious justification, and sounds absurd under many conditions. It only works when everyone involved has eccentricities of roughly the same importance, which is rarely true in the cases that privacy advocates consider most important: the cloned child, the human-like AI, or the uploaded mind described above has far more at stake in being exposed than the intolerant have at stake in their peccadilloes.

I suspect that the desire for reciprocity has hindered some people from adopting openness. Many people have a gut reaction that publicizing their source code would be a mistake because many people would use that source without giving anything in return, although the example of Linux shows many benefits to this kind of openness.


Brin points out on pages 14-15 that easier surveillance will deter voyeurs. It is true that surveillance can deter the kind of voyeurism that makes the voyeurs' targets feel singled out for attention. But the kind of voyeur who casually scans the entire restaurant or nude beach into his camera and later watches the results in private isn't deterred, since the only people who notice are engaged in similar voyeurism and have no cause to complain.

I suspect that there will be no way around the conclusion that anything we say or do in a place accessible to the public should not be considered private.


On page 283 Brin says:
"If you can't trust a nerd, who can you trust?"
You can trust hundreds of nerds independently examining the public source code for flaws.

Brin seems to think that the commonness of software bugs can be taken as evidence that no software feature can ever be reliable. But most bugs result from programmers implementing systems more complex than they can debug with the effort they are willing to put in.

Simple and important algorithms such as a square root or a cosine function have apparently been written well enough so that no bugs are ever found. Encryption is a simple enough goal that if enough smart, motivated programmers check a common implementation, they ought to have a high probability of finding all the flaws that will ever be found.
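
As an illustration (mine, not Brin's) of how small and checkable such an algorithm can be: an integer square root by Newton's method fits in a dozen lines, its specification is a single inequality, and it can be tested exhaustively over a large range.

    def isqrt_newton(n: int) -> int:
        """Integer square root by Newton's method. Small enough that every
        branch can be reasoned about by inspection."""
        if n < 0:
            raise ValueError("square root of negative number")
        if n < 2:
            return n
        x = n
        y = (x + 1) // 2
        while y < x:
            x = y
            y = (x + n // x) // 2
        return x

    # Exhaustive check -- feasible precisely because the spec is so simple.
    assert all(isqrt_newton(i) ** 2 <= i < (isqrt_newton(i) + 1) ** 2
               for i in range(10 ** 5))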

It is true that given sufficient time, it will be possible to crack most encryption. But because the computing power needed to crack RSA-style encryption grows much faster with the key size than the time to encrypt does, it is easy enough to keep messages secure for arbitrarily long time periods (unless maybe quantum computing changes all the rules; I don't understand that well enough to determine its effects). Brin's concern over petaflop computers suggests he is just guessing about the risks.
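
To make the asymmetry concrete, here is a back-of-the-envelope sketch of my own. The attack cost below uses the standard asymptotic estimate for the general number field sieve, the best known classical factoring algorithm (strictly speaking its cost grows sub-exponentially rather than exponentially, which doesn't change the conclusion); the numbers should be read as rough orders of magnitude.

    import math

    def gnfs_work(bits):
        """Rough operation count for factoring an n-bit RSA modulus with the
        general number field sieve: L_n[1/3, c] with c = (64/9)**(1/3) ~ 1.923.
        Asymptotic estimate only; real attacks differ by large constant factors."""
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def encrypt_work(bits):
        """Modular exponentiation with schoolbook multiplication costs roughly
        bits**3 operations; the point is only that this is polynomial."""
        return bits ** 3

    for bits in (512, 1024, 2048, 4096):
        print(f"{bits:4d}-bit key: attack ~2^{math.log2(gnfs_work(bits)):.0f} ops, "
              f"encrypt ~2^{math.log2(encrypt_work(bits)):.0f} ops")

Doubling the key size multiplies the legitimate user's work by a small constant but adds tens of bits to the attacker's workload, which is why honest users can stay ahead of petaflop machines indefinitely.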

Short of quantum computing, the only real obstacles to reliable encryption are things like spying on people as they are doing the encryption or guessing poorly chosen passwords. Most failures of attempted privacy through encryption are due to this kind of negligence rather than flaws in the encryption itself.
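
A hypothetical comparison (my numbers, including the assumed guess rate) shows why the password, not the cipher, is the weak point:

    import math

    GUESSES_PER_SEC = 1e10  # assumed offline attacker; substitute your own figure

    def seconds_to_exhaust(bits):
        """Time to try every possibility in a search space of the given size."""
        return 2 ** bits / GUESSES_PER_SEC

    weak_pw_bits = 8 * math.log2(26)  # 8 random lowercase letters: ~37.6 bits
    key_bits = 128                    # a modern symmetric session key

    print(f"8 lowercase letters: ~{seconds_to_exhaust(weak_pw_bits):.0f} seconds to exhaust")
    print(f"128-bit key: ~{seconds_to_exhaust(key_bits) / 3.15e7:.1e} years to exhaust")

Searching the cipher's keyspace is hopeless at these sizes; guessing the password, or reading it off the keyboard, is not.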


Brin's analysis on page 275 of the relative stability of open societies versus masked ones correctly implies that a world in which ideas about encryption are often kept secret is less stable than if they are public.

But privacy advocates such as cypherpunks often collaborate on open software research. A world in which personal things (finances, sex life, etc.) are kept secret but in which most people demand that impersonal things like software are open to scrutiny appears at least as stable as any alternative Brin hints at.


On page 119 Brin says "What is healthy for a nation? Accountability." "What is healthy for a king, high priest, or tyrant? The exact opposite! Criticism is inherently dangerous ..."

While it's clear that it is unhealthy for a tyrant to permit himself as an individual to be subjected to more criticism than other government officials (which seems sufficient to explain much of the governmental desire for secrecy and diffuse responsibility), I doubt that a consistent pattern of criticizing government actions harms the government or the individuals running it (and it probably provides some security against unexpected revolts). The U.S. government seems to have been subjected to more criticism than most governments, and I see no signs that many people running it would be more satisfied with their positions if they were running closed societies: for all their power, Hitler and Stalin were often at risk of sudden death if they overlooked an enemy, whereas heavily criticized leaders such as Nixon or Clinton have a better understanding of who their enemies are.


On page 250 Brin says
"Suppose there were a small but real cost or penalty to encryption - perhaps an encryption tax or tariff"
I'd be really amazed if Brin could come up with a method of collecting such a tax that has a small cost. Dozens of different mail programs written in several different countries probably aren't going to be changed to handle the tax collection.

See here for more discussion of these topics.