Nuance is supposed to be about subtle distinctions between ideas. It describes complex things rather than simple ones. And it is almost always wrong.

Practitioners of nuance presume that more subtle and complicated answers are superior to simple and concrete answers. In reality, nuance relies on logical fallacies that lead to wrong answers.

In physics and mathematics, simple explanations are best. Logically concise proofs convey information that is absolutely true. The known axioms of physics, which can explain much of what goes on in the universe, can be written on a single index card.

Nuanced explanations require billions of index cards by definition. They list many complicated conditions and subtle distinctions to describe events. Why do people give them more credit than simple logical proofs?

Professor Alex Bavelas ran an experiment to test preferences for complicated vs. simple explanations. (via Mahalanobis, another great blog)

In one experiment, two subjects, A and B, are seated facing a projection screen. There is a partition between them so that they cannot see each other, and they are requested not to communicate. They are then shown medical slides of healthy and sick cells and told that they must learn to recognize which is which by trial and error. In front of each of them are two buttons marked “Healthy” and “Sick,” respectively, and two signal lights marked “Right” and “Wrong.” Every time a slide is projected they have to press one of the buttons, whereupon one of the two signal lights flashes on.

“A” gets true feedback; that is, the lights tell him whether his guess was indeed right or wrong. His situation is one of simple discrimination, and in the course of the experiment, most “A” subjects learn to distinguish healthy from sick cells with a fair degree of correctness (i.e., about 80 percent of the time).

“B’s” situation is different. His feedback is based not on his own guesses, but on A’s. Therefore it does not matter what he decides about a particular slide; he is told “right” if “A” guessed right, “wrong” if “A” guessed wrong. B does not know this; he has been led to believe there is an order, that he has to discover this order, and that he can do so by making guesses and finding out if he is right or wrong.

In other words, there is no way in which he can discover that the answers he gets are noncontingent — that is, have NOTHING to do with his questions — and that therefore he is not learning anything about his guesses. So he is searching for an ORDER where there is none that HE could discover.

This leads to an obvious result. The A subjects learned to distinguish between healthy and sick cells through simple rules that led to accurate guesses. The B subjects literally learned nothing about the difference, but rationalized that they had learned something. Because B failed to guess correctly so often, he searched for ever more subtle “clues” to distinguish the cells. His explanation is, in a word, nuanced.

Afterwards the subjects are asked to explain the difference between healthy and sick cells to each other and to judge whose view is more accurate. This is where the experiment produced an interesting result.

“A”‘s explanations are simple and concrete; “B”‘s are of necessity subtle and complex — after all, he had to form his hypothesis on the basis of very tenuous and contradictory hunches.

The amazing thing is that A does not simply shrug off B’s explanations as unnecessarily complicated or even absurd, but is impressed by their sophisticated “brilliance.” “A” tends to feel inferior and vulnerable because of the pedestrian simplicity of his assumption, and the more complicated “B”‘s “delusions”, the more likely they are to convince A.

This is amazing and depressing at once. The B subjects’ explanations are long-winded, delusional, and wrong, yet they convince the A subjects through sheer verbiage.

The nuanced are ignorant but disguise their lack of knowledge with a show of words to intimidate others. They become so nuanced that they engage in self-deception too. They are in awe of their ability to see and describe such complicated things.

Nuance vindicates their perception of their enlightened intelligence and it distinguishes them from the simpletons.

Nuance is wrong.

And yet we continue to value false nuanced arguments over concrete and simple proofs.

A related logical fallacy is the conjunction fallacy. This is the belief that a more complicated statement with more specific conditions is more probable than a more general statement.
For instance:

Which is more likely?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

85% of those asked chose option 2.

Which is horrifying, no? If you think in terms of set theory, this should be obvious. There are the sets {Not Bank Tellers}, {Activist Feminists}, and {Bank Tellers}; you can draw a Venn diagram to visualize them. {Bank Tellers} has two subsets: {Activist Feminist Bank Tellers} and {Not Activist Feminist Bank Tellers}. The subset {Activist Feminist Bank Tellers} is contained entirely within the set of all bank tellers, so it is almost certainly smaller than the whole set and can be at most equal to it (if every bank teller is an activist feminist).

Yet 85% of people will say it is more likely that Linda is an activist feminist bank teller than a bank teller. That requires P(A and B) > P(A), an impossible conclusion, since P(A and B) ≤ P(A) always holds.
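The conjunction rule is easy to verify by counting. Here is a minimal sketch in Python; the population size and probabilities are made-up illustrations, not data about anyone named Linda:

```python
import random

random.seed(0)

# Simulate a hypothetical population: each person either is or isn't a
# bank teller, and either is or isn't an activist feminist. The 5% and
# 30% rates are arbitrary assumptions for illustration only.
N = 100_000
population = [
    (random.random() < 0.05, random.random() < 0.30)  # (teller, feminist)
    for _ in range(N)
]

tellers = sum(1 for teller, feminist in population if teller)
feminist_tellers = sum(1 for teller, feminist in population if teller and feminist)

# The conjunction is a subset of its parent set, so its count (and hence
# its probability) can never exceed the parent's.
assert feminist_tellers <= tellers

print(f"P(teller) ≈ {tellers / N:.4f}")
print(f"P(teller and feminist) ≈ {feminist_tellers / N:.4f}")
```

No matter what rates you plug in, the second printed probability can never exceed the first, which is exactly why option 2 cannot be the more likely answer.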

Adding many complicated and specific conditions to a statement makes it less probable but we are hard-wired to believe it is more likely to be true. Odd, no?