January 29, 2017: What’s Your Evidence?
Of the books I read in 2016, a favorite was Rachel Cusk’s Outline. So I was excited to hear that her newest novel, Transit, was a sequel to that book. Both follow the narrator, Faye, through a series of more or less ordinary days taking place around major life events, in particular her divorce and move to London; both consist more or less entirely of conversations, which Faye often seems to evoke more than participate in. The conversations are connected by the events of Faye’s life, and by the concerns and preoccupations that she (very subtly) reads into them. It’s never clear how much these themes emerge organically, and how much they result from being filtered through Faye. The books have in them a constant tension between the weight of the ideas they take up and the lightness of Cusk’s writing.
One of the major themes of Transit, at least as I read it, is the tendency we have to impose a narrative on our memory, turning a series of events into a series of plot points— into a story, which necessarily alters, maybe even distorts, memory. The characters debate whether the things that have happened to them are the result of “fate” or destiny, or, alternatively, a direct result of their own choices; either view, the book suggests, creates an artificial, deceptive sense of order, and fails to account for how little is actually within our control. I’m less interested in the question of fate than in the attraction to narrative, the reflex by which we assemble events into recognizable stories, ignoring extraneous details and focusing on whatever will make the route between two points make sense; ultimately this is a way of explaining ourselves to ourselves.
I thought of this as I read “How Statistics Lost Their Power— And Why We Should Fear What Comes Next”, by William Davies. Statistics, Davies suggests, are more and more seen as an “elite” form of evidence because, by definition, they deal in averages and aggregates, which are often far from personal experience. Think of the jokes about how the average American family has 2.6 children (or whatever— I didn’t actually bother to look it up); that number, obviously, doesn’t describe any family, though it does describe all families. It’s a way of talking about the whole that doesn’t seem true to any of the parts. People naturally, then, have a preference for anecdote and narrative, because they are specific, particular, rooted in someone’s actual life. Even if the story is very different from one’s own, it is still possible to see how it could be true for someone else. This preference seems validated as economic statistics like GDP and the national unemployment rate become increasingly detached from the personal experiences of many people; one thing that increasing inequality means is that GDP can grow and grow without the lives of people at the bottom improving at all. So, as the numbers seem increasingly far from real life, it becomes more and more plausible that they are simply untrue in some way that goes beyond the fact that they necessarily emphasize some things at the expense of others. Statistics are what “they” want you to believe, in the face of common sense and what you see every day.
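To make that point concrete with a toy example (the numbers here are invented, not from Davies or the census):

```python
# Invented family sizes (children per family), purely for illustration.
family_sizes = [1, 2, 2, 3, 3, 4, 5, 6]

# The average describes the group as a whole...
mean = sum(family_sizes) / len(family_sizes)
print(f"average children per family: {mean:.2f}")   # 3.25

# ...but no individual family actually has that many children.
print(mean in family_sizes)   # False
```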
As Davies’s title suggests, this is a problem. Statistics may not represent any particular person’s experience very well, but anecdotes are, almost by definition, not representative; because they are stories about actual people and experiences, though, they sound more real to us. Yet statistics, with all of the flaws of any particular method or measure, are the only way we have of understanding some phenomena. GDP may be a really problematic way of understanding the national economy— and it really, really is— but you certainly aren’t going to get any kind of picture of the whole economy by listening to a few individual stories. Approaching the economy that way is how you come to think that keeping one factory open is going to make a significant difference.
Of course, statistics, and scientific data more broadly, can also be used selectively with the deliberate intention to deceive. In “The Atomic Origins of Climate Science”, Jill Lepore writes about the use of science to make arguments for or against particular policies, which are or become associated with a specific ideology. Her starting point is the history of the theory of nuclear winter, which arose out of (mostly) military research in the 1960s and 70s. Carl Sagan, who had been involved in some of that research early on, became the public face of the theory of nuclear winter, arguing in particular that Reagan’s Strategic Defense Initiative (a.k.a. “Star Wars,” or SDI), because it meant abandoning the doctrine of mutual assured destruction, made nuclear disaster more likely. He used the theory of nuclear winter as an example of the costs of a nuclear exchange, and he sometimes implied there was more certainty about the theory than there actually was. (Obviously, it had never been tested.) That opened it up to attacks from proponents of SDI, who argued that the data didn’t say what Sagan claimed they did; Lepore suggests that the institutional and intellectual apparatus they built to make this case, as well as some of their tactics, are now being used to discredit climate science in similar ways. One tactic that has gotten a lot of attention recently is the invocation of the Fairness Doctrine and the principle of “equal time” in the media, which, it has been argued, creates a false picture by implying that the two sides in the debate over climate change have roughly equal numbers of supporters.
Part of the problem here is obviously the distortion of scientific data and methods for ideological ends, but part of it is also that people in general don’t understand, or can’t deal with, the uncertainty inherent in science. We will dismiss theories if they are presented as “just a guess,” without distinguishing between a hypothesis and baseless speculation, or thinking about the ways in which the theory has been tested or the evidence behind it. A hypothesis exposed as uncertain— which all hypotheses are, in fact— is treated as though it has been discredited. And this is especially true if the theory doesn’t match our own anecdotal experience of the world— which is how you get “Why is it so cold if we’ve got global warming?”
On a somewhat different note, earlier this week at work I facilitated a screening and discussion of the film “By Blood”, about the Cherokee Freedmen controversy. Very briefly, the Freedmen are descendants of Black slaves owned by the Cherokee prior to the Civil War; in a treaty with the federal government after the war, the Cherokee agreed to give the Freedmen “all the rights of membership” in the tribe, but more recently the tribe has changed its laws, and then its constitution, to exclude them from membership. The case is now in the federal courts, where a decision is long overdue.
It was a strange coincidence, then, to see in the New York Times Magazine a couple of weeks ago “Who Decides Who Counts as a Native American?” by Brooke Jarvis. Beginning with the controversy over the expulsion of some members of the Nooksack tribe, who live around the border between Washington and British Columbia, Jarvis also gets into issues of blood quantum and what the right standard of evidence for tribal membership, or Indian status in general, should be. Though the facts of this case are very different— the Nooksack never owned slaves, and there is no question of racial difference at issue here— there is obviously significant overlap as well. Neither case is only about whether a particular group of people should be enrolled; both are also about what kind of evidence should count to establish Indianness, and who should decide on the standard itself. The Cherokee now insist that all tribal members must be Cherokee “by blood,” but this is to be established by having an ancestor on the Dawes Roll— a census conducted by the federal government in 1903, which created separate lists for “Indians by blood” and Freedmen, and which nobody involved really thinks is a reliable indicator of ancestry. What this brings to the fore is the constant tension between “Native American” as a political identity and as an ethnic or racial one; in effect, everybody, including both the tribes and the BIA, wants to have it both ways, treating it sometimes as one and sometimes as the other. And the standard of evidence changes depending on which of those definitions is adopted.
In all of these cases, it is not strictly the quality of the evidence that is at issue. The debate, instead, is largely over what counts as evidence in the first place. If you think that statistics are a tool used by elites to distort reality or mislead the public, then you don’t care about sampling errors or the robustness of correlations. If you think that an anecdote is better evidence because it is more grounded in “reality,” then abstract numbers are never going to convince you. If you think that science claims to produce certainty about the world, then it will be easy for you to “debunk” it with examples of theories that fail to pan out. More generally, it’s impossible to have an adequate debate about anything if we can’t first agree on what counts as evidence, or what it would mean to be right.
And, to finish up, a couple of odds and ends from the last couple of weeks:
“Cipher War” by Mallory Locklear describes the effort, now lasting more than a century, to decipher the Indus Valley script, found mostly in thousands of very short fragments on clay tablets. Researchers are now using machine learning techniques to detect patterns in the script, which won’t really decipher it but may reveal something useful. There is debate, though, about whether the script actually represents language at all, or instead something else altogether: a method of accounting, say, or a list of the names of gods. So the question is whether the patterns found are the right kinds of patterns for a linguistic system or not.
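The premise behind the pattern-hunting is that linguistic sign sequences have a characteristic statistical signature. As a rough sketch of the general flavor of that kind of analysis (not the researchers’ actual pipeline, and using made-up sequences rather than real Indus data), one can compare the conditional entropy of symbol bigrams, since language tends to sit between rigid repetition and pure randomness:

```python
import math
from collections import Counter

def bigram_conditional_entropy(seq):
    """Average uncertainty (in bits) about the next symbol, given the
    current one. Natural-language texts tend to land in a middle range:
    neither rigidly ordered nor random."""
    pairs = Counter(zip(seq, seq[1:]))   # counts of adjacent symbol pairs
    firsts = Counter(seq[:-1])           # counts of each leading symbol
    total = sum(pairs.values())
    h = 0.0
    for (a, b), n in pairs.items():
        p_pair = n / total        # P(a, b)
        p_cond = n / firsts[a]    # P(b | a)
        h -= p_pair * math.log2(p_cond)
    return h

# Hypothetical toy sequences, not real Indus sign data:
print(bigram_conditional_entropy("ABABABABABAB"))            # ~0.0: perfectly predictable
print(bigram_conditional_entropy("the cat sat on the mat"))  # mid-range: language-like
```

A rigidly ordered sequence (like an accounting convention might produce) scores near zero, while a language-like one lands in between; the open question in the article is whether mid-range statistics alone can establish that the script encodes language.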
Last but not least: the new album from electronic duo Emptyset is out, and it is…intense.