March 18, 2024

Once, during the summer that I worked in the campus computer store, a chatty customer began talking to me about science fiction books. One he mentioned specifically was The Mote in God’s Eye, by Larry Niven and Jerry Pournelle. This stuck in my mind only because he was impressed that I knew what the word “mote” meant. Earlier in the conversation he had managed to work in the fact that he was a member of Mensa; he was so impressed by my vocabulary that he suggested I should see about joining myself.

At the time, this mainly seemed to me like a pretty bad reason to suppose that I was in the 98th percentile for IQ. As I thought more about it, though, it made me wonder more generally about the whole notion of an objective, quantitative measure of some apparently innate, unified quality called “intelligence.” Early IQ tests, used during WWI to evaluate potential army recruits, asked people things like “what is an armadillo”— factual knowledge that, in retrospect, seems too geographically and culturally specific to work as a measure of some universal characteristic. Now we test things like spatial reasoning, which is certainly important for some pursuits, like higher math, but the ability to rotate a cube in your head isn’t quite what most of us mean when we say that somebody is “smart.”

Despite their shortcomings, IQ tests continue to be used because they serve a function that has become increasingly culturally necessary: IQ is how we decide what kinds of school programs kids should be placed in, what kinds of careers people are suited for, even whether criminals can be held responsible for their actions. IQ tests categorize and sort the population, and we have a broad sense that certain positions and powers should go to those with the greatest intelligence. This is part of why, Stephen Cave argues in “Intelligence: a History”, the idea of intelligent machines makes us nervous. If machines become more intelligent than we are, then not only might they decide to take charge; by our own standards, they should. I think Cave moves a little too fluidly from a concept like “reason,” as used by Plato, to “intelligence” as we use that word today, but the more general point— that presumed intellectual superiority, however spurious the criteria used as evidence for it, has long been used as a basis for some to dominate others— is a good one.

It’s not, though, the only reason that AI makes us nervous. In “Why Nothing Works Anymore”, Ian Bogost points out that many of the problems we have with technology today, including such apparently trivial things as automatic toilets flushing when we don’t want them to, stem from the fact that it is no longer designed to work for us. Automatic toilets are designed to save water, not to respond to typical human bathroom patterns, and the former goal can conflict with the latter. Technology in effect serves its own purposes, which are not necessarily ours; it’s easy to see why that becomes disturbing if the technology also gets smarter than we are.

On the one hand, “Mapping the Brain to Build Better Machines” by Emily Singer illustrates just how far we really are from making that happen; on the other, it underlines the problem Bogost is describing. Singer’s article, which is almost a year old now, describes efforts to better understand the human brain by modeling portions of cortical tissue. One team is building a complete model of one million cubic microns— “one five-hundredth the volume of a poppy seed”— which, though too small for humans to see, is “orders of magnitude larger than the most-extensive complete wiring map to date, which was published last June [that is, June 2015] and took roughly six years to complete.” The density of neural networks in the cortex is so high that even such a small chunk is only now becoming something that can be recreated with reasonable accuracy.

One of the hopes of these projects is that they will result in AI systems that are better at visual identification— a task that humans find simple but computers struggle with. Human beings are much better at making generalizations from a small amount of data; show a child a couple of pictures of elephants and she will be able to identify an elephant in almost any position or setting. A computer, in contrast, will need thousands of pieces of information to even begin to figure out which features are important for identification and classification. While these data are increasingly available, and computers are able to sort through them at increasing speed, that still suggests a fundamental gap between human capacities and those of artificial intelligence systems.

And that gap is not necessarily just a matter of degree. If making these kinds of generalizations, extrapolating from instances to patterns, is a characteristic quality of human intelligence, then a system that can’t do that in the same way might be fundamentally different, even if it were to become in some way more capable. I’ve written here before about the problem AI researchers have in understanding why these systems reach the conclusions they do; even when they arrive at the right answer, they don’t seem to do it in the way a human being would. Is a system that arrives at the same conclusions as a human, but follows a completely different path to them, really anything like human intelligence? And if it’s not, is that a problem?

At the same time, though, this capacity for generalizing isn’t always everything it’s cracked up to be. We don’t fill in the gaps in our information in a neutral way; what we have to fill them in with is a matter of where and when we are living. In “Neanderthals Were People Too”, Jon Mooallem explains that the popular image of Neanderthals was built from very little evidence—a skull here, a couple of other bones there. That meant that in imagining what Neanderthals were like, scientists had to do a lot of extrapolating from what little was known, and those extrapolations reflect the preconceptions and preoccupations of the era in which they were produced. The imagination of what connects the few known facts—in this case literally the muscles and tendons that hold the bones together— happens in context, and that context puts limits on the imagination. So, Neanderthals had to be “primitive” in intellect, since their skulls looked, to early researchers, brutish and unrefined. A whole set of behavioral characteristics went with that, like a tendency to violence. Newer evidence, and simply a greater quantity of evidence, suggests that Neanderthals were probably much more like Homo sapiens than was previously thought, but the older image is so ingrained that “Neanderthal” is still used as a synonym for the brutish and barbaric.

The same pattern held for dinosaurs: the material evidence of bones and petrified impressions left a lot of room for speculation— in fact, demanded it— and the shape of that speculation could not be divorced from the predominant ideas of the time. In particular, the idea of inevitable progress entailed the assumption that if dinosaurs weren’t alive anymore, there must have been something wrong with them that led to their disappearance. Thus they were slow, dull-witted, and simple; the largest sauropods were supposedly so ungainly that their legs couldn’t even support their weight out of water. Again, newer evidence indicates that most of this is almost certainly wrong.

Similarly, “This 3,500-Year-Old Greek Tomb Upended What We Thought We Knew About the Roots of Western Civilization” by Jo Marchant describes the discovery, near Nestor’s palace at Pylos, of an intact grave from about 1400 BC. The style of the burial and the artifacts found in it suggest that, rather than the Mycenaean kingdom simply replacing the Minoans through conquest or Minoan collapse, the two cultures existed together in a network of mutual influence. The collapse-and-replace hypothesis was, at least in part, a result of never seeing the two cultures in the same place at the same time— an extrapolation from the available evidence, again with the background assumption that more sophisticated cultures replace less sophisticated ones.

There is also a kind of feedback loop between the context and the ideas that fill in the gaps in the evidence. Ideas about “primitive” cultures in Australia or Africa informed the conception of Neanderthals, and that conception in turn reinforced the idea of the primitive: in particular, the notion that cultures and peoples could be “left behind” by an inevitable force called “progress,” which moved too fast for some to keep up with. Being left behind had to be possible because it had happened to Neanderthals; and it must have happened to Neanderthals, because they seemed to have things in common with the peoples thought to be vulnerable to the same forces.

I don’t want to be taken to imply that, because our efforts at filling in the gaps are always rooted in a particular way of thinking about the world, all conceptions are equally flawed and there is no true knowledge. Current thinking about Neanderthals may turn out to be wrong, but it is based on far more evidence, from more places and more researchers, and it’s reasonable to suppose that it is closer to reality. Moreover, while I don’t think scientists can ever fill in the gaps without any biases or blind spots or bad assumptions whatsoever, knowing how wrong the older conceptions were may make them more aware of such problems, and prompt them to think harder about how to avoid them. It won’t ever be perfect, but it can be better.

And, totally unrelated to any of that: “Is the Chicken Industry Rigged?”, by Christopher Leonard, discusses the company Agri Stats, which provides subscribing poultry producers with highly detailed reports on the practices of other companies in the industry, including how birds are processed, what they are fed, and what prices they sell for. The company is now part of an antitrust suit claiming that chicken producers colluded to inflate prices by using Agri Stats’s data to coordinate their choices without communicating directly.
