A whole bunch of stuff this time, since I missed posting for June. It turns out that I also wrote a whole post for January but never put it up, so I am just going to stick that on the end here.
First, a couple of sets of photos that, to my eyes, go together: Tom Blackford’s Nihon Noir, of some of Tokyo’s more futuristic-looking architecture, and this Twitter thread of infrastructure that looks like it comes from science fiction. (And yes, I am still calling it Twitter. I refuse to validate Musk’s nonsense. I do hope this thread, and others like it, will be archived somewhere else by the time he finally finishes burning the company to the ground).
Next, I have three very different articles about libraries:
“Have You Been to the Library Lately?”, by Nicholas Hune-Brown, is about libraries in Canada, but with a few small exceptions you could say all of the same things about U.S. libraries as well. Hune-Brown describes the way that libraries have shifted from being places to check out books to offering a wide range of educational and social services, responding to the growing needs of the communities they serve. (Although, frankly, if you still think of the library as a place with nothing but books and staff who are constantly shushing patrons, then I’d ask whether you think going to the movies involves looking into a little wooden box and turning a crank).
When people tell the story of this transformation, from book repository to social services hub, it’s usually as an uncomplicated triumph…That story, while heartwarming, obscures the reality of what has happened. No institution “magically” takes on the role of the entire welfare state, especially none as underfunded as the public library. If the library has managed to expand its protective umbrella, it has done so after a series of difficult decisions. And that expansion has come with costs.
From violent patrons and harassment to mental health crises— including a growing number in children— librarians are dealing with a range of problems that they are not trained, equipped, or, frankly, paid to deal with. But they also don’t really have a way to say no: they are public institutions, in every sense of the word, and don’t generally have the authority to turn people away.
One worker from Winnipeg became emotional when talking to me about her job. (Staff aren’t allowed to speak to the press, and the city denied an interview request.) She explained that threats and verbal abuse were common, and dealing with erratic behaviour was par for the course. As a veteran of more than ten years, she wasn’t particularly sensitive. But it was clear to her that, in recent years, the library was being asked to do far more than it could sustain. “It just becomes this really small space where all the issues that are in society are just magnified,” she said. Staff, she told me, were regularly being retrained in de-escalation techniques, seemingly with the idea that perhaps new training or a new attitude could mitigate the need for more funding or more employees or a functioning supportive housing system. “I spend a lot of time thinking, ‘Is this really what my job is now?’ And what is the library? I don’t even think I know anymore,” she said. “I don’t remember the last time I actually did my real job.”
The issue, of course, is that libraries have been pushed into taking on all of these duties because governments and other public institutions have increasingly withdrawn from them. It is certainly reasonable to talk about increasing libraries’ funding and staffing, but channeling all these public services through a single institution that was never intended to provide them is neither sustainable nor reasonable. At some point, a library that has to try to do everything will end up doing nothing very well— and probably cease to be a library, in any meaningful sense.
“Ingenious Librarians”, by Monica Westin, is a good example, if any were needed, of why the loss of libraries would be a bad thing. It tells the story of the first “online” electronic search, at Syracuse University in 1970. (“Online” here meaning using a remote terminal connected to a mainframe computer, not a search using the internet, which only sort of existed at the time). It was part of a study to figure out better ways for researchers to retrieve information, undertaken at a time when the volume of academic publications was growing too fast for librarians to keep up with it. However, it was intended to supplement reference librarians, not to replace them, and was meant mainly for users who were not near a reference desk and couldn’t ask anyone for help. The system, which was designed by librarians, anticipated many aspects of contemporary internet search engines, like using the full text of the document rather than assigned keywords or headings (which were too restrictive), and keeping track of other users’ searches to create suggestions to help people find what they were looking for. But the project was also guided by what seems, in retrospect, like a more realistic, less utopian vision of networked information exchange than that of the techno-futurists of post-WWII America. Comparing librarian Pauline Atherton, who headed up the study, to internet pioneer J.C.R. Licklider, Westin says:
Culture celebrates people like Licklider for being visionary in a positive vein. But, similarly, Atherton and the SUPARS research team should be celebrated for having seen and then designed for what the future would lose. Expanding our group of established internet visionaries to include people like Atherton, we see a more complex portrait of how different kinds of researchers envisioned the world to come. Where Licklider saw what we would gain from being able to communicate online with anyone in the world, Atherton’s group saw that we would lose expert intermediaries; they designed for this cost.
I was also interested in the study participants’ reaction to the system. The single most common word they used to describe the experience was “frustrating,” but at the same time “they also found the system intriguing and exciting (‘fun’, ‘thorough’, ‘I dig computers’), and 94 per cent said they would use SUPARS…again if it were available.” That seems like a good description of most developments in computers and digital technology: to start with, they are frustrating and often more trouble than doing things the old way, but there are enough people who are either sufficiently charmed by the novelty or convinced that it will eventually be better to keep pushing things forward.
“The Coolest Library on Earth”, by Elizabeth Landau, is about a collection of ice cores at the University of Copenhagen. The cores— which are long cylinders drilled out of the world’s coldest places— can reveal all kinds of things about how the earth’s climate has changed over time, going back thousands of years. This is, unfortunately, knowledge that is only becoming more necessary. This would probably be more accurately described as an “archive” than a “library,” but the general point here still holds: we need ways of storing and keeping track of information, of all kinds, and we also need people who specialize in doing that, and in making that information accessible to the people who need it. What that looks like in practice varies tremendously, depending on what kind of information we’re talking about, but the basic principle is consistent.
“Outsourcing Virtue”, by L.M. Sacasas, is about the desire to create systems— codes, rules, standards— that can eliminate bad or destructive behaviors without requiring virtue from the people they govern. This was, in a sense, one of the original goals of bureaucracy, which sought to eliminate bias and corruption from government by limiting the discretion given to the people in charge of providing services. The real decisions are made by the people at the top, and implemented in a (theoretically) neutral, consistent, even mechanical way by the bureaucrats further down. It’s also the impulse behind using various algorithmic systems to decide, for instance, the length of someone’s prison sentence, or whether they receive a home loan: we know that the people making those decisions have often been driven by racism and other forms of bias, but if you can remove human judgment from the process altogether you can avoid these problems without somehow purging people’s hearts of racism and hatred. You can also see this desire behind James Madison’s advocacy of checks and balances, in Federalist 51, when he asks,
But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.
In other words, if you could count on everyone to do the right thing all the time, then you wouldn’t need government at all; a good government will account for humans as they actually are, not as we might wish them to be, and will work even when they don’t do the right thing. The design of the system, itself, is what guarantees freedom and justice.
All of these attempts have, of course, failed to achieve their desired ends, for various reasons. But we keep trying: a similar kind of case is made for things like “smart contracts” using blockchain technology: if you make the contract self-executing and automatic, then nobody will be able to break their word. Individual honesty becomes irrelevant. Gavin Wood, the person who coined the term “Web3” and one of the developers of the Ethereum blockchain, argues that
trust in itself is actually just a bad thing all around. Trust implies that you’re placing some sort of authority in somebody else, or in some organization, and they will be able to use this authority in some arbitrary way.
Trust in this sense, he thinks, should be eliminated, or at least minimized, through technological systems that give us “greater reason to believe that our expectations will be met.” He sums this position up with the slogan “less trust, more truth,” which in effect means that the system tells you what to expect— that what you see is what you get.
The appeal of this idea is obvious, but, as Sacasas makes clear, it also takes us to places we probably don’t want to go. The corollary of not having to make moral choices is not having the chance to do so. A procedure or mechanism like a smart contract that makes it impossible not to fulfill your end of the bargain also robs that action of meaning; keeping your word has the same moral weight as obeying the laws of physics. A good society has to include some meaningful degree of human freedom, and allowing such freedom means, in turn, accepting that people will need to learn how to use it, and will sometimes use it wrong.
The alternative is to recognize that the contingency that appears as an obstacle and a threat to the system operating at scale may be the very condition of human flourishing. It is in the face of such contingency that a person is free to exercise a measure of agency, to make judgments and assume responsibility, to adapt creatively, to work meaningfully, to experience the consolation of knowing and being known and, yes, of cultivating skill and virtue.
And, speaking of taking people as they actually are: Patricia Marx’s “Is the Army’s New Tactical Bra Ready for Deployment?” underlines the difficulty of standardizing anything meant to be worn and used by human beings. It’s also a nice capsule history of military uniforms for American women, which were often concerned at least as much with aesthetics as with utility:
The government asked Elizabeth Arden to concoct a lipstick to match the red piping on women’s Marine Corps uniforms. Women marines were issued this Montezuma Red lipstick and matching nail polish in their official military kits. (It remained mandatory for thirty more years.) A Tangee cosmetics ad from the era reasoned, “No lipstick—ours or anyone else’s—will win the war. But it symbolizes one of the reasons we are fighting . . . the precious right of women to be feminine and lovely, under any circumstances.”
Numerous dissertations could be written— and some probably have been— about how this exemplifies how appearance is made to matter for women in ways it simply is not for men, as well as the adaptive capacity of capitalism. Which probably makes it sound like the article is dense and ponderous, but it is not at all— it’s totally entertaining, neatly capturing both the difficulty of this project and the absurdity you find at the intersection of military necessity and intimate biological reality.
I don’t know if people are sick of talking about AI yet, but it’s not going away any time soon. A couple of pieces I read recently look at it from new angles.
First, “AI is a Lot of Work”, by Josh Dzieza, is about the human labor— often tedious, repetitive, and performed by poorly-paid people in the global south— on which recent advances in machine learning and generative AI depend, but which is mostly hidden in the discussion about those technologies. The short explanation is that these systems are generally trained on very large data sets, and the elements of those data sets (e.g., images) need to be labeled to help the machines learn, and to help the people working with them understand what went wrong when they make mistakes.
Annotation remains a foundational part of making AI, but there is often a sense among engineers that it’s a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished. Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data. These failures, called “edge cases,” can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them. Already, this has given rise to a global industry staffed by people like Joe who use their uniquely human faculties to help the machines.
As Dzieza points out, the general feeling among AI researchers is that the need for such work is temporary, and will vanish when the systems are sufficiently mature. That might or might not be true, but even if it is, the fact remains that it is necessary now, and the people doing it are often not paid or treated very well, even as money is pouring into these technologies and the companies that produce them.
I think the idea the piece presents of a supply chain for data is also useful. In so many ways, we tend to think of digital technology as weightless, immaterial, despite the vast physical infrastructure required to produce, store, move, and analyze it. To put it in the relatively familiar terms we use to talk about other kinds of goods and products helps to emphasize not only this material footprint, but the human labor that activates and maintains it.
The piece also highlights the gap between the way humans think and the way machines process information, and the difficulties that gap creates. The unaccountable errors these systems sometimes make are one example, but trying to avoid such errors requires that humans, in a sense, ignore the way their own brains generalize and make distinctions.
The act of simplifying reality for a machine results in a great deal of complexity for the human. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use. A human asked to tag all the shirts in a photo probably wouldn’t tag the reflection of a shirt in a mirror because they would know it is a reflection and not real. But to the AI, which has no understanding of the world, it’s all just pixels and the two are perfectly identical. Fed a dataset with some shirts labeled and other (reflected) shirts unlabeled, the model won’t work. So the engineer goes back to the vendor with an update: DO label reflections of shirts…The job of the annotator often involves putting human understanding aside and following instructions very, very literally — to think, as one annotator said, like a robot.
The irony, of course, is that companies make human workers label things in such strange or counterintuitive ways in order to create systems that will behave more like humans. Or, as Dzieza later puts it, following a description of some of ChatGPT’s training:
Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.
And if it seems to you like the outcome of a process like that is predictable or comfortable, then you must know something I don’t.
Second, Maria Karpo’s “Imitation Games” looks at AI from another angle, explaining how generative systems like DALL-E are essentially making explicit something which ancient thinkers took as a matter of course, but modern ones rejected: the role of imitation in art.
Finally, a couple of very different pieces about crime, and the politics behind and around it:
“The Dark History of America’s First Female Terrorist Group”, by William Rosenau, feels a little dismissive of some of the ideas driving groups like the May 19th Movement, but at the same time it’s a good example of what you get when you start to think that the ends justify the means. I didn’t know much of anything about the specific people and groups he’s talking about, but the most surprising thing to me was that the U.S. Capitol was successfully bombed in 1983, but none of that has been mentioned (at least, not that I’ve heard) in all the discussion of January 6th. It’s also interesting that an all-women group formed in the late 1970s didn’t say much about feminism, but maybe Rosenau has just left out that part of the story.
“This is the Hometown of San Francisco’s Drug Dealers”, by Megan Cassidy and Gabrielle Lurie, has a somewhat sensational and over-simplifying title— for which we should probably not blame the authors— but actually gives a pretty in-depth and nuanced look at the links between one small area of Honduras and the drug markets of the Bay Area. It can be read as an indication of how some well-intended policies have negative effects, but I would say the most important takeaway is that drugs have to be addressed globally; no local or even national approach is going to be really effective. The reason this particular Honduran valley is the origin of so many dealers is really just a combination of chance and need: after the agricultural sector collapsed in the 1980s and a gold mine didn’t provide the anticipated benefits, the area had a lot of very poor people and no opportunities; once a few people went to San Francisco and came back with enough money to build new houses and live well, others followed in their footsteps. (As one source in the piece puts it, “People follow people.”) To get to the U.S., they will often end up being exploited in one way or another, and left with few options other than dealing. To think that there is really something specific to these particular places that has led to this pattern is, I think, to ignore the larger picture. Fundamentally, the problem is poverty and inequality— as is usually the case.
Part 2 of the story deals with the San Francisco drug markets themselves.
And this is the post from January:
“In Defense of the Art-Targeting Climate Activists,” by Peter Singer
Over the last few months, there have been a number of protests involving people gluing themselves to museum walls or the frames of paintings, or smearing paint, oil, cake, or mashed potatoes over paintings (all of which are behind glass, and are undamaged). The protests seek to draw attention to climate change, and the failure of world governments to take sufficient action to do anything about it. I certainly agree that our governments have so far failed us, and continue to fail us, in this area, but I’ve been pretty skeptical of the utility of these particular protests. So I was interested to see a defense of them from Peter Singer, who is one of the most important contemporary ethicists.
Part of Singer’s argument is that we are inconsistent, or even hypocritical, in our judgement of protest and political activism: that we applaud the protests of the past, even those deemed disruptive or “extreme” at the time, but castigate the people engaging in similarly dramatic forms of protest today. I’m certainly sympathetic to the implicit point that we don’t learn from history, that we repeat the same patterns over and over again. “Hindsight is 20/20” is both a cliché and profoundly untrue, but at least we can say that the passage of time shifts perspective. At the same time, is Singer right that we now think the British suffragettes, who actually destroyed works of art (and attempted to bomb the homes of politicians), were in the right? Or do we instead tend to honor the less “extreme” wing of the suffrage movement? (The answer to that question might actually be different for people in the U.K. and the U.S., I don’t know).
Rather than the passage of time allowing us to see historical events more clearly, I think part of what is going on here is selection bias. What characterizes the protests and demonstrations of the past that we now condone is, ultimately, that they were successful in some significant way. Martin Luther King and the movement he helped lead succeeded in getting the Civil Rights Act and Voting Rights Act passed; the suffrage movement succeeded in getting the right to vote for women. This makes it easy to look back and consider them justified, because what they did worked, and it’s impossible to say whether a less confrontational approach would have. On the other hand, if we look back at, say, the anarchist movement in the U.S. in the early 20th century, which generally failed to achieve any of its aims, it’s again easy to think that they went too far, and that this undermined their cause. (Of course, you could also say that this movement is just too different from the other examples for the comparison to make sense).
Singer concludes with what I think is his strongest argument: that protest, even protest seen as disruptive or destructive, is necessary for people who are excluded from the democratic process. He notes that
In seeking a conviction against the people who glued their hands to the frame of The Hay Wain, the prosecutor sought to distinguish the actions of the suffragettes from those of the activists on trial by saying that the former “had no democratic means by which they could further their cause,” whereas today “We have an established democracy.”
But, of course, Britain in the early 20th century might have claimed to be an established democracy, too, despite the exclusion of half the population. And in this case, the people most affected by the problem may be those with the least ability to influence policy— a clearly undemocratic situation. Singer says, “Ask yourself who will suffer the most if we fail to prevent catastrophic climate change. The answer is the young and those yet to be born – both categories unrepresented in our political systems.” This is also self-evident, and a powerful argument.
To me, the problem, ultimately, with these particular protests isn’t that they “go too far,” it’s that their message is muddled. I don’t have data to support this, but I suspect that few people in the general public think there is any clear link between the art chosen, or works of art in general, and the climate. As Singer says, the point is to highlight what will be lost if climate change is not mitigated; after gluing themselves to Vermeer’s “Girl with a Pearl Earring,” a protestor asked the watching crowd, “How do you feel when you see something beautiful and priceless being apparently destroyed before your eyes?…Do you feel outraged? Good. Where is that feeling when you see the planet being destroyed before your very eyes?” But is anybody getting that message? The group behind this action is Just Stop Oil, and they want to end the use of fossil fuels; how likely is it that people will make a connection between burning oil and the (apparent) destruction of a Vermeer?
I can appreciate Singer’s point that the art being targeted, no matter how great or important, is less significant than the fate of the planet, and all the life on it. This is self-evidently true, and so if one could say that the attack on the art (even if it were actually damaged) was actually, concretely making the planet’s destruction less likely, then that trade-off would be tragic but reasonable. But that’s a big “if.” I can also understand the general idea that protests are intended to draw attention and get people talking about issues, even if the action itself doesn’t seem closely connected to those issues. By that measure, these actions are certainly successful. But this can also backfire; if people see the protests as frivolous or misdirected, then it can undermine support for the whole cause. Then again, it’s all too easy for us to go on with our lives and ignore this problem, which is still mostly abstract (at least in the wealthier northern countries) and seems intractable. Only disruptive action is likely to change that and garner sustained attention. So, I guess I remain torn about these particular actions.
“TikTok’s Addictive Anti-Aesthetic Has Already Conquered Culture”, by Carolina A. Miranda
I’m not on TikTok. There are several reasons for this. One is privacy; I don’t know if TikTok is actually any worse than any of the other apps/companies to whom I’ve already handed over my personal information in exchange for their services (though there is some reason to think it might be), but really, I just feel done with making that particular exchange. But also, the app seems to embody everything I dislike about social media. The reliance on algorithms rather than connection; the stripping away of all contextual information from videos; the idea of a continuous feed that just keeps coming; the brevity of the videos; the intensive personalization. To me, it seems like ByteDance read a bunch of articles about all the things people find wrong with social media, and designed an app to maximize those qualities. I absolutely detest the changes that have been made to Instagram recently, and those are essentially aimed at making it more like TikTok, so, yeah, not for me. Miranda summarizes TikTok as “present[ing] short-form videos in a frantic endless scroll,” which pretty well sums up the reasons for my lack of interest. Yes, I know there’s good stuff there, and that people have found creative ways to use it. But finding funny or entertaining content on the internet is not, at this point, a difficult enough task to make another channel for such content sufficiently appealing to overcome the downsides.
At the same time, you can’t exactly avoid it. As Miranda points out, it has had 3 billion downloads, and by some measures it’s now bigger, or at least more influential, than Facebook or Instagram (or certainly Twitter). It has already transformed the music industry. And, again, competition from TikTok is reshaping other apps and services:
The TikTok effect has sent Big Tech back to the drawing board on long-established apps. In July, a Google exec revealed at a conference that, according to internal studies, 40% of young people turn to TikTok or Instagram when looking for a basic service like lunch — not a search engine like Google. Since then, Google has made user reviews much more prominent on its maps and now delivers many more images, graphic text boxes and social media feeds in its results.
I don’t deny that there are things about TikTok that are appealing, or at least interesting: Miranda describes the duet function as “an ouroboros of looking,” which sounds terrible, but then describes “the duet train, in which one user pairs her video with another who pairs it with another and another — like a digital exquisite corpse,” which sounds…pretty cool. And I know there are lots of people talking about science or history in interesting and creative ways. So I’m not suggesting that it’s valueless, or that people are stupid for using it. But I go back to the phrase “frantic endless scroll,” and think: nope.
“Getting Lost in the World’s Largest Stack of Menus,” by Adam Reiner
The New York Public Library has the world’s largest collection of menus, and the majority of them were gathered by a single person: a woman named Frank E. Buttolph. On her own initiative, she got the library to create a menu collection, and give her “a voluntary position as menu archivist” in 1900. By 1908, she’d collected 14,500 menus, and by the time she died, in 1925, there were over 25,000. The collection now has 40,000, so more than half were collected by Buttolph. They’re a weird little window into history, especially New York history.
Speaking of windows into history: “HBOMax’s Great Looney Tunes Purge,” by Sam Thielman, makes the case for the classic Warner Brothers cartoons as part of our collective cultural heritage, which should be preserved and made available to the public. Apparently HBO has been quietly removing a lot of material from their streaming service, and the Looney Tunes cartoons are part of that; they are, somewhat shockingly, difficult to find elsewhere. This seems strange to me in part because when I was growing up, Bugs Bunny and co. were always on TV somewhere; they were used (or so it seemed) to fill in the schedule in low-priority time slots, like Sunday afternoons. I didn’t often go looking for them, but I watched them all the time, and I came to think of them as something that would simply always be there, always be accessible. That they are not anymore is, if nothing else, an example of how the streaming era often fails to live up to its promise.
And finally: this cover of Outkast’s “Hey Ya” by Kamauu is kind of magical.