I presented at a conference at the beginning of March, and getting ready for that left me with little time to write here. So, I’m combining two months’ worth of internet.
“Are You Sure You Know What a Photograph Is?”, by Rasher Haq
This is a question I’ve been thinking about for some time—since we started hearing about “computational photography,” which seems to stretch the definition. Initially, digital photography was actually a very close analogue of the older, chemical processes; the only real difference was that the chemically treated film in the camera had been replaced by a digital sensor that recorded and stored light values. But the inherent limitations of smartphone cameras—small lenses and sensors, no physical shutter—prompted efforts to find ways of improving the images they produced by making use of phones’ increasing processing power. I’m not actually sure there’s a clear, consistent distinction marking what counts as computational photography and what doesn’t. For instance, I’ve seen articles that consider high-dynamic-range (HDR) photos an example. To take one of these, the phone actually takes two or three pictures at different exposures and combines them, resulting in a single image in which (ideally) everything is exposed properly. But nineteenth-century photographers including Eadweard Muybridge (who’s mentioned here for his motion studies) regularly combined negatives with different exposures to get an image in which the sky was not blown out, but objects in the foreground remained visible as well—a textbook example of the uses of HDR.
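That merging step is easy to sketch. The following is a toy version of exposure fusion, not any phone’s actual pipeline: the pixel values, the Gaussian “well-exposedness” weighting, and all the names here are my own illustrative assumptions.

```python
import math

# Toy sketch of exposure fusion, the idea behind HDR merging.
# Pixel values are floats in [0, 1]; real pipelines work on aligned
# 2-D images, but one row of pixels shows the principle.

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight a pixel by how close it is to mid-gray (Gaussian falloff)."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Blend aligned exposures pixel by pixel, favoring well-exposed values."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(v) for v in pixels]
        total = sum(weights) or 1.0
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

# A dark frame captures the bright sky; a bright frame captures the shadows.
dark = [0.05, 0.45, 0.95]    # underexposed frame
bright = [0.40, 0.90, 1.00]  # overexposed frame
print(fuse([dark, bright]))
```

In each position, the fused result is pulled toward whichever frame exposed that region well, which is exactly what Muybridge was doing by hand with his sky and foreground negatives.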
At the same time, we are increasingly able to look at pictures that have been produced in very different ways. Haq mentions the recently published image of a black hole, which was created by combining radio data and converting it into light values. Is that a photograph? It looks like one, but there’s something really different going on here. I’ve been doing research lately that touches on how people initially saw photographs as an especially accurate or truthful form of representation, compared to paintings or drawings; rather than being filtered through the taste and ability of an artist, a photograph was produced by a mechanical and chemical process, which a human initiated but did not really control. We long ago became more skeptical about the truth claims of photographs, but do still tend to see them, I think, as a kind of record of something that really did happen, that really was there— was seen. As more and more layers of intervention are placed between reality and its depiction, even this weaker set of assumptions comes into question.
“Their Bionic Eyes are Now Obsolete and Unsupported,” by Eliza Strickland and Mark Harris
We’ve all probably grumbled at some point about “planned obsolescence”—the way technology companies design products so that they will need to be upgraded every few years. Many of us have also had the experience of a product we like and use gradually ceasing to function because the maker has stopped supporting it or updating the software. Always extremely frustrating. Now, imagine that happening to your eyes. That is the situation for something like 350 people with retinal implants created by a company called Second Sight, which stopped supporting them in 2019 as it ran into major financial problems and laid off many of its employees. That’s a relatively small number of people, but as the piece points out,
Neural implants—devices that interact with the human nervous system, either on its periphery or in the brain—are part of a rapidly growing category of medicine that’s sometimes called electroceuticals…recent advances in neuroscience and digital technology have sparked a gold rush in brain tech, with the outsized investments epitomized by Elon Musk’s buzzy brain-implant company, Neuralink. Some companies talk of reversing depression, treating Alzheimer’s disease, restoring mobility, or even dangle the promise of superhuman cognition. Not all these companies will succeed, and Los Angeles–based Second Sight provides a cautionary tale for bold entrepreneurs interested in brain tech. What happens when cutting-edge implants fail, or simply fade away like yesterday’s flip phones and Betamax? Even worse, what if the companies behind them go bust?
Two things particularly struck me in this story. The first is that there was never any chance that these implants weren’t going to need upgrades. Apart from the issues of maintenance in any complex, novel technology, the vision they offered to patients was a very long way from “normal” sight.
Patients and doctors alike stress that the Argus II provides a kind of artificial vision, really a brand-new sense that people must learn how to use. Argus II users perceive shades of gray that appear and disappear as they move their heads.
While that was still a major improvement for many users, both they and the company presumably were hoping that the technology would improve over time, which means that either the hardware or the software or both would need to be replaced from time to time.
The second point is that the growth of the company and the development of its technology followed a course that is familiar from other high-tech companies and products: a small number of founders have an idea that produces promising results; they successfully seek investment to pursue the development of the technology, but lose control of the company to their investors, who are there to make a profit and will cut support for products that don’t seem viable in the market. The Argus implants were not only expensive in themselves, but required lots of fairly intense coaching for patients to learn how to use them—not to mention the complex bureaucracy that must be managed when selling any medical device. The technology worked, but the company wasn’t making money with it, so they stopped pursuing it. That’s fine with a product like Google Glass, which was never more than an intriguing novelty for its users; in this case, it makes the difference between seeing and not seeing. At least in the United States, this is really the only model we have for funding the development of a technology that is experimental, difficult, and expensive, with a high likelihood of failure—but it also seems like a deeply flawed one for a product for which the stakes are so high.
“How Mexico’s Lucrative Avocado Industry Found Itself Smack in the Middle of Gangland,” by Jeffrey Miller
To be honest, the explanation here is pretty unsurprising: demand for avocados in the United States increased dramatically, driving up prices, and well-organized cartels eventually wanted a cut. That led to the development of what is more or less a good old-fashioned protection racket. A threat against a U.S. agricultural inspector in February led to a brief ban on importing avocados from Mexico, and raised the specter of a shortage.
I think we tend to see some kind of moral correlation between illegal drugs and crime: drugs are bad, and so the people who sell them must be bad too. There are all kinds of reasons why that is a faulty assumption, but what I think this case shows is that the violence associated with drugs is the predictable result of the combination of a lot of money to be made and a lack of effective authority.
“Programming Language Design as Art,” by Daniel Temkin
Here’s an example of how thin the barriers increasingly are between the things we think of as “art” and those we call “science”—or, more generally, how porous the category of “creative” work is. “Esolangs,” or esoteric languages, are “programming languages designed as forms of self-expression.” Programs written in Piet (a language named for the modernist painter Piet Mondrian) are images, made up of discrete blocks called “codels”; relationships between codels constitute commands or operations (e.g., “A transition from light blue to dark red means ‘NOT,’ while moving from red to yellow of similar brightness will tell the machine to ‘ADD’”). A program can appear on its own, or it can be embedded into another image—hiding code in plain sight. Another language, Cree#, makes use of the cultural values and practices embedded in spoken language (in this case, Cree):
To program in Cree#, we need to understand not only Cree linguistics but also its cultural logic. To declare a variable (which can be thought of as a storage location for data), one must put it in either a mînisiwat, a berry bag, or maskihkîwiwat, a medicine bag. If the variable is everyday or transient, it would go in the berry bag. If it refers to something with cultural significance or sacred meaning, it goes in the maskihkîwiwat. Each time the programmer declares a variable, they decide where it best fits.
This is partly about recognizing the ways in which many programming languages effectively assume that their users speak English, or think in English, but it’s also rethinking the way that programming works, introducing new concepts through different usage (just as spoken language does).
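Piet’s codel mechanics can be made concrete in a few lines. The command table below follows the published Piet specification (the operation depends only on the change in hue and lightness between adjacent codels); the helper function and the “lightness hue” color-name format are my own illustrative choices, not part of any real interpreter.

```python
# Piet's hue cycle and lightness cycle, per the language specification.
HUES = ["red", "yellow", "green", "cyan", "blue", "magenta"]
LIGHTNESS = ["light", "normal", "dark"]

# (hue steps, lightness steps) -> command, from the Piet spec.
COMMANDS = {
    (0, 1): "push",      (0, 2): "pop",
    (1, 0): "add",       (1, 1): "subtract",    (1, 2): "multiply",
    (2, 0): "divide",    (2, 1): "mod",         (2, 2): "not",
    (3, 0): "greater",   (3, 1): "pointer",     (3, 2): "switch",
    (4, 0): "duplicate", (4, 1): "roll",        (4, 2): "in(number)",
    (5, 0): "in(char)",  (5, 1): "out(number)", (5, 2): "out(char)",
}

def command(from_color, to_color):
    """Decode the command implied by moving from one codel color to another."""
    l1, h1 = from_color.split()
    l2, h2 = to_color.split()
    hue_step = (HUES.index(h2) - HUES.index(h1)) % 6
    light_step = (LIGHTNESS.index(l2) - LIGHTNESS.index(l1)) % 3
    return COMMANDS.get((hue_step, light_step), "none")

print(command("normal red", "normal yellow"))  # prints "add"
print(command("light blue", "dark red"))       # prints "not"
```

Both of the article’s examples fall out of the table: red to yellow at the same brightness is one hue step (ADD), and light blue to dark red is two hue steps plus two lightness steps (NOT).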
“Spain’s Ingenious Water Maze,” by Keith Drew
In the area around Valencia, on Spain’s east coast, a unique irrigation system, developed over a thousand years ago when Spain was under Moorish rule, produces “an incredibly diverse crop yield. Centuries-old local rice varieties grow in the fields around Lake Albufera, south of the city, while unique species like chufa, or tiger nuts (which are used to make the ice-cold milky Valencian drink of horchata), are sown in the north.” The system is based on each farmer’s right to a proportion (rather than a set amount) of whatever water is available from the River Turia:
Eight main irrigation channels, or acequías, funnel water from the River Turia, which is then carried – by gravity – along a series of smaller branches, which distribute the water to thousands of tiny plots across the fields. The amount of water each plot receives isn’t measured in terms of volume but rather on how well the river is flowing. The unit, known as a fila (from the Arabic word meaning “thread”), represents an individual’s right to a proportion of the water over a period of time; the irrigation cycle usually lasts a week, but when the river’s level is low, the cycle is extended. It’s an incredibly efficient system. Each plot receives the same access to water for the same amount of time, no matter where they are in the mosaic, and there are no water shortages, even in periods of drought.
Reading about this, I thought of it in comparison to the Doctrine of Prior Appropriation, in the western United States, which entitles claimants to the same amount of water they originally took from a water source, forever; claims are settled in the order they were established, so that the first claimant gets their full share of water before any later claimants get any. In a time of shortage, that may mean that people with later claims get nothing at all. It also leads to all kinds of waste: you can lose your right to some portion of water if you don’t take all of it, so people take water they don’t need and dump it rather than lose access to water they might want later. In Valencia’s system, when there’s a drought, everyone must cut back, but everyone gets the same share of whatever is available. I’m sure there are complications here, but that seems like a dramatic improvement.
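The contrast between the two regimes shows up clearly in a toy calculation (all numbers invented for illustration): prior appropriation fills claims in seniority order, while Valencia’s fila system scales every entitlement to the water actually flowing.

```python
def prior_appropriation(claims, available):
    """Senior claims are filled in full before junior claims get anything.

    `claims` is ordered oldest claim first.
    """
    grants = []
    for claim in claims:
        grant = min(claim, available)
        grants.append(grant)
        available -= grant
    return grants

def proportional(claims, available):
    """Everyone receives the same fraction of their entitlement."""
    total = sum(claims)
    return [available * claim / total for claim in claims]

claims = [40, 30, 30]  # three farmers' full entitlements in a normal year
# Drought year: only 50 units flow.
print(prior_appropriation(claims, 50))  # [40, 10, 0]  -- juniors shorted
print(proportional(claims, 50))         # [20.0, 15.0, 15.0]  -- all cut equally
```

Under prior appropriation the most junior farmer gets nothing; under the proportional rule everyone takes the same fractional cut, which is why the Valencian system can absorb a drought without anyone being shut out.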
I was also interested in the way this system is administered:
The whole process is held together by a unique social organisation that has been governing La Huerta for more than 1,000 years. The Tribunal de las Aguas de la Vega de la València, or Water Court of the Plains of Valencia, was established around 960 CE and as such is officially the world’s oldest judicial body. The tribunal is made up of eight farmers, elected representatives of the communities that work off each of the main irrigation channels, who meet to settle disputes outside the doorway of Valencia Cathedral every Thursday at noon…Proceedings are in Valencian and are ruthlessly quick; all decisions are final.
I’m pretty surprised that this way of doing things has persisted through Spain’s fascist period under Franco, transition to democracy, and joining the EU— to mention only events of the last century or so. It sounds, to a political scientist, like a system that would be very prone to abuse, but if it’s been around for more than a millennium, it must work pretty well.
“Should Leopards Be Paid for Their Spots?”, by Rebecca Mead
Prints and designs inspired by animals are, of course, ubiquitous, and leopard print is among the most common. Most of the time that we see it today, of course, it’s fake, since a garment using an actual leopard’s skin would be either pretty old or very illegal. But that’s a relatively recent development, and hunting leopards for their fur has been very popular, on and off, for centuries: Mead has one example from ancient Egypt, and begins the piece with Jackie Kennedy sparking a fad that may have resulted in the deaths of a quarter of a million leopards. The question Mead is considering here is whether it would make any sense— or help anything— to, in a sense, pay the animals for the use of their coat’s pattern, sort of like they had trademarked it. The money wouldn’t go directly to the leopards themselves, of course, but would be used for conservation efforts. The same thing would happen with images of lions, or any other endangered animals. The thinking behind this “species royalty” is “that a dissociation between the print and the animal itself might get in the way of encouraging leopard conservation through fashion.” By charging a small fee, we could place the animal at the forefront of people’s minds.
I don’t know how viable this is, either logistically or politically. Enforcement would be…challenging, and I can’t imagine any Republicans in the U.S. Congress voting to support such an idea. But the idea that we don’t reliably connect “leopard print” to actual, living leopards, and that this is a problem, is interesting. There’s also a great short history of leopard print in fashion in the piece, which would make it worth reading by itself.
Drive My Car won the Oscar for Best International Feature this year; though I haven’t seen all of the contenders, I think it should have won Best Picture. It’s available to stream in several places, and you should watch it if you haven’t yet.
Okay, so my conference is a significant part of the reason this post is late, but if I am honest another part is Horizon Forbidden West. It’s the sequel to Horizon Zero Dawn, which may actually be my favorite video game. They’re open-world RPGs, which is probably something you either like or you don’t, but what makes them great is the story and characterization. The setting is also gorgeous. If you need me, this is where I will be for a while.