
Recommendations for July 18, 2016

It’s been a while, and I have quite a bit of stuff this time around. I’ll start with a few connected things, and finish up with some random bits.

Unintentional Conversation: “Cities Will Have to Be Redesigned to Confuse Invading Robots”, by Geoff Manaugh, “Deep learning is Creating Computer Systems We Don’t Fully Understand”, by James Vincent, and “Inside the Playlist Factory” by Reggie Ugwu

Artificial intelligence is the thing right now in Silicon Valley, drawing large and growing investment. This is because, after decades of technical advances that failed to produce any useful real-world applications, AI has started getting much, much better, orders of magnitude better, thanks in large part to the strategy known as “deep learning”. Deep learning seeks to model the structure of the human brain by creating neural networks that can then be fed massive amounts of data and learn, with relatively little human intervention, to recognize patterns in those data. These advances are what make an idea like self-driving cars even slightly plausible, though, as the recent fatal crash of a self-driving Tesla shows, they are still far from perfect.

The growing prominence of such systems prompts a bunch of interesting questions. Deep learning involves constructing a network, feeding it data, and then tweaking the various “weights” assigned to individual neural nodes until outcomes improve, but the designers of the systems don’t necessarily know how they reach the decisions they do. As research described in Vincent’s piece shows, they often don’t do what we expect: asked to identify what was covering the windows in a picture of a bedroom, the AI systems being tested looked first at the bed, probably because they had figured out that that is what defines a bedroom, even though the kind of room in the picture isn’t relevant to answering the question. None of that matters very much if the task at hand is distinguishing blinds from curtains, but if it’s distinguishing a truck from the sky (as was apparently the issue in the Tesla crash), the stakes are a lot higher.
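For the curious, that “tweak the weights until outcomes improve” loop can be sketched in a few lines. This is a deliberately toy illustration, a single artificial neuron learning the pattern y = 2x by gradient descent, not anyone’s production system; real deep-learning networks run the same basic loop across millions of weights at once, which is exactly why nobody can easily say how they reach a given decision.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)   # input data
y = 2.0 * x                        # the target pattern to learn

w = 0.0                            # the single "weight", initially wrong
lr = 0.1                           # learning rate: how hard to tweak

for step in range(200):
    pred = w * x                   # the network's current guesses
    error = pred - y
    grad = 2 * np.mean(error * x)  # gradient of the mean squared error
    w -= lr * grad                 # nudge the weight to reduce the error

print(round(w, 3))                 # ends up very close to 2.0
```

The point isn’t the arithmetic; it’s that nothing in the loop tells you *why* the final weights work, only that the error went down.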

(I’d also recommend a recent episode of PBS Idea Channel, in which the script was written by an AI system trained on all the previous scripts; as Mike Rugnetta pointed out elsewhere, it got the diction pretty much perfect, and yet the result makes no sense at all.)

The inexplicable (so far) differences that arise between human brains and AI systems as they learn are probably part of what has led music streaming services like Spotify and Apple Music to turn increasingly to human curators, rather than algorithms, for their public playlists. Part of the issue here, certainly, is that the ability of machines to identify similarities between songs is still very imperfect, and is still largely limited to preexisting sets of tags and categories. But part of it also seems to be that listeners can tell the difference between a list made by a machine and one made by a person, with particular tastes and biases, and they respond differently to the latter, even when they don’t exactly like it. There’s an ineffable quality to human choices that machines, at least so far, don’t seem to be able to replicate.

And maybe we should be glad for those weaknesses. One of the issues in deploying AI in the real world is making the environment “machine readable”: that is, making it so that the sensory equipment of machines can draw the kinds of distinctions it needs to for whatever job it’s been assigned. Of course, in the long term the goal would be to make the machines able to “read” the world as it is, but that seems a long way off. One of the reasons that self-driving cars are moving forward relatively quickly is that the road system is already fairly machine readable: clear lines in different colors, arrows, signs with different shapes, etc. Manaugh’s piece is about the ways in which we might make our environment less easy for machines to read, as a means of defending cities against military AI. That still sounds fairly sci-fi, but it won’t for long; already government agencies alter buildings to dampen sound vibrations, absorb laser light, and more, to defeat various snooping technologies. It doesn’t seem such a stretch, then, that such concerns might apply to the broader population before too long.

And, on that note, a short film in which none of this matters, as the distinction between human and machine collapses entirely: “Inside” by Mattis Dovier.



Article: “Why Smart Clowns Immortalize Their Make-up Designs on Ceramic Eggs” by Atlas Obscura

Let’s just get the obvious issue out of the way here first: clowns are kind of creepy. And having their miniaturized faces lined up in rows and staring at you sounds more like a nightmare than a museum. All that said, though, the Clown Egg Register is a clever, and deeply weird, way of dealing with a practical problem: how can a clown create a representation of their makeup design clear and accurate enough to be used as a check against possible copycats? Obviously (?), by painting their face onto a ceramic egg. I have to think that today this would be much easier to do digitally, but in the 1940s, when the Register was created, that was not true, and eggs (I guess?) had the advantage over paper of being three-dimensional. Honestly, I don’t know if this was ever the most logical way to deal with the problem, but there they are: three hundred little clown faces.


Article: “Dime after Dime: A Gripping History of Claw Machines” by Jake Rossen

This isn’t really so much about claw machines themselves as about their precursors: various designs of “diggers”, machines with a miniature steam shovel that could be used to dig prizes out of some kind of base (corn kernels, candy, nickels). Classified as gambling machines in the middle of the twentieth century, many were destroyed or hidden away in basements; a change in the application of the law allowed the rise of the modern claw machine in the 1970s and 1980s. I was mostly interested in this because these machines are the kind of thing we don’t think of as having a history; they’re just there, sucking up money and rarely if ever paying out (unless you’re my dad, who has mastered them).


And, finally, a newsletter about dust. It’s called Disturbances, it’s written by Jay Owens, and I guess you’re either willing to follow a link to a (roughly) bi-monthly newsletter about dust or you’re just not, but it is fascinating. I just discovered it, and am still working my way through the backlog, but I might recommend starting where I did, with the most recent issue, which discusses the relationship between dust and climate change, focusing on snowmelt in Greenland. You might already be getting a sense of how much ground this thing covers from that, but if I were going to try to identify a central insight at this point, it would be this: dust circulates, on a global scale. It moves around the world, across huge distances, in vast quantities, and as it does so it has profound and unpredictable effects. As Owens put it in the first issue, “Dust is part of the earth’s metabolism.”

Part of my purpose in writing this blog is to make connections between disparate things and show that, from the right angle, everything is interesting. I can’t think of a better example of that principle than this.

2 thoughts on “Recommendations for July 18, 2016”

  1. A while back, I worked on a project with Chicago’s McCrone Research Institute, which specializes in microscopy–including analysis of dust particles. One of their major early projects (post WWII) was identifying pollen in dust samples to determine where/when the Soviet Union was secretly testing nuclear weapons. For our museum, they investigated the possibility of using this technique to analyze dust on artifacts associated with the Lincoln assassination, among other things. It was fascinating working with them!
