by Selby Sohn

The first time I encountered Adam Chin's work was during a critique at SF Camerawork. Immediately, I was mesmerized. I saw large-scale pairs of mugshots: forward-facing portraits paired with profiles from a series titled Front and Profile (2019). While one photo was always in focus, the other was blurry, reminiscent of a Marlene Dumas painting. Chin explained that while the in-focus photos were mugshots, the blurry ones were generated by an algorithm guessing what the other side of the face would be. The original mugshots came from the National Institute of Standards and Technology (NIST) database of photos taken between the 1930s and the 1970s. None had names of the subjects or explanations of their arrests; they were devoid of all identifying content.
Chin noticed, however, that the subjects in the database were disproportionately Black. Heather Snider, the former director of SF Camerawork who was leading the critique, saw the problem in this. If the database feeding an algorithm has a racial bias, so will the police who rely on it to identify a criminal suspect. This turned out to be true. In 2020, four people were proven to have been wrongly arrested because of an algorithm. All of them were Black. Algorithms are a reflection of the database that feeds them.

Algorithms are step-by-step instructions a computer follows to complete a task, and they are now integrated into almost every aspect of our lives. Even our cell phone cameras use them to modify how we see ourselves. Chin has a long history of modifying images with computers. He was one of the original employees of Pacific Data Images (a pioneering computer graphics studio), where he designed lighting effects for Shrek, Madagascar, How to Train Your Dragon, and Kung Fu Panda. In his photo-based art, Chin uses algorithms derived from open-source code that he finds through online communities such as GitHub, then debugs and repurposes to fit the project at hand.

There is something soothing about his images because they are blurry. I found myself feeling relieved by this fact while looking at the exhibition. Maybe the tech giants don't know everything about me, I thought. Usually, when I sense a computer looking at me, I imagine the image to be microscopic — that it sees me, sees all of me, all my blemishes, all of my flaws. I envision it tracing my eye contact across a page, noticing what I am noticing, seeing when my pupils dilate. Then my mind snapped back to reality: of course the tech giants know everything about me — they sell that information. Chin's images are blurry because he uses outmoded, experimental algorithms, not the precise, hygienic algorithms of our corporate surveillance state. And it's that messiness, that "badness," recomposed at every pixel, that makes them so compelling. We see the computer struggling, making tentative strokes, unsure of itself.

Snider likened this effect to spirit photographs from the late 19th and early 20th centuries. Seeing them is like witnessing an aura. They captivate us because they shift our reality, showing us something we don't already know. As algorithms progress, this mode of seeing is quickly becoming outmoded. Will we miss the days when the algorithms were "bad"? Chin's current show, Machine Learning, curated by DeWitt Cheng, answers affirmatively, showing us a nostalgia for the present.
SAGAN (2020), a second algorithm-generated project, is a grid of 16 faces, all of the same subject, each slightly askew. The effect resembles the blur of a camera moving while taking a picture, yet somehow, oddly, only portions of each face are blurred. Each face is recognizable yet impossible, a phenomenon known as the "uncanny valley" – a term coined by robotics professor Masahiro Mori to denote our recognition of something as human and our simultaneous, intuitive rejection of it as inhuman. The effect is both alluring and unnerving. For this project, Chin used SAGAN, the same algorithm the U.S. Postal Service relies on to identify numbers, feeding it a dataset of 800 self-portraits made remotely by friends at the start of the pandemic. Their photographs, he said, were far better than any he could have taken because they revealed things he didn't know about them. Maybe that revealing dataset is why the recomposed faces are so off. We see them clearly, but something about them is wrong, wrong yet impossible to pinpoint. The series also includes a 12-minute video (Evolution) showing the faces being slowly composed by the algorithm. We see the computer decide to place a pixel somewhere and then change course, shifting facial expressions as it completes an image.
Photobooth Kiss (2021) consists of photobooth strips Chin culled from friends, eBay, and the Musée Mécanique at Fisherman's Wharf—home to the last photobooth in San Francisco that still relies on darkroom chemicals. Chin fed them into an algorithm that transformed the forward-facing images into composites showing what each couple would look like kissing. These, too, are out of focus, which made the kisses hard for me to see. Eventually, I saw them, which was so satisfying — an experience comparable to Magic Eye images (autostereograms that hide recognizable forms inside composites made of abstract shapes).

Chin's most recent work, a man eating sushi (2022), derives from an open-source version of DALL-E, a program that generates images from text. Type a caption describing what you want, and DALL-E draws on a database of 60 million images from the internet to generate results that fit your description. While Chin's photos purportedly show a man eating sushi, what we see instead is a Francis Bacon-like medley of vaguely violent colors and shapes, an anthropomorphic figure chewing on something that bears no resemblance to sushi. Chin ascribes this gulf between caption and picture to the fact that sushi takes so many forms — a hand roll is very different from inari, for example — which makes it hard for the computer to assemble a stable image. Which is great, actually. If the photos depicted what I expected, they'd be boring. My mind has to guess at what it already knows. I know I should see sushi. Instead, I see something else. I check myself. I see something else again. My mind engages in a feedback loop of wild supposition.

Photographs have always expanded our vision. We can see Earth from outer space and an insect's eyeball in detail. We can also see faster than our eyes allow, the classic example being Eadweard Muybridge's galloping horses, feet off the ground, frozen in time. What will it mean to see what an algorithm sees? Our vision is never truly our own, as we carry with us the imagined sight not only of everyone else but also of cameras and, now, algorithms. For me, Chin's images shimmer because they divulge a new unknown, the unknown we navigate every time we use a search engine or post on social media. And maybe that's why I can't stop looking at them — for the unknown is always a type of secret, and who doesn't want to know a secret?
# # #
Adam Chin: “Machine Learning” @ Chung 24 Gallery through November 12, 2022.
About the author: Selby Sohn is a Bay Area artist, writer, and curator who makes objects and actions on the brink of utility. Currently, their work is on a NASA PACE-1 satellite orbiting Earth, and they have exhibited recently at Cone Shape Top, Mercury 20 Gallery, ATA Window Gallery, Root Division, Berkeley Art Center, East Window, and through the City of Palo Alto Public Art Program. Lately, their writing has been published in Third Iris and Journal.fyi.