Soraya Mazarei
“Training Humans” is an exhibition conceived and curated by Kate Crawford, co-founder and co-director of the AI Now Institute at New York University, and Trevor Paglen, an American artist principally concerned with privacy, surveillance, and how digital technology is redefining both. The website of the Osservatorio Prada, Prada’s dedicated contemporary photography space, describes the exhibition as “the first major photography exhibition devoted to training images: the collections of photos used by scientists to train artificial intelligence (AI) systems in how to ‘see’ and categorize the world.” Upon entering, the visitor faces an open-plan space with photos covering the walls and dark marks pasted onto the windows. Peering closer, thanks to the light streaming through from another part of the building, one can distinguish diagrams and texts demonstrating how to split the human face into x-, y-, and z-axes – how to read it three-dimensionally – as well as texts aspiring to classify people’s appearance so that “humans could be economically employed to make a final specific identification.” These startling efforts seem as though they belong either to the colonial past – they resemble the practices of European doctors in Africa who tried to make a case for Caucasian superiority on physiological terms – or to the present day – calling to mind the Chinese government’s use of facial recognition technology to track the movement of the Uighurs, the country’s Muslim minority, who have been increasingly monitored and moved into internment camps in the western province of Xinjiang.
However, these window decals originate from neither the 19th nor the 21st century – they come from a 1963 American government report, cluing the viewer into the fact that the digital surveillance and classification of appearance is neither antiquated history nor dystopian future, but the recent past and, therefore, the present.
Dragging you back into the moment is a recorded loop playing the same script read in different accents, meant to reveal the listener’s implicit biases in perceiving identical information delivered by people of different genders, regions, or races. The audio plays from a speaker on the stairs leading to the second floor of the exhibition. Before you get there, however, you come across a board showing Michael Lyons, Miyuki Kamachi, and Jiro Gyoba’s “Japanese Female Facial Expression (JAFFE) Database,” developed in 1998. The set contains pictures of ten Japanese women making seven different facial expressions, meant to help machine-learning systems categorize these emotions for themselves. There are several troubling implications here: firstly, that only seven emotions are worth studying; secondly, that there is no room for one’s displayed emotion to differ from the internal one; and lastly, that the meaning of these seven emotions is constant across individuals. This quantification of the human leads to the central focus of the exhibition: ImageNet.
Crawford and Paglen trawled through thousands of images from ImageNet, a database created so that machine learning could take place: humans identified millions of images, attaching subjective labels to them. From these identifications, systems trained on the database began to apply the same labels to individuals based on their characteristics. This, as Crawford and Paglen put so clearly in their article “Excavating AI: The Politics of Images in Machine Learning Training Sets,” leads to extremely problematic results: a young African-American man is labelled a “murder suspect”; a young girl in a bikini is called a “slut.” Other results are simply bizarre: Hugo Chávez and Mahmoud Ahmadinejad, clad in red hard-hats, become “animists.” People tend to believe that the digital, the technological, the scientific is infallibly objective. In “Training Humans,” Crawford and Paglen demonstrate that the calls digital technology and AI make are extrapolations of the prejudices and judgements of human beings.

The exhibition also has a playful side. Crawford and Paglen grant viewers the opportunity to insert themselves into the ImageNet technology: a chance to see how the machine would judge you. I was a “flatmate” and a “little sister” – both correct, as the technology sometimes is. It also gauges your emotion and age range, often missing the mark by a wide margin. The lack of accuracy – the mismatch between what someone is, what someone feels themselves to be, and what the machine reads them as – drew laughter from visitors, though the response might well differ had they had a different racial identity or been dressed differently. Regardless, this section of the exhibition is the most photographed and uploaded: people amused by what digital technology judges them as, uploading the results to a digital platform so that a machine’s judgement may be judged by humans.
Paglen and Crawford shed light on human ignorance: the blame for the apparent desynchrony between human judgements and machine judgements cannot be placed on a breakdown in communication, for it is we who taught the machines to think this way.

Kate Crawford | Trevor Paglen : Training Humans
Osservatorio Prada, Milan
12 Sep. 2019 – 24 Feb. 2020

From Washington, DC. Spent four years living in a Scottish fishing village obtaining a degree in Art History & Modern History before relocating to Milan. Particular interests in German expressionism, works on paper, and people-watching at art fairs.