March 20th

Who Cleans Up and Who Cashes In

AI, Aesthetics, and Hidden Labor


Two Articles, One Industry

Today we’re working with two pieces that approach AI from different angles:

Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” (TIME, January 2023)

Gareth Watkins, “AI: The New Aesthetics of Fascism” (New Socialist, February 2025)

One is investigative journalism. The other is cultural criticism. The question for today: what do they have to do with each other?


Article 1: The TIME Investigation

What happened

Before ChatGPT launched in late 2022, OpenAI had a problem: its language models, trained on text scraped from the internet, would generate racist, violent, and sexually explicit content.

To fix this, OpenAI contracted with Sama, a San Francisco–based outsourcing firm with workers in Kenya, Uganda, and India. Starting in November 2021, Sama employees in Nairobi were sent tens of thousands of text snippets describing child sexual abuse, bestiality, murder, torture, and incest. Their job: read it, classify it, label it — so the model could learn what not to say.


Article 1: The Conditions

Workers earned between $1.32 and $2.00 per hour depending on seniority and performance.

They were expected to process 70 to 250 passages per shift (the numbers are disputed between Sama and the workers themselves).

Four workers interviewed by TIME described being mentally scarred. One reported recurring visions after reading descriptions of child sexual abuse. “Wellness” counseling was available but workers described it as unhelpful and rare.

Sama canceled its OpenAI contract in February 2022 — eight months early — citing the traumatic nature of the work.


Article 1: The Structure

The key structural detail: three layers of distance.

  1. OpenAI designs the system and sets the requirements

  2. Sama (SF-based) manages the contract and the workers

  3. Kenyan employees (many from Kibera, one of Nairobi’s largest informal settlements) do the actual labor

When TIME asked OpenAI about working conditions, an OpenAI spokesperson said the company did not issue productivity targets and that Sama was responsible for managing payment and mental health provisions.

This is the architecture of plausible deniability.


Article 1: Key Concepts

Data labeling — the human work of classifying content so a model can learn from it. Often described as the “hidden” or “ghost” labor of AI.
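To make the concept concrete, here is a minimal sketch of what labeled data looks like from the model's side. All field names and categories here are hypothetical, invented for illustration; the point is that a human reads each passage and assigns the category, and the model later trains on those human judgments.

```python
# Hypothetical example of labeled records in a toxicity-labeling
# pipeline. A human annotator has read each text and assigned a
# category; the "label" field is the product of that labor.
labeled_examples = [
    {"text": "(a violent passage)", "label": "violence"},
    {"text": "(an ordinary passage)", "label": "safe"},
]

def is_flagged(record):
    # A trained filter's job is to reproduce the human judgment
    # encoded in these labels.
    return record["label"] != "safe"

flagged = [r for r in labeled_examples if is_flagged(r)]
print(len(flagged))  # 1
```

Every "safe"-versus-not judgment a deployed chatbot appears to make traces back to records like these, each one read and classified by a person.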

Outsourcing chains — the practice of contracting labor through intermediary firms so that the company whose name is on the product is legally and publicly distanced from the conditions of production.

Content moderation as traumatic labor — the psychological cost of processing the worst content the internet produces, borne disproportionately by workers in the Global South.


Article 2: The New Socialist Essay

The argument

Watkins argues that AI-generated imagery has become the dominant aesthetic form of the political right — not despite its ugliness, but because of it.

His central claim: the right loves AI art not because it looks good but because it looks bad — and that badness is a display of power and a small act of cruelty.


Article 2: Three Reasons the Right Loves AI

1. Contempt for labor. AI imagery signals that you don’t need to hire — or interact with — the kind of educated, urban, often left-leaning workers who do creative work. The absence of human craft is the point.

2. Class solidarity among capital. The tech industry and the political right have formed a mutual alliance. The right normalizes AI by using it; the tech industry rewards the right with platforms and access. Over $1 trillion has been invested in generative AI.

3. The rejection of aesthetic rules. If art involves the making or breaking of aesthetic rules — if even punk and Dada operate within a tradition — then AI art as practiced by the right says: there are no rules but the exercise of power.


Article 2: Key Concepts

“Slop” — the term for mass-produced, low-quality AI content that floods platforms. Shortlisted for Oxford’s 2024 Word of the Year.

“Postmodern conservatism” — Watkins’s term for a right-wing politics that has abandoned even the pretense of reasoned argument in favor of performative cruelty and ironic provocation.

Cruelty as aesthetic principle — the argument that AI imagery’s primary mode of enjoyment comes from knowing it hurts someone: displaces a worker, degrades a subject, or attacks the idea that art can have value.

Normalization through use — the idea that if enough powerful people use a technology, it becomes “too big to fail” regardless of whether it works or has value.


Article 2: A Provocation

Watkins ends with a tactical argument: the most effective response to AI art isn’t moral outrage (the right enjoys that) but ridicule.

“Our most effective weapons against AI, and the right wing that has adopted it, may not be strikes, boycotts or the power of dialectics. They might be replying ‘cringe,’ ‘this sucks,’ and ‘this looks like shit.’”

Hold this in mind — we’ll come back to whether this is sufficient.


The Gap Between the Articles

At first glance, these articles seem to describe different problems:

  • Perrigo (TIME): exploitation of workers in the Global South who perform the traumatic labor that makes AI functional

  • Watkins (New Socialist): the use of AI-generated imagery as an aesthetic weapon by the political right

One is about production. The other is about consumption.

But what if they’re describing the same system from different ends?


A Framework: Front Stage / Back Stage

Think of the AI industry as having two faces:

Front stage: the glossy product, the interface, the generated image, the chatbot that sounds articulate and helpful. This is what users see. This is what Watkins analyzes.

Back stage: the data labeling, the content moderation, the traumatic labor, the outsourcing chains. This is what users don’t see. This is what Perrigo reveals.

The front stage depends on the back stage. ChatGPT is polite because someone in Nairobi read descriptions of child sexual abuse for $1.32 an hour. The AI image that Tommy Robinson posts is “clean” because someone, somewhere, labeled what counts as toxic.