March 23

Open discussion of how the course is going.

March 25

Welcome to Slop World

How the Hostile Internet Is Driving Us Crazy

Jacob Silverman, Financial Times, April 2025


Today’s Plan

  1. The Argument — What is Silverman claiming? (~5 min)

  2. Key Concepts — Chumboxes, slop, enshittification, dead internet theory (~15 min)

  3. The “Hostile Internet” Framework — Architecture as metaphor (~10 min)

  4. Media Theory Connections — Peters, Brecht, Foucault, Kierkegaard (~10 min)

  5. Discussion — What does this mean for how we study media? (~10 min)


Who Is Jacob Silverman?

  • Journalist and tech critic

  • Author of Terms of Service: Social Media and the Price of Constant Connection (2015)

  • Forthcoming book: Gilded Rage: Elon Musk and the Radicalisation of Silicon Valley

  • This article is part of a longer project on how Silicon Valley’s political economy shapes online life


I. The Core Argument


The Thesis in Brief

Silverman argues that the consumer internet has become fundamentally hostile to its users.

This isn’t one problem — it’s a cluster of disorders:

  • AI-generated “slop” flooding platforms

  • Algorithmic recommendation systems tuned for engagement, not truth

  • Advertising models that accept anyone’s money

  • Platforms optimized for metrics irrelevant to human well-being

  • The erosion of shared epistemological ground

He wants to unite these under one framework: the hostile internet.


Why “Hostile”?

The term borrows from hostile architecture — physical design that discourages certain uses of public space (e.g., anti-homeless bench spikes).

Silverman’s analogy: Moynihan Train Hall (NYC, opened 2021)

  • $1.6 billion, meant to honor the lost Penn Station

  • Almost nowhere to sit

  • Plenty of places to shop

  • Giant screens showing ads; train times on small screens

  • Surveillance cameras everywhere

The building is not designed for the traveler. It’s designed to extract from the traveler.


The Internet Version

Today’s internet isn’t really designed for us, but rather to elicit certain responses from us, responses which, to put it loftily, are hostile to human flourishing.

The parallel: digital spaces are designed to extract behavior (clicks, data, purchases, attention) rather than serve the people using them.

This is the central metaphor of the piece.


II. Key Concepts


Concept 1: The Chumbox

What is it? Grids of ads with weird, sexual, heart-warming, or confusing images at the bottom of web pages. “Clickbait” in its most distilled form.

Origin: Advertising networks (e.g., Taboola, Outbrain) filling vacant ad space.

Why it matters for Silverman: The chumbox embodies an “any piece of content will do” philosophy. It signals that content quality is irrelevant — only clicks and impressions matter.

Key move: Silverman argues this isn’t just an ad format anymore. It’s become a business model for the entire internet.


Concept 2: AI Slop

Definition: Low-quality content mass-produced by generative AI — images, videos, text — designed not to inform or entertain but to exist and generate metrics.

Silverman draws on Max Read’s 2024 New York Magazine article:

  • Content creators worldwide use AI tools to churn out material

  • They leverage platform advertising/reward systems (esp. Facebook)

  • This feeds a global grey-market economy of spammers and entrepreneurs

The point: It doesn’t matter what the content says, or if it’s any good. It only matters that it registers as a pageview or an ad impression.


Concept 3: Enshittification

Coined by Cory Doctorow (novelist, tech critic, blogger).

The lifecycle of a platform:

  1. Attract users — the platform is good, useful, often subsidized

  2. Mine users for value — harvest data, build network effects

  3. Degrade the experience — serve advertisers, cut costs, let quality collapse

Silverman treats this as one of several overlapping diagnoses of what’s gone wrong, but not sufficient on its own.

Think about this: Can you name a platform you’ve watched go through these stages?


Concept 4: Dead Internet Theory

The claim (deliberately paranoid): Much of what appears to be human activity online is actually automated — bots, scripts, AI agents.

The reality Silverman notes:

  • Most internet traffic is machine-to-machine (ad networks, infrastructure, metadata)

  • This automated sphere is spilling into spaces meant for real people

  • Bot accounts reply to your posts, “watch” ads, rack up engagement metrics

  • Bots talk to bots, sometimes without “knowing” it

The human layer underneath: Even behind the bots, there are real people — scammers, spammers, arbitrageurs — using AI tools to run schemes at scale.


How These Concepts Relate

| Concept | Focus | Key Claim |
|---|---|---|
| Chumbox | Ad design / content quality | Any content will do → pollution of the public square |
| AI Slop | Content production | AI enables industrial-scale low-quality content |
| Enshittification | Platform lifecycle | Platforms degrade by design once they have market power |
| Dead Internet Theory | Authenticity of interaction | Human discourse is being displaced by automated activity |

Silverman’s argument: none of these is sufficient alone. They are overlapping symptoms of a single structural condition — the hostile internet.


III. The Hostile Internet in Practice


Case Study: X (Twitter) After Musk

Silverman uses his own X feed as an extended example:

The advertiser exodus (2022–)

  • Major brands left after Musk’s acquisition and public outbursts

  • Revenue gap filled by anyone willing to pay

What replaced mainstream ads:

  • Saudi government propaganda, Israeli war messaging

  • Crypto scams, CBD gummies, drop-shippers

  • Porn accounts, “sugar daddy” city guides

  • Aspiring influencers, grifters, self-promoters

Then it got weirder:

  • Promoted posts that sold nothing — just random civilians paying to broadcast

  • Customer service complaints, polls about sentimental guns, babbling conspiracy videos


The Schizophrenic Feed

By spring 2025, Silverman describes promoted posts reaching “a peak of incomprehensibility”:

  • AI-generated art with no recognizable meaning

  • People ranting about religious cults and alleged debts of £400 million

  • A hotel owner asking strangers to call and guess a number

These posts are — in Silverman’s framing — psychotic in their break with reality.

This is not just a metaphor. It reflects a platform whose editorial logic has collapsed. When anyone can pay to broadcast anything, the result is not democratic speech but informational bedlam.


The Hallucinating Chatbot

Silverman recounts a detailed exchange with Grok (xAI’s chatbot):

  1. He asks a factual question about US WeChat users in 2020

  2. Grok cites an analytics firm and a Washington Post article

  3. The link doesn’t work

  4. Grok insists the link is valid, walks through “verification” steps

  5. Eventually admits it may have “conflated details or misattributed the source”

The pattern: Generative AI doesn’t produce facts. It produces answers that fit the shape of facts. When pressed, it confabulates, doubles down, then apologizes.

Silverman calls this a lying machine — not malicious, but structurally indifferent to truth.


Everyday Hostility

Beyond the dramatic cases, Silverman notes the mundane friction of the hostile internet:

  • Audible’s website won’t let you resume your current audiobook — it wants you to shop

  • Slow websites, balky apps, intrusive authentication schemes

  • Basic tasks require fighting through distractions, sales pitches, coercive dark patterns

The cumulative effect: Mistrust, inefficiency, cynicism.

The internet is no longer a tool that helps you accomplish what you want. It’s an environment you must fight through.


IV. Media Theory Connections


Kierkegaard (1843–1855)

The article’s epigraph:

Suppose someone invented an instrument, a convenient little talking tube which, say, could be heard over the whole land… I wonder if the police would not forbid it, fearing that the whole country would become mentally deranged if it were used.

Why this matters: The anxiety that communication technology could drive people mad is not new. Kierkegaard imagined it before the telephone existed.

Silverman is placing today’s internet crisis in a very long history of media panic — but also suggesting that this time the structural conditions may be genuinely different.


John Durham Peters, “Broadcasting and Schizophrenia” (2010)

Peters argues that new communication technologies produce new forms of apparent madness.

Key claims:

  • Schizophrenia was first described as a disorder amid the late 19th- and early 20th-century explosion of telegraph, wireless, and radio

  • Early schizophrenics described hallucinations as being like receiving radio signals

  • Hearing disembodied voices and speaking to no one in particular — once signs of madness — are now routine (phones, podcasts, earbuds)

Peters, citing Foucault: “Each age gets the form of madness it deserves.”

For Silverman: Social media is our era’s schizophrenic medium — unfiltered, unmoderated, tuned to every channel at once.


Peters: Telepathy as Bedlam

Peters imagines what total communicative transparency would look like:

Liberated from all barriers, communication would be indistinguishable from madness.

If everyone could instantly perceive our unfiltered thoughts, the result would be chaos. The “mad” don’t violate communication norms — they show us what it would mean to take the project of total communication seriously.

Application to social media: Platforms promise total connection, unmediated expression, instant reach. The result? Something that looks increasingly like the bedlam Peters described.


Brecht, “The Radio As An Apparatus of Communication” (1932)

Brecht proposed turning radio from a one-way broadcast medium into a two-way communication tool — letting listeners become suppliers of content, not just consumers.

The internet seemed to fulfill this vision. But Silverman’s verdict:

  • Yes, the listeners became suppliers

  • But the relationships formed are as likely to be conflictual as nourishing

  • The “vast network of pipes” ended up controlled by the same kinds of moguls who gave us radio

  • And they “lined those pipes with lead”

Question for us: Is the problem that Brecht’s vision failed, or that it succeeded — and the results were not what anyone expected?


Thomas de Zengotita: “The Flattery of Representation”

Silverman briefly invokes this idea from de Zengotita:

  • Digital ads are supposed to reflect the world back to us as we’d like to see it

  • Someone who looks like you, enjoying something you could have if you click

But what happens when the flattery breaks down?

When ads become bizarre, irrelevant, or incomprehensible, the feedback loop that sustained the illusion collapses. We’re no longer being seduced — we’re being assaulted by noise.

V. Critical Assessment


Strengths of the Piece

  1. Synthetic framing — Unites several critiques (slop, enshittification, dead internet) under one concept

  2. Concrete, experiential evidence — Silverman’s own feed, his Grok exchange, the pirated book

  3. Historical depth — Connects to Kierkegaard, Brecht, Peters, Kraepelin, Foucault

  4. The architecture metaphor — Moynihan Train Hall is vivid and analytically productive

  5. Honest about his own position — He’s embedded in the system he critiques (journalist, X user, an author whose own books now circulate through AI systems)


Weaknesses and Questions

  1. What’s the causal mechanism? — “Hostile internet” names a condition, but is it a theory? What would it predict that the component concepts don’t?

  2. Selection bias — Silverman’s examples are drawn heavily from X, which is arguably an outlier post-Musk. Is Instagram the same? Wikipedia? Discord?

  3. Agency and alternatives — The piece is strong on diagnosis, thin on prescription. What should be done? By whom?

  4. The schizophrenia metaphor — Analytically provocative (via Peters), but risks stigmatizing mental illness. How comfortable are we with this framing?

  5. Who is the “we”? — The article assumes a particular kind of internet user. How universal is this experience?


Broader Questions for Discussion

  • Is the internet getting worse, or are we just becoming more aware of problems that were always there?

  • Silverman cites AI slop as a crisis — but he’s writing in the Financial Times, which has a paywall. Is the “hostile internet” actually a free internet problem?

  • What would a non-hostile internet look like? Has it ever existed?

  • If platforms are hostile by design, is the solution regulation, competition, public alternatives, or something else entirely?

  • How does this argument relate to Shoshana Zuboff’s surveillance capitalism framework?

March 27

Research Question Brainstorm

Your final paper is 5–7 pages with a clear, arguable thesis. Open topic — but it needs to be specific enough to research and argue in four weeks.

Step 1: Write down 2–3 things from this course that stuck with you, surprised you, or made you angry.

Step 2: For each, try to turn it into a question — not “what is X?” but “why does X happen?” or “what are the consequences of X for Y?”

Step 3: Stress-test your best question. Can you imagine someone disagreeing with your answer? Can you find sources? If the answer is obvious or the scope is enormous, narrow it.

A good research question names a specific technology or practice, a specific population or context, and a specific tension or consequence.

By the end of class, have one working question written down — even if it’s rough.