Snow Storm Week

Asynchronous Snow Day Assignment

Please submit below. Due by the end of the day; the portal will close at midnight.

Questioning the LLM in our heads.

Last week, many of us gravitated toward a common theme in our responses: the idea that technology is “cold and heartless” while humans are “warm and empathetic.” In writing studies, we call this a “nearest proximate cliché.” It’s not always a wrong thought, but it’s the default our brains go to because we’ve seen it in so many movies. Like LLM chatbots, which find the most commonly said thing about a subject, we are performing a sort of autocomplete in our heads.

But this is what Ruha Benjamin’s Race After Technology warns against. Benjamin argues that the warmth of human intention can often be the very thing that masks systemic coldness. She wants to challenge our assumption that machines are somehow different from the humans who make them and the human data that has been fed into them. Whenever we are thinking at statistical scale (as in a college admissions process), people are looking at rubrics and numbers just as much as a machine algorithm would, yet algorithms are often sold as “removing human bias” so that only the actual merits of a case are assessed. Benjamin says this is a myth, because it is human data and human assumptions that “code” the algorithm.

The Task: The “Human Mirror” Brainstorm

  1. Identify the Default: Write down one sentence where you previously framed a human as “better” than a machine because of “empathy” or “intuition.” If you didn’t do this on last week’s assignment, examine your own thinking and see where you might have made this assumption.

  2. The Benjamin Reality Check: Using Chapter 3 or 4, find a moment where Benjamin shows a human being “empathetic” or “well-intentioned” in a way that actually caused harm or excluded people.

    • Hint: Look at the “Technological Benevolence” section in Chapter 4.
  3. The Rewrite: Rewrite your original sentence. This time, instead of using “Warm vs. Cold,” use “Accountable vs. Unaccountable.”

    • Example: Instead of saying “A human teacher is warmer than an AI,” try “A human teacher can be held accountable for their grading biases in a way an automated system hides.” Be more specific than this as to HOW that accountability works and how the machine might evade it, even though it is the product of humans and institutions. Try to find HUMAN AGENTS for the harms you see.