The "Privacy-First" Classroom

This course is designed as an alternative to the surveillance-heavy environments common in modern higher education. I believe that genuine intellectual risk-taking requires a private, secure space for inquiry, and that our intellectual work belongs to us, not to the firms scraping Canvas data to train teaching and learning AIs.

Data Privacy & Sovereignty

  • No Third-Party Surveillance: This course explicitly rejects the use of SafeAssign, Turnitin, or automated AI-detection software.
  • Data Minimization: We will use the minimum amount of technology required to achieve our learning goals. Your student data stays within the Howard M365 ecosystem or on this local, version-controlled site.
  • Algorithmic Transparency: Any automation used in this course (such as my Power Automate workflows for communication) is designed to support you, not monitor you.

Generative AI & Critical Inquiry

Instead of “detecting” AI, we will analyze it.

  • Policy: While I do not permit the use of LLMs (such as ChatGPT) to write your assignments for you, we will treat them as both tools and objects of Critical Systems Analysis.
  • Reasoning: In the humanities, the process of interpretation is the work. Outsourcing that process to a black-box system undermines your capacity for narrative analysis.

Mentorship & Engagement

  • Human-in-the-Loop: All feedback and evaluation are performed by me, not an algorithm.
  • Communication: I use an automated pipeline to handle timely logistics, which frees me to provide higher-quality one-on-one mentorship and tutoring during office hours.

Classroom Governance

  • Decolonial & Critical Frameworks: We will examine our course materials through the lens of history, ideology, and economic power structures.
  • Accessibility (WCAG 2.1): If any part of this digital ecosystem is difficult for you to navigate, please let me know. This site is engineered for clarity and inclusion.