What Worked No Longer Works
Rethinking Introductory Survey Courses in the Age of AI
I’ve got a problem.
I require substantial writing in my 400-person U.S. history survey course—but now I receive 400 variations on the same essay. The wording, structure, transitions, tone, even the closing sentences are nearly identical.
This isn’t about a few students cheating. It’s about the collapse of an entire pedagogical model. The mass lecture, the take-home prompt, the standardized rubric—all built for a world that no longer exists.
What AI has done is not simply make cheating easier; it has made my entire form of assessment obsolete.
What once measured students’ thought now measures nothing at all. I can’t ban AI or wish it away. I need to change how I teach—what I assign, what I value, and what I mean by learning itself.
For more than 40 years, I’ve taught survey courses to 400 to 600 students at a time. Every week, students wrote brief essays that five to eight teaching assistants spent weekends grading. Today, we estimate at least half are AI-generated. The TAs grade machine output with rubrics a machine could apply.
Meanwhile, I stand at a podium delivering information students could absorb faster by reading, responding to questions I’ve asked identically for decades, never knowing which students are genuinely confused, which are disengaged, and which have insights worth developing.
This is what we call teaching.
The real scandal isn’t that AI threatens to disrupt this system. It’s that we’ve defended it for so long.
We’ve pretended that lecturing to hundreds constitutes mentorship, that having graduate students grade formulaic essays constitutes feedback, that this industrial model represents the humanistic tradition.
AI doesn’t threaten to dehumanize higher education. It reveals how thoroughly we’ve already dehumanized it—and offers us one last chance to recover what we’ve lost.
The Two Kinds of Learning We’ve Confused
Every course involves two distinct but intertwined forms of learning:
Mastery learning—acquiring foundational knowledge, conceptual frameworks, and procedural skills. Students need the vocabulary, chronology, basic facts, and grammar of a discipline.
Inquiry learning—cultivating judgment, creativity, ethical reasoning, and the ability to navigate ambiguity. Students learn to interpret, synthesize, critique, and do something meaningful with knowledge.
For more than a century, we’ve collapsed these two goals into one overburdened classroom experience. Faculty spend most of their time doing what machines can now do with greater patience: delivering content, administering assessments, grading routine work, and diagnosing and correcting common misunderstandings.
What remains—the interpretive, dialogical, meaning-making work that forms mature thinkers—gets squeezed into office hours most students never attend, or into discussion sections led by overworked graduate students, or into paper margins students often don’t read.
We’ve built a system backward. We’ve made humans do the mechanical work and made the humanistic work optional.
What AI Forces Us to Confront
The conventional freshman-level essay is dead. Ask ChatGPT to “explain how slavery caused the Civil War,” and it produces grammatically perfect prose drawing on a consensus of online sources. It even names key historians.
But what it produces is not thought. It’s the historical equivalent of an instant meal—blandly nutritious, superficially complete, and utterly without flavor.
For instructors of large classes, this creates a crisis of assessment. If 400 students can generate nearly identical AI-assisted responses, grading becomes not just futile but meaningless. The crisis forces a fundamental question: What am I trying to teach?
The history survey was never really about information transfer. It was about habits of mind—about cultivating the ability to interpret evidence, weigh competing claims, and understand how human beings in the past grappled with moral and political dilemmas that still haunt us.
That purpose remains. But the path to achieving it must change.
If AI can produce the “product,” then we must shift attention to the process—the reasoning, reflection, and struggle that precede a finished essay. The history survey should no longer test what students can write in isolation. It should be a workshop in how to think with and about the past.
Assignments That Make AI Irrelevant
The instinctive response to AI is surveillance: more proctoring, more detection software, more suspicion.
But detection always lags behind innovation, and policing is a miserable way to teach.
The surest way to make AI irrelevant is to design assignments that reveal actual thinking—assignments requiring attention, interpretation, memory, self-reflection, judgment, and personal specificity. These are tasks AI imitates poorly or cannot imitate at all.
Here are assignments that render machine-generated work useless:
1. The “Source Detective” — Students analyze a primary source they’ve never seen. Choose one slave narrative from our database that we haven’t discussed. Identify a passage that surprised you. Explain what it reveals about power, resistance, or selfhood. AI cannot guess what a student noticed or why.
2. A “Dialogue” Assignment — Students inhabit a voice grounded in course materials. Write a letter from a 1968 draftee explaining his decision to serve, resist, flee, or seek a deferment. Voice, empathy, and historically grounded imagination are difficult to fake.
3. A Historical Disagreement — Students weigh competing interpretations. Discuss “Who freed the slaves?” and explain which arguments you find most persuasive. AI can summarize arguments, but it cannot reproduce a student’s intellectual struggle with course readings.
4. “Apply the Past to the Present” — How has the memory of Vietnam shaped reactions to Afghanistan? What does Prohibition teach us about drug policy? How does 1850s nativism echo today’s immigration politics? This requires analogical thinking AI fumbles without deep context.
5. “If You Were There...” — Students evaluate a morally fraught moment from their own values: Nat Turner’s Rebellion, Japanese American incarceration, the Philippine–American War. The writing must combine emotion, evidence, and self-reflection.
6. “What Would X Say About This?” — How would Tocqueville interpret today’s polarization? How would Ida B. Wells analyze police violence? What would Hannah Arendt say about media echo chambers? This demands mastery of a thinker’s worldview.
7. “In Conversation With AI” — Ask an AI: “Why did the U.S. lose the Vietnam War?” Then critique what it got wrong, oversimplified, or missed—and explain how a stronger historical argument should look. Students learn to evaluate, not rely on, AI.
8. A Personal Analysis — Apply cognitive dissonance, stereotype threat, or confirmation bias to a real moment: a conflict with a friend, an academic setback, a misjudgment.
9. A Family or Cultural Tradition — Interpret a ritual from your background through Durkheim’s collective effervescence, symbolic interactionism, or rites of passage. AI cannot approximate the lived texture of a student’s actual cultural world.
10. A Mini-Ethnography — Observe a dorm hallway, restaurant kitchen, or rehearsal studio and apply course concepts. Authentic observation produces idiosyncratic details AI cannot plausibly guess.
11. The History of a Personal Object — Trace a meaningful item using material culture, technology history, or memory studies. The combination of personal story and scholarly method is impossible for AI to fabricate convincingly.
12. A Detective Narrative — How many men accompanied John Brown to Harpers Ferry? How many Cherokee died on the Trail of Tears? Students must lay out the evidence trail—ship manifests, muster rolls, demographic reconstruction. AI cannot fake step-by-step reasoning.
13. Track Down a Historical Claim — Investigate claims from documentaries or textbooks: “The U.S. has 5% of the world’s population but 25% of its prisoners.” “Half of colonial women were pregnant at marriage.” Students must verify or falsify with real evidence.
14. “Is This Really True?” — Analyze widely repeated assertions using course materials: “The Civil War was caused by slavery—was it?” “Was westward expansion inevitable?” “Was the 1950s really stable?”
15. Public Memory — Evaluate how an event should be remembered: the Mexican–American War, Reconstruction, the Tulsa Massacre. Students must weigh conflicting interpretations and propose ethically grounded remembrance.
Four principles undergird each of these assignments: personal specificity (AI cannot fabricate lived detail), contextual grounding (students must draw on course materials), interpretive work (analysis, not summary), and process visibility (showing how their thinking unfolds over time).
AI is not defeated by surveillance. It is defeated by pedagogy that foregrounds the human mind—its voice, doubt, curiosity, and interpretive labor.
But There’s a Deeper Possibility
These assignments help, but they don’t solve the fundamental problem: I’m still doing this at scale in ways that make authentic learning nearly impossible.
What if we stopped trying to fix the industrial model and actually transformed it?
Here’s what becomes possible when we separate mastery from inquiry:
Students work through AI-guided modules on chronology, concepts, and foundational knowledge. The system doesn’t just quiz them—it converses. When a student writes “checks and balances prevent tyranny,” the AI asks: “Can you give me an example of how this worked—or failed?”
The AI identifies that 40 percent of students think “Reconstruction” meant rebuilding infrastructure, and flags this for me. I receive a dashboard showing not grades but understanding: These 60 students can’t distinguish correlation from causation.
This diagnostic power changes everything. I know exactly where Tuesday’s discussion must begin. I can spot intellectual potential that would have remained invisible when silence could mean confusion, boredom, or intimidation—and I’d never know which.
Having relegated information transfer to technology, I can break the class into eight seminars of 50. My TAs and I don’t review chronology—students arrive having demonstrated mastery.
Instead, we debate: Did the New Deal save capitalism or betray it? What makes the Civil Rights Movement a masterclass in strategy? How do we reconcile the founders’ rhetoric of liberty with slavery’s reality?
Students write brief in-class essays requiring interpretation, not information retrieval. I read their work myself—not 400 formulaic responses, but 50 pieces of genuine intellectual effort—and give feedback that actually improves thinking.
The graduate students? They’re learning to teach—observing seminars, leading small groups, mentoring undergraduates. They’re being trained as educators, not used as cheap labor.
This is implementable today with existing technology.
The Human Dividend
If we automate the machinery of mastery, what becomes possible?
Real conversation. Not performative Q&A where three brave souls ask questions for 400, but sustained dialogue where half the class speaks, where ideas build on each other.
Genuine mentorship. Faculty could meet regularly with individual students—not to review material but to discuss ideas, recommend readings, and open doors to research.
Feedback on work that matters. No more weekends grading whether students remembered that the 14th Amendment was ratified in 1868. Instead, feedback on arguments, interpretations, and original research.
Collaboration and discovery. Time for students to work on projects that approximate what scholars actually do—archival research, experimental design, and interpretation critique.
This wouldn’t deskill faculty. It would re-professionalize us, allowing us to do what we entered this profession to do: guide intellectual transformation.
Why the Last Innovation Wave Failed—And Why This Moment Is Different
MOOCs were supposed to revolutionize education. Completion rates were 5 to 7 percent. Why? They automated the wrong thing—the impersonal lecture—and wondered why students didn’t persist.
Competency-based education made a similar mistake. It stripped away community, rhythm, and relationship. Students got lonely and quit.
This moment is different because we finally understand what to automate and what to preserve. We can use AI to handle the mechanical precisely so we invest human attention in the relational, motivational, and communal dimensions of learning.
The goal should never be replacement. It should be liberation—freeing teachers to be fully human in their work.
The Risks Are Real
No reform worth pursuing is without hazards, and AI introduces several that universities must confront directly.
The first is the illusion of mastery. Because AI can generate correct answers with ease, there is a danger of mistaking fluency for understanding. Students may appear competent while lacking the ability to apply, interpret, or transfer what they’ve learned. This makes human checkpoints indispensable—moments where faculty assess not recall or correctness but insight.
A second risk is the datafication of learning. If every click, keystroke, and hesitation becomes grist for analytics, students cease to be learners and become data subjects. Without strict ethical boundaries, transparency requirements, and limits on data retention, AI-enhanced learning could slide into AI-enhanced surveillance.
A third concern is the erosion of community. If AI becomes the primary venue for individualized instruction, students may drift toward solitary learning experiences, weakening the communal and dialogical dimensions that give education its human shape. The inquiry track—the part of education that fosters debate, curiosity, and shared intellectual struggle—must therefore remain intensely social.
Finally, there is the market logic of efficiency. Administrators may see AI as an invitation to cut costs: fewer sections, fewer instructors, larger classes, more automation. But austerity is already pervasive—adjuncts teach the majority of courses at many institutions.
Any adoption of AI must come with a non-negotiable condition: every dollar saved should be reinvested in human teaching—smaller seminars, better pay and training for graduate instructors, and more full-time faculty lines.
If leaders refuse, they reveal that the goal is not educational improvement but budgetary contraction.
The risks are real. But they are manageable—if faculty make clear that AI’s role is to support human learning, not replace the human beings who make learning possible.
What Faculty Should Actually Fear
If your primary value is delivering content students could learn more efficiently elsewhere, you should be worried. If you’ve been lecturing from the same notes for fifteen years, if your exams test recall rather than understanding—yes, AI can probably do your job.
But is that why you became an academic?
Most of us entered this profession to change minds, guide discoveries, witness the moment when a historical period comes alive, when a mathematical proof clicks, when an ethical dilemma reveals its complexity.
AI can deliver content, diagnose misunderstandings, and provide practice. But it cannot model intellectual engagement. It cannot demonstrate what it looks like to think carefully, to change one’s mind in response to evidence, to hold complexity without collapsing into certainty.
It cannot care about a specific student’s intellectual development the way a human mentor can.
If we use AI to automate the mechanical, we can finally spend our time doing this work—the irreplaceable work that only humans can do.
The Choice Before Us
Five years from now, one of two scenarios will have unfolded:
Scenario One: We’re still having exhausted debates about academic standards. We’re still defending lecture halls as “high-touch learning.” We’re still assigning essays, pretending we don’t know they’re AI-generated. Meanwhile, for-profit companies offer AI tutoring that outperforms what most universities deliver. Enrollment continues its slow decline. We’ve protected a system that was already failing.
Scenario Two: We’ve used this technological moment to recover what universities were meant to be. Students demonstrate genuine mastery because they can actually use what they’ve learned. Faculty spend their time in seminars, mentoring relationships, and collaborative inquiry. The university has rediscovered its ancient vocation: not as an information factory but as a community dedicated to the formation of human judgment and character.
Which scenario unfolds depends on choices we make now.
History as Counter-Technology
AI is astonishingly good at reproducing what already exists. History education, by contrast, reminds students that things could have been otherwise. The historian’s craft models precisely the reflective, interpretive, morally awake intelligence that machines cannot replicate.
The challenge of AI is therefore not the end of the history survey but its renewal. When we redesign assignments to privilege interpretation over recitation, process over product, and judgment over polish, we reclaim the essence of liberal education: the slow, demanding, and irreplaceably human work of understanding.
The deeper truth is this: The real automation already happened. It happened when we industrialized education—when we packed hundreds into lecture halls, when we standardized curricula, when we turned teaching into content delivery and assessment into sorting mechanisms.
We turned professors into content-transmission devices and students into exam-taking units. We built a system that made humans behave like machines.
AI doesn’t threaten to complete that dehumanization. It offers us a chance to reverse it—to finally let humans be human again in their teaching and learning.
The question isn’t whether we’ll use technology. We’re already using it—badly, defensively, in ways that don’t serve learning. The question is whether we’ll use it intentionally, humanely, in service of what universities are actually for.
If we can manage that, we won’t have mechanized learning. We’ll have liberated it.