I knew my writing students were using AI. Their confessions led to a powerful teaching moment

I have been teaching fiction writing at MIT since 2017. Many of my students last wrote fiction in middle school, and very few have experienced a proper workshop, so at the start of every semester I offer these directions for writer and reader alike:

Read the story at least twice. Mark what works and what doesn’t – underline great sentences, flag clunky syntax, gaps in logic and unrealistic dialogue. Ask yourself: does the story work? Why or why not? What could improve it? Answer in a signed letter to the author, attached to their story. Give your honest opinions. Remember that an effective peer review demands close reading of the text accompanied by a boldness of spirit.

As the directions foreshadow, most of the time we’re discussing why we didn’t like the story being workshopped, because writing a good story is immensely difficult even under the best conditions, especially for Stem-centric undergrads who thrive within a structure of quantitative problems and solutions – systems where there’s a right answer and a clean method for arriving at it.

Fiction writing isn’t quantitative. Good writing feels good to read; bad writing feels bad. An effective workshop is a paradox: students must provide textual evidence to support the qualitative as if it were the quantitative. For the habitually superb student, sitting in stony silence while classmates and professor slash at their work is a terrifying prospect. The act of confronting that terror is, itself, an education for the writer, because writing is both vehicle and vessel for thinking – abstract made concrete, feelings translated into words. This is what many writers talk about when they refer to good prose as not just poetic expression, but communication. Thus, when we criticize a writer’s work, not only are we criticizing their aesthetic choices, we’re also criticizing – and here’s where it can get personal – the writer’s feelings and their ability to communicate them.

It’s a lot for the ego to absorb. Until a few years ago, the only ways a fiction writer could protect their ego were to pay someone else to write for them or to resort to plagiarism. AI changed all that.


‘Dead perfection’

AI’s prose is perfectly mediocre, producing the sort of inert gloss that reads like a Frankensteinian amalgam of MFA-workshopped writing, an unintentional parody of the style it mimics. The resultant stories and essays are simulacra of thought, generated via pattern recognition learned from millions of human-penned words, rooted in no particular experience by no particular person. AI writing reminds me of Tennyson’s description of the beautiful Maud in the titular poem:

Faultily faultless, icily regular, splendidly null
Dead perfection; no more

Insightful readers feel that emptiness even if they can’t articulate it. They sense that the body moves without a brain. By contrast, student-written fiction is gloriously flawed, a struggle on the page between what the author is trying to say and what’s actually being said. The prose stumbles in a way reminiscent of a foal learning how to walk: even in their trembling legs I see hints of future grace. Such clumsiness is necessary; its absence would be proof of the foal never having learned to walk.

Death and taxes; technophobia is the third certainty. In 1565, nearly a century after Gutenberg invented the printing press, the Swiss scientist Conrad Gessner was already worrying about the “confusing and harmful abundance of books”. An 1889 article in Nature claimed the telephone was the most dangerous of all inventions “because it enters into every dwelling. Its interminable network of wires is a perpetual menace to life and property.” Now we’ve added AI to the list of worries: a 2025 MIT Media Lab preliminary study found that participants who used ChatGPT to write essays showed lower neural connectivity than those who wrote without assistance.

Other studies warn of similar dangers, from not-yet-peer-reviewed reports with self-explanatory titles such as “AI Assistance Reduces Persistence and Hurts Independent Performance” and “Generative Artificial Intelligence Reliance and Executive Function Attenuation: Behavioral Evidence of Cognitive Offload in High-Use Adults”. Dire stuff, if proven true. But whatever the peer-reviewed findings may be, the central warning is hard to ignore and doesn’t require a study for validation: by letting students routinely and thoughtlessly use AI, we’re weakening their minds. That warning shaped how I addressed AI in my syllabus. Specifically, how I planned to discourage its use:

Playing the AI-detection game drags me into a surveillance mindset that undermines the workshop environment. If you use AI, it reveals your orientation toward writing. Do you want to make art, or just turn in text? Do you want to actually learn how to write, or just pretend to do so?

I was certain my questions would shame them into compliance even without an explicit prohibition. So at the start of last semester, when I read two of my students’ stories for the first workshop and knew within their opening paragraphs that both had been written by AI, I was hurt. I was also worried, because I realized that for the first time as a writing professor, I had to deal with students producing words without work. It wasn’t quite plagiarism and wasn’t quite paying someone else to do the job, but it felt like a kind of naive chicanery, a perversion of the contract between writer and reader.

As the first workshop started that night, I turned to the ostensible authors and told them I knew that AI wrote their stories. I didn’t need AI-detection software to know; I just knew. The prose was too polished for a young writer, the arcs too tidy, every character prepackaged, every metaphor a pastiche without context. I told the class the workshop couldn’t proceed because I won’t give feedback to an author who doesn’t exist, but I assured the would-be authors that they weren’t in trouble. MIT’s policies regarding AI usage were in flux, and my syllabus offered an opening. Besides, had AI been available during my undergrad years, would I have resisted its help? Of course not. The pedagogical borderlands have always been filled with students questing for shortcuts. The technology changes but the quest remains.

For a few moments, all was quiet except the classroom’s ticking radiators. Then, a teary-eyed confession: one of the ostensible authors said she only used AI because she was scared of looking stupid, of being criticized for bad writing. She said she loved writing stories and hated having used AI. But she couldn’t stop herself, and she recounted a sequence similar to an addict’s descent: at first she fed her story into AI for a grammar check; it suggested line edits and she accepted them; then it asked if she wanted structural edits; then it offered to rewrite the entire piece.

The other would-be author admitted he had never written a short story before and he had an idea but didn’t know where to start. I asked him why he didn’t reach out to me for help. He shrugged.

One of the other students raised her hand, saying she didn’t understand why it was bad for AI to write stories as long as the stories were based on the writer’s own ideas. More students spoke: one wanted to know how using AI was any different from using a human editor. Another wanted to know why, at a university that launched one of the world’s first AI research programs in 1959, we were even having this debate. Isn’t AI meant to make everyone’s life easier? Less stressful? Isn’t the point of AI to free humans from the tedium of rote tasks?

The conversation that followed their confessions was one of the most productive teaching moments of my eight years at MIT. Writing, I told them, isn’t supposed to be easy, and of course it can be tedious but that doesn’t make it rote. Writing isn’t just the production of sentences – it’s the training of endurance by way of sustained attention. It’s a way of learning what one thinks by attempting to say it. An LLM can reproduce the appearance of that activity, but it can’t replace it, because the value lies not only in the object produced but in the transformation that occurs during its making.


Bringing back friction

In his 1946 essay Confessions of a Book Reviewer, George Orwell describes himself surrounded by unread books, “constantly inventing reactions towards books about which one has no spontaneous feelings whatever”. High-volume, on-deadline reviewing, he argues, does not merely deform the work of reading – it deforms the self. The mindless manufacture of responses erodes judgment, and standards collapse.

Orwell is describing what happens when language is produced under conditions that disconnect it from thought: the reviewer performs the shape of a response without having actually responded. What Orwell couldn’t have anticipated is that this condition would eventually be outsourced upstream. When a workshop fills with AI-generated fiction, every writer and reader becomes the reviewer Orwell describes.

Orwell ends his essay by arguing that criticism would be healthier if it were slower, more selective, and less industrial. The same argument now applies to writing fiction. AI speeds the writing process, but isn’t at all selective, and – in an ironic cycle – turns the act of creation into the kind of rote task it’s meant to automate.

Going forward, my policy is plainly stated: I don’t want students using AI to write their work. I want their words. I want access to their thinking, their voice, their struggles to find what they want to say and the best way to say it. I want to see what happens when someone tries to move through language without an intermediary finishing the thought.

This is a pedagogical position, not a moral or technical one. The workshop only works if there’s a writer in the room, someone whose thinking is visible on the page, and who can speak directly to that thinking. Using AI to write not only nullifies the entire peer review concept – we’re here to workshop each other, not to workshop AI slop – it also guarantees a weakening of the muscles needed to wrestle with writing.

The danger isn’t that AI will replace writers or render the workshop obsolete. It’s that students are becoming accustomed to bypassing the friction that once revealed their process.

Since that night, our workshops have changed in ways I didn’t anticipate. We talk more openly about frustration, about the moments when a draft resists its own author. I still teach craft – form, structure, revision – but I find myself returning to the tension between thought and language, the stories where abstraction refuses to take shape. We discuss why their thinking matters, and why their struggle to translate thoughts into words isn’t evidence of failure but a sign of growth. Even when, and especially when, words fail. What my students and I now guard isn’t a boundary against machines so much as a sanctuary for authorship, a place where everything on the page and not yet on the page belongs to an actual person.

  • Micah Nathan is a novelist, essayist and MIT lecturer in fiction and nonfiction writing whose books include Gods of Aberdeen and Losing Graceland. His fiction and essays have appeared in Vanity Fair, the Paris Review, Little White Lies, Kinfolk and elsewhere

  • Spot illustrations by Cristina Spanò
