In almost every classroom there comes a moment when the students teach the lesson and the professor pays attention. Dan Kennedy, who teaches a graduate ethics seminar at Northeastern University in Boston, recently had one of those moments, and it caught him by surprise.
Kennedy had built an artificial intelligence exercise into the course. He gave his five students access to Claude, the AI tool his university licenses through an enterprise agreement with Anthropic, and set them a practical task: take the transcript of a journalism interview, run it through the AI, and evaluate the results.
| Category | Details |
|---|---|
| Topic | AI Skepticism Among Journalism Students |
| Institution | Northeastern University, Boston, Massachusetts |
| Key Figure | Dan Kennedy — Professor, Graduate Ethics Seminar |
| AI Tool Used | Claude by Anthropic (via Northeastern’s enterprise agreement) |
| Reference Subject | Tracy Baim — Head of the LGBTQ+ Media Mapping Project |
| Core Exercise | Students evaluated AI-generated bullet points, news stories, outlines, headlines, and social media posts from an interview transcript |
| Podcast Referenced | What Works — co-hosted by Dan Kennedy and Ellen Clegg |
| Class Size | Five students (one advanced undergrad, rest graduate-level) |
| Related Policy | Chris Quinn’s AI-writing practice at Cleveland.com / The Plain Dealer |
| Publication Date | April 2, 2026 |
The interview subject was Tracy Baim, head of the LGBTQ+ Media Mapping Project and a prominent figure in LGBTQ+ media. The episode came from Kennedy’s own podcast, What Works, and before class he had it transcribed with an AI-powered service. He thought he had a rough idea of what the students would find. He was wrong.
The students turned out to be more skeptical of AI than he was. Kennedy isn’t exactly an evangelist, so that’s saying something. He regularly uses Claude to do background research, generate interview questions, and draft podcast summaries. He is careful with it. He weighs the ethics. Even so, sitting in that seminar room, he found his students’ doubts sharper than he had anticipated.

One student flatly refused to do the exercise. “I’m terrified of generative AI, and I have yet to open any of them,” he wrote in his reflection afterward, adding that he had learned to write in journalism school and that, in his view, handing that over to a machine was disqualifying.
“If you don’t have the time or creativity to put together an article or come up with a tweet on your own, then this might not be the field for you.” It’s the kind of sentence that could sound precious or naïve. Read in full context, it sounds less like a refusal to participate than a principled stance.
The exercise itself was carefully structured. Kennedy split the students into two groups and had them generate a range of outputs from the Baim transcript: bullet points, a 600-word summary, a full news story, a headline, and a social media post. They then evaluated each one.
Was it accurate? Would you use it? Would you have to disclose it? The students found the bullet points too long to be genuinely useful. One noticed that a quote in the AI-generated outline appeared to have been flipped; not drastically wrong, just quietly, subtly off. The kind of thing a reader might miss but a journalist would catch.
That small detail matters more than it might seem. Precision has always been central to journalism. A reversed quote, in the wrong context, can damage a source’s reputation, misrepresent a community, or undermine a publication’s credibility. It isn’t just an error in a vacuum. The students understood this. It may be part of why they pursued journalism in the first place.
The class also confronted a harder question, one now being debated in newsrooms across the country. Kennedy raised the practice of editor Chris Quinn at Cleveland.com and The Plain Dealer, where reporters feed their notes to AI, which drafts the story for human editors to review before publication.
He asked his students whether that practice is ethical, assuming readers are informed. The answers were blunt. One student called it “lazy.” Another framed it differently but reached the same conclusion: somewhere in that handoff, the human element of journalism, the thing that distinguishes it from content creation, gets lost.
Much of the industry operates on the belief that disclosure is the solution: that ethical concerns vanish as long as readers are told AI was used. The journalism students, it seems, disagree. They are thinking not just about what gets labeled but about what gets lost.
The irony here is hard to miss. Kennedy himself uses Claude. His university has an agreement with Anthropic. An institution actively participating in the AI economy put the tool into students’ hands, and the students who experienced it firsthand emerged as its most resistant critics. That doesn’t mean the experiment failed. It may be precisely the point.
As the classroom demonstrated, and as journalism schools are slowly beginning to acknowledge, skepticism is not the same as ignorance. These students weren’t rejecting AI because they didn’t understand it. They were rejecting some of its applications because they understood journalism. As one student put it succinctly, “generative AI has no place in written articles” and should be reserved for tasks like data cleaning or coding.
Another said an AI-generated outline sometimes sparks a good idea, but if she is going to change most of what it gives her, she would rather do the work herself. That’s not a Luddite position. It’s a professional one.
Kennedy closed his account of the class with a quiet worry about the future: the coming generation of students who will arrive at journalism school having grown up with generative AI as a given, with no memory of a world before ChatGPT. Whether those students will be more comfortable with the technology or more critical of it, nobody yet knows. The experiment is still running.
For now, though, the most skeptical voices in the room aren’t the veterans. They’re the newcomers. And depending on your point of view, that is either the most reassuring thing about journalism’s future or the most complicated.
