Alarm bells seemed to sound in teachers’ lounges across America late last year with the debut of ChatGPT — an AI chatbot that was both easy to use and capable of producing dialogue-like responses, including longer-form writing and essays. Some writers and educators went so far as to forecast the death of the student paper. However, not everyone was convinced it was time to panic. Plenty of naysayers pointed to the bot’s unreliable results, factual inaccuracies and dull tone, and insisted that the technology wouldn’t replace real writing.
Indeed, ChatGPT and similar AI systems are being used in realms beyond education, but classrooms seem to be where fears about the bot’s misuse — and ideas to adapt alongside evolving technology — are playing out first. The realities of ChatGPT are forcing professors to take a long look at today’s teaching methods and what they actually offer to students. Current types of assessment, including the basic essays ChatGPT can mimic, may become obsolete. But instead of branding the AI as a gimmick or threat, some educators say this chatbot could end up recalibrating the way they teach, what they teach and why they teach it.
At Santa Clara University this month, 32 students began a course called “Artificial Intelligence and Ethics” where the usual method of assessment — writing — would no longer be in use. The course is taught by Brian Green, who also serves as director of the university’s Markkula Center for Applied Ethics, and in lieu of essays, he’ll be setting up one-on-one sessions with each student to hold ten-minute conversations. He said it doesn’t take any more time to evaluate those than to grade essays.
“In that context, you really remove any possibility of text-generating software. And in talking to them, it really becomes all about whether they understand the material,” he said.
But such an approach may not be realistic in all educational contexts, especially in schools where resources are scarcer and teacher-to-student ratios are worse.
On some campuses, the response to such technology has simply been to restrict access. Earlier this month, the New York City Department of Education announced that ChatGPT would be banned on networks and devices throughout its public schools. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” a department spokesperson said in a statement.
And the country’s largest public-school system isn’t alone: Educators at different levels across the world have aired their concerns, and other districts in the U.S., like the Seattle public-school system, have also restricted the technology.
But such bans are hardly a solution. Anyone with access to a smartphone — such as 95 percent of Americans between the ages of 13 and 17, according to Pew Research Center polling conducted last spring — can easily bypass these restrictions without needing a school computer or campus Wi-Fi.
And some teachers told FiveThirtyEight they see bans on ChatGPT as misguided responses that misunderstand what the tool can and cannot provide.
“ChatGPT may have better syntax than humans, but it’s shallow on research and critical thinking,” said Lauren Goodlad, a professor of English and comparative literature at Rutgers University and the chair of its Critical Artificial Intelligence initiative. She said she understands where concern about the tool is coming from but that — at least at the college level — the type and caliber of written tasks that ChatGPT can offer do not replace critical thinking and human creativity. “These are statistical models,” she said. “And so they favor probability, as in they are trained on data, and the only reason they work as well as they do is that they are looking for probable responses to a prompt.”
Those statistical underpinnings create limitations that stifle chatbots’ originality — for instance, such models favor more common words at the expense of rarer ones that human authors might use. Goodlad also pointed out that, for now, the tool is not always accurate. ChatGPT is prone to “hallucinations,” such as providing false sources and quotations.
Those kinds of markers may currently help teachers not only catch students attempting to pass off ChatGPT-generated text as their own writing, but also institute measures that encourage students to do the work themselves. Some suggestions that Goodlad and her colleagues have outlined include asking students to reference class discussions in their work, to attach a reflection video or blurb explaining why they chose the writing points they did, and to require that specific rhetorical skills appear in the piece.
But it’s most important that schools evolve by changing what they emphasize in their syllabi, Goodlad said, suggesting that educators instead lean into teaching methods and written assessments that underscore critical thinking. Otherwise, these approaches could quickly become outdated.
“The entire space has essentially become an arms race,” Green said, adding that anti-cheating technology remains in perpetual competition with the technology built to circumvent it, as has been the case for years with plagiarism detectors like TurnItIn. The dynamic with ChatGPT will likely follow the same pattern. For example, earlier this month, Princeton student Edward Tian revealed that he’d developed software to detect ChatGPT-written work. And while the news drew some praise, many see such detection as merely a stopgap measure.
“These tools are only going to get more advanced,” said Hod Lipson, a professor of mechanical engineering and data science at Columbia University. “This is not unlike the beginning of the internet or Wikipedia. And it would have been a mistake to prohibit students from using Wikipedia or Google search, right?” The question is not whether to ban the technology but how to evolve alongside it, he said.
Lipson is trying to integrate ChatGPT and similar technologies into his teaching. For example, in his introductory robotics course this semester, he’ll be asking his students to use DALL-E — image-generating software developed by ChatGPT’s parent company, OpenAI, and underpinned by similar technology — to help ideate initial sketches for the robotics project they’ll work on throughout the term. “With just a few keywords,” he said, “it takes the machine about 25 seconds to generate maybe 25 designs or concepts — something that would have otherwise taken students a week to produce.”
In lieu of bans, then, the future of teaching may be some combination of new methods utilizing tools like ChatGPT and older approaches — like pen-and-paper exams, as some Australian universities are bringing back — that help regulate students’ reliance on technology.
And many educators, whatever their current approach to ChatGPT, remain optimistic that such technology will ultimately push us to get at the heart of what it means to educate, with a focus on deeper comprehension rather than simply developing a skill.
“We know calculators exist,” said Green. “But we still teach math.”