The AI Cheating Problem and How It’s Affecting Learning
As AI tools like ChatGPT enter classrooms, schools are scrambling to define what counts as cheating, raising deeper questions about trust, fairness, and the future of academic integrity.


One of the most pressing questions for students and educators alike is whether using AI constitutes cheating. When ChatGPT was introduced, influential school districts such as Los Angeles and New York swiftly banned it, concerned that it would facilitate cheating among students (Singer, 2023). Those bans were eventually repealed, however, as administrators acknowledged they were ineffective: students were still accessing the program at home or finding workarounds and substitutes on school grounds (Singer, 2023). In retrospect, the rush to prohibit ChatGPT looks like a knee-jerk reaction to a new and still largely misunderstood technology (Singer, 2023).
Keith Ross, Director of Technology and Information Services for the Walla Walla school district, which blocked ChatGPT in February, admitted, “We blocked it to buy us some time to get up to speed on the technology and figure out how to support its use among teachers and students” (Singer, 2023). As understanding of AI’s role in daily life deepens, educators such as Yazmin Bahena, a middle school dual-language and social studies teacher, increasingly advocate integrating it into the learning process: “I do want students to learn to use it. They are going to grow up in a world where this is the norm” (Singer, 2023).
This sentiment echoes the introduction of the pocket calculator in the 1970s: just as calculators went from banned to standard classroom tools, AI tools like ChatGPT may prove indispensable in the modern educational landscape.
Despite these benefits, the temptation to cheat with ChatGPT and similar AI tools remains a major concern among educators. Yet with so many ways to cheat already available, should AI really be the main worry? A group of Stanford researchers argues it should not be (Spector, 2023). In anonymous surveys conducted long before ChatGPT hit the scene, 60 to 70 percent of students reported engaging in at least one “cheating” behavior during the previous month (Spector, 2023). According to the researchers, that percentage stayed about the same, or even declined slightly, in surveys administered after universally accessible AI software became popular.
Even when the researchers added questions specific to new AI technologies like ChatGPT and how students use them for school assignments, the percentages stayed more or less the same (Spector, 2023). The study suggests that cheating often reflects broader problems within the educational system rather than the technologies per se. Students who feel respected and integral to the learning community tend to participate more actively and honestly in their education; a sense of inclusion and relevance in their coursework reduces their propensity to cheat. Fostering an environment where students feel involved and appreciated may therefore prove more effective than strictly enforcing rules against AI. Given AI’s persistent presence and its potential to enhance engagement, strategies that promote student connection and satisfaction should be the priority.
In addition, MIT professor Melissa Work ran an experiment in her class in which students were required to create a cover letter using only ChatGPT. They could iterate with the tool as much as they wanted to reach the outcome they desired, but the final product had to be AI-generated. Her main takeaway was that the students who produced the best ChatGPT cover letters were also the best writers in the class (Spector, 2023).
While the Stanford researchers found that 60 to 70 percent of students reported cheating, Pew Research found that only a small minority of teens, 13 percent, said they had ever used ChatGPT to help with their schoolwork (Spector, 2023).
Below is a chart showing the Pew Research survey’s findings on teens’ varying attitudes toward using AI for writing essays, solving math problems, and researching new topics. With ChatGPT still relatively new, however, these numbers are likely to change in the future, and the issue will need to be addressed more systematically to level the academic “playing field.”
AI “checkers” are a popular proposed solution to people passing off AI’s work as their own, but research on identifying AI-generated text indicates that there is currently no fully dependable method for distinguishing human-authored writing from AI-authored writing (Spector, 2023). Both human evaluators and AI detection tools have shown limitations in accurately and consistently identifying AI-generated content, with detectors proving somewhat more reliable than human judges (Spector, 2023). In that sense, AI-assisted writing is hard to distinguish from older forms of “help,” such as parents, friends, or tutors who contribute to a student’s essays. There are also plenty of websites where a student can hire someone to write an essay for them; the result may not be the best essay, but it is not so different from asking an AI to write the paper.
References
Singer, N. (2023). Despite cheating fears, schools repeal ChatGPT bans. The New York Times. https://www.nytimes.com/2023/05/04/technology/chatgpt-schools-ban.html
Spector, J. M. (2023). What do AI chatbots really mean for students and cheating? EdSurge. https://www.edsurge.com/news/2023-04-17-what-do-ai-chatbots-really-mean-for-students-and-cheating
Work, M. (2023). Using ChatGPT in the classroom: A real-world experiment. Massachusetts Institute of Technology. https://www.mit.edu/news/chatgpt-classroom-experiment
Pew Research Center. (2023). Teens and AI: How youth are using ChatGPT in schoolwork. https://www.pewresearch.org/internet/2023/08/28/teens-and-chatgpt/
Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems, 32. https://proceedings.neurips.cc/paper_files/paper/2019/file/6c7de1f8363381e08c61da7a3bfa0f9f-Paper.pdf