

Teachers sound off on ChatGPT, the new AI tool that can write students’ essays for them

Teachers are talking about a new artificial intelligence tool called ChatGPT — with dread about its potential to help students cheat, and with anticipation over how it might change education as we know it.

On Nov. 30, research lab OpenAI released the free AI tool ChatGPT, a conversational language model that lets users type questions — “What is the Civil War?” or “Who was Leonardo da Vinci?” — and receive articulate, sophisticated and human-like responses in seconds. Ask it to solve complex math equations and it spits out the answer, sometimes with step-by-step explanations for how it got there.

According to a fact sheet sent to TODAY.com by OpenAI, ChatGPT can answer follow-up questions, correct false information, contextualize information and even acknowledge its own mistakes.

Some educators worry that students will use ChatGPT to get away with cheating more easily — especially when it comes to the five-paragraph essays assigned in middle and high school and the formulaic papers assigned in college courses. Unlike traditional cheating, in which material is plagiarized by being copied directly or pasted together from other work, ChatGPT pulls content from all corners of the internet to form brand-new answers that aren't derived from one specific source, or even cited.

Therefore, if you search the internet for a ChatGPT-generated essay, you likely won't find it word-for-word anywhere else. This has many teachers spooked — even as OpenAI is trying to reassure educators.

"We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system," an OpenAI spokesperson tells TODAY.com "We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence."

Still, #TeacherTok is weighing in about potential consequences in the classroom.

"So the robots are here and they’re going to be doing our students' homework,” educator Dan Lewer said in a TikTok video . “Great! As if teachers needed something else to be worried about.”

“If you’re a teacher, you need to know about this new (tool) that students can use to cheat in your class,” educational consultant Tyler Tarver said on TikTok.

“Kids can just tell it what they want it to do: Write a 500-word essay on ‘Harry Potter and the Deathly Hallows,’” Tarver said. “This thing just starts writing it, and it looks legit.”

Taking steps to prevent cheating

ChatGPT is already being prohibited at some K-12 schools and colleges.

On Jan. 4, the New York City Department of Education restricted ChatGPT on school networks and devices "due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content," Jenna Lyle, a department spokesperson, tells TODAY.com. "While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success."

A student who attends Lawrence University in Wisconsin tells TODAY.com that one of her professors warned students, both verbally and in a class syllabus, not to use artificial intelligence like ChatGPT to write papers or risk receiving a zero score.

And last month, a student at Furman University in South Carolina got caught using ChatGPT to complete a 1,200-word take-home exam on the 18th-century philosopher David Hume.

“The essay confidently and thoroughly described Hume’s views on the paradox of horror in (ways) that were thoroughly wrong,” Darren Hick, an assistant professor of philosophy, explained in a Dec. 15 Facebook post. “It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullsh--ting after that.”

Hick tells TODAY.com that traditional cheating signs — for example, sudden shifts in a person’s writing style — weren’t apparent in the student’s essay.

To confirm his suspicions, Hick says he ran passages from the essay through a separate OpenAI detector, which indicated the writing was AI-generated. Hick then did the same thing with essays from other students. That time around, the detector suggested that the essays had been written by human beings.

Eventually, Hick met with the student, who confessed to using ChatGPT. She received a failing grade for the class and faces further disciplinary action.

“I give this student credit for being updated on new technology,” says Hick. “Unfortunately, in their case, so am I.”

Getting at the heart of teaching

OpenAI acknowledges that its ChatGPT tool is capable of providing false or harmful answers. OpenAI Chief Executive Officer Sam Altman tweeted that ChatGPT is meant for “fun creative inspiration” and that “it’s a mistake to be relying on it for anything important right now.”

Kendall Hartley, an associate professor of educational technology at the University of Nevada, Las Vegas, notes that ChatGPT is "blowing up fast," presenting new challenges for detection software like iThenticate and Turnitin, which teachers use to cross-reference student work against material published online.

Still, even with all the concerns being raised, many educators say they are hopeful about ChatGPT's potential in the classroom.

"I'm excited by how it could support assessment or students with learning disabilities or those who are English language learners," Lisa M. Harrison, a former seventh grade math teacher and a board of trustee for the Association for Middle Level Education , tells TODAY.com. Harrison speculates that ChatGPT could support all sorts of students with special needs by supplementing skills they haven’t yet mastered.

Harrison suggests working around cheating with coursework that requires additional citations or verbal components. She says personalized assignments — such as asking students to apply a world event to their own personal experiences — could deter the use of AI.

Educators also could try embracing the technology, she says.

"Students could write essays comparing their work to what's produced by ChatGPT or learn about AI," says Harrison.

Tiffany Wycoff, a former elementary and high school principal who is now the chief operating officer of the professional development company Learning Innovation Catalyst (LINC), says AI offers great potential in education.

“Art instructors can use image-based AI generators to (produce) characters or scenes that inspire projects," Wycoff tells TODAY.com. "P.E. coaches could design fitness or sports curriculums, and teachers can discuss systemic biases in writing.”

Wycoff went straight to the source, asking ChatGPT, "How will generative AI affect teaching and learning in classrooms?" and published a lengthy answer on her company's blog.

According to ChatGPT's answer, AI can give student feedback in real time, create interactive educational content (videos, simulations and more), and create customized learning materials based on individual student needs.

The heart of teaching, however, can't be replaced by bots.

"When you think about the amazing teachers you’ve had, it’s likely because they connected with you as a student," Wycoff says. "That won’t change with the introduction of AI."

Tarver agrees, telling TODAY.com, "If a student is struggling and then suddenly gets a 98 (on a test), teachers will know."

"And if students can go in and type answers in ChatGPT," he adds, "we're asking the wrong questions.”

Elise Solé is a writer and editor who lives in Los Angeles and covers parenting for TODAY Parents. She was previously a news editor at Yahoo and has also worked at Marie Claire and Women's Health. Her bylines have appeared in Shondaland, SheKnows, Happify and more.

ChatGPT was tipped to cause widespread cheating. Here's what students say happened

As teachers met before the start of the 2023 school year, there was one big topic of conversation: ChatGPT.

Education departments banned students from accessing the artificial intelligence (AI) writing tool, which could produce essays, complete homework, and be used to cheat on take-home assignments.

Some experts said schools would be swamped by a wave of cheating.

And then? Well, school continued.

The expected wave never broke. Or if it did, it was difficult to detect.

Many teachers said they strongly suspected students were cheating, but it was hard to tell for sure. 

Meanwhile, some schools went in the opposite direction, embracing the new AI tools. Principals said students needed to learn how to use a technology that would probably define their futures.

But there was one perspective missing in all this: that of the students themselves.

As the school year drew to a close, we spoke to Year 11 and 12 students about how they actually ended up using generative AI tools.

Overall, these are stories from a historic moment, and an insight into the future of education. This was the first year in which high school students could easily access high-quality AI writing tools.

Here's what they had to say.

'The chatbot could smash it out in seconds'

Some were initially curious, but also cautious.

Some used it once then stopped. Others kept going.

Some got caught, but many didn't.

For Eric, who asked to remain anonymous, the arrival of ChatGPT in the summer of 2022–23 was a mixed blessing.

To stop students using AI to cheat on take-home assignments, his school switched to more in-class assessments.

But Eric, who has ADHD, struggles to concentrate in exams.

"That sort of crippled my chances at doing great in school," he says.

"Exams are the death of me."

But the new AI writing tool had its uses. In term one, he experimented with cheating, using ChatGPT to write a take-home geography assignment (though it didn't count towards his HSC).

He wasn't caught and got a decent grade.

Next, he gave ChatGPT his homework. 

"It's just meaningless, monotonous work. And, you know, the chatbot could just smash it out in seconds."

Eric estimates that, over the course of 2023, ChatGPT wrote most of his homework.

"So around 60 per cent of my homework was written by ChatGPT," he says.

AI cheating creates divide among classmates

Students at other schools told similar stories.

Chrysoula, a Year 11 student in Sydney, initially used ChatGPT "very often to complete homework I deemed just tedious".

A lot of her classmates were doing the same, she says.

"Any time someone read a good answer out in class discussions, there was someone leaning over whispering 'Chat[GPT]?'

"Everyone doubted the authenticity of everyone's answers."

But as the year drew on, Chrysoula found her "critical and analytical thinking was slightly impaired".

"I was beginning to form a dependence on the AI to feed me the knowledge to rote learn."

Worried about AI harming her learning abilities, Chrysoula blocked herself from accessing the ChatGPT website.

The students who are still cheating are easy to spot, she says.

They often waste time in class and leave coursework to complete at home, when they can access AI tools. Their essays overuse words that ChatGPT favours, like "profound", or lean on "metaphors for things like tapestries".

"They even hand in entire assessments written entirely by AI."

Phil, a Year 12 student at a different high school, also sees a divide that's formed among his classmates, based on how much each student uses AI to do their coursework.

His school allows students to use ChatGPT for ideas and research but not to directly write assessments.

But many students still cheat on take-home assignments, Phil says. He's worried they "aren't learning anything" and their poor performance in the HSC exams will ultimately hurt his ATAR.

"There's a significant population of the school who just aren't doing any work, and that drives us down because of how the ATAR system works."

Harry, a Year 11 student at another school, uses ChatGPT for some homework, but not for assignments.

His reason? The AI's answers aren't good enough.

"When you want to get the top marks, I'd say that's when you do want to use your own brain and you do want to critically think."

How common is AI cheating in schools?

Many of the high school students we spoke with weren't using AI to cheat, even though they could get away with it.

Whether this is representative of students in general is hard to say, as there's very little reliable data on cheating rates.

But at least one survey suggests that rates of AI cheating among students may be lower than generally assumed.

In mid-2023, the Center for Democracy and Technology asked US high school students how much they used generative AI tools to do their coursework, and then compared this with the estimates of teachers and parents.

Kristin Woelfel, a co-author of the report, says teachers and parents consistently overestimate how many students use AI to cheat.

"While 58 per cent of students report having used generative AI in the past year, only 19 per cent of that group reports using it to write and submit papers," she says.

She says the survey data doesn't support the inflammatory predictions about cheating made at the start of the year.

"Students are primarily using generative AI for personal reasons rather than academic."

But Kane Murdoch, head of academic integrity at Macquarie University, says that even if the rate of AI cheating is low now, it's likely to go up.

He believes students will gain confidence, and learn how to use AI to automate more of their coursework.

"It could be 2023 was the year they dipped their toe in the water, but 2024 and moving ahead you’ll get increasingly large body parts into the pool.

"And soon enough they’ll be in the deep end."

Banning AI cheating hasn't worked

Many of the students we spoke to say teachers have little power to stop them from using AI tools for homework or assignments.

The past year proved that blocking access to AI tools, as well as detecting AI-written coursework, was ineffective.

Students described numerous ways of evading the blocks on accessing ChatGPT on school computers, or through a school's Wi-Fi network.

They also described how they copied their ChatGPT-written answers into other AI tools designed to confuse the schools' AI-detection software.

"Nearly every AI detector I've come across is inaccurate," Chrysoula says.

Mr Murdoch agrees.

"There's lot of skepticism about the efficacy of detection — and I'm among those who are skeptical."

He says educators were reluctant to rely on flawed plagiarism-detection tools to accuse a student of cheating.

"As an investigator [of cheating] I'm unwilling to accept the detectors word on it," he says.

ChatGPT-maker OpenAI has warned that there is no reliable way for educators to work out if students are using ChatGPT.

Several major universities in the US have turned off AI detection software, citing concerns over accuracy and a fear of falsely accusing students of using AI to cheat.

Mr Murdoch says Australian universities are also wary of relying on detection software.

But they disagree over what to do next. Some believe that better detection is the answer, while others are pushing for a change to the way learning is assessed.

"Programmatic" assessment, such as interviews and practical exams, may be one answer.

"It would mean we don't assess as much, and what we do assess we can actually hang our hat on," Mr Murdoch says.

"It's more like turning the ship in a very different direction rather than a slight course change.

"It's a much more difficult thing to grapple with."

Schools may go from banning bots to letting them teach

While the impact of AI on students has won most of the public attention, some education experts say the bigger story of 2023 may be how this technology changes the work of teaching.

Over the course of the year, schools have broadly gone from banning the technology to cautiously embracing it.

In February, state and federal education ministers slapped a year-long ban on student use of AI at government-run schools.

Then, in October, the ministers agreed to lift the ban from term one next year.

The announcement brought public schools more in line with the approach of private and Catholic schools.

Federal education minister Jason Clare said at the time that "ChatGPT was not going away".

"We've got to learn how to use it," he said.

"Kids are using it right across the country ... we're playing catch-up."

But some observers are worried this new embrace of AI will replace in-person, teacher-led instruction.

The education industry is now promoting an idea of "platformisation", says Lucinda McKnight, a digital literacy expert at Deakin University.

"This is the idea that bots deliver education. The teacher shortage is solved by generative AI."

Custom-built chatbots, personalised for each student and loaded with the relevant curriculum, would teach, encourage and assess. Teachers would be experts in teaching, rather than in the subject being taught.

But robo-classrooms like this have their drawbacks, Dr McKnight says.

Interacting with (human) teachers plays an important part in a child's emotional and social development.

"It's not the robots that are the problem, it's the fact students are going to be treated like robots," she says.

Macquarie University's Kane Murdoch says education institutions made a series of kneejerk reactions and false starts, such as the ChatGPT ban, in response to emergent AI technology in 2023.

Next year will see them make big changes, he predicts.

He's concerned that AI cheating will ultimately devalue educational qualifications.

"We can't expect this to go away. It is a game changer — it is existential," he says.

"If the public lose faith in what we do, we lose."

Student accounts suggest that, although not all want to cheat when they can, many are happy to automate tasks they see as rote learning rather than true tests of knowledge.

For Eric, who used ChatGPT to do most of his homework this year, the AI tool provided a shortcut through an already flawed system.

"I would have had such a worse time this year if ChatGPT wasn't prominent," he says.

"I think it would be very difficult for you to find a student that was sitting the HSC right now that hasn't used it for something."


ChatGPT and cheating: 5 ways to change how students are graded

Louis Volante, Professor of Education Governance and Policy Analysis, Brock University

Christopher DeLuca, Associate Dean, School of Graduate Studies & Professor, Faculty of Education, Queen's University, Ontario

Don A. Klinger, Pro Vice-Chancellor of Te Wānanga Toi Tangata Division of Education; Professor of Measurement, Assessment and Evaluation, University of Waikato

Disclosure statement

Louis Volante receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).

Don A. Klinger receives funding from the Social Sciences and Humanities Research Council and the New Zealand Qualifications Authority.

Christopher DeLuca does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Universities and schools have entered a new phase in how they need to address academic integrity as our society navigates a second era of digital technologies, which include publicly available generative artificial intelligence (AI) like ChatGPT. Such platforms allow students to generate novel text for written assignments.

While many worry these advanced AI technologies are ushering in a new age of plagiarism and cheating, these technologies also introduce opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills.

We believe the emergence of ChatGPT creates an opportunity for schools and post-secondary institutions to reform traditional approaches to assessing students that rely heavily on testing and written tasks focused on students’ recall, remembering and basic synthesis of content.

Cheating and ChatGPT

Estimates of cheating vary widely across national contexts and sectors.

Sarah Elaine Eaton, an expert who studies academic integrity, cautions that cheating may be under-reported: she has estimated that at Canadian universities, 70,000 students buy cheating services every year.

How the recent launch of ChatGPT by OpenAI will impact cheating in both compulsory and higher education settings is unknown, but how this evolves may depend on whether or not institutions retain or reform traditional assessment practices.

Evading plagiarism detection software?

Popular plagiarism detection tools still struggle to identify cheating when ChatGPT is used to generate assignments.

A recent study, not yet peer reviewed, found that 50 essays generated using ChatGPT were sophisticated enough to evade traditional plagiarism-detection software.

Given that ChatGPT reached an estimated 100 million monthly active users in January, just two months after its launch, it is understandable why some have argued AI applications such as ChatGPT will spur enormous changes in contemporary schooling.

Policy responses to AI and ChatGPT

Not surprisingly, there are opposing views on how to respond to ChatGPT and other AI language models.

Some argue educators should embrace AI as a valuable technological tool, provided applications are cited correctly.

Others believe more resources and training are required so educators are better able to catch instances of cheating.

Still others, such as New York City’s Department of Education, have resorted to blocking AI applications such as ChatGPT from devices and networks.

Forward-thinking assessment

The figure below depicts three critical elements of a forward-thinking assessment system. Although each element could be elaborated, our focus is in offering educators a series of strategies that will allow them to maintain academic standards and promote authentic learning and assessment in the face of current and future AI applications.

[Figure: three overlapping circles, labelled AI, student assessment and academic integrity, intersecting in the middle.]

Teachers and university professors have relied heavily on “one and done” essay assignments for decades. Essentially, a student is assigned or asked to pick a generic essay topic from a list and submit their final assignment on a specific date.

Such assignments are particularly susceptible to new AI applications, as well as contract cheating — whereby a student buys a completed essay. Educators now need to rethink such assignments. Here are some strategies.

1. Consider ways to incorporate AI in valid assessment.

It’s not useful or practical for institutions to outright ban AI and applications like ChatGPT.

AI has already been incorporated into some university classrooms. We believe AI technologies must be selectively integrated so that students are able to reflect on appropriate uses and connect their reflections to learning competencies.

For example, Paul Fyfe, an English professor who teaches about how humans interact with data, describes a “pedagogical experiment” in which he required students to take content from text-generating AI software and weave this content into their final essay.

Students were then asked to confront the availability of AI as a writing tool and reflect on the ethical use and evaluation of language models.

2. Engage students in setting learning goals.

Ensuring students understand how they will be graded is key to any good assessment system.

Inviting students to collaboratively establish learning goals and criteria for the task, with consideration for the role of AI software, would help students to evaluate and judge appropriate contexts in which AI can work as a learning tool.

3. Require students to submit drafts for feedback.

Although students should still complete essay assignments, research into academic integrity policy in response to generative AI suggests students should be required to submit drafts of their work for review and feedback. Apart from helping to detect plagiarism, this kind of “formative assessment” practice is positive for guiding student learning.

Feedback can be offered by the teacher or by students themselves. Peer- and self-feedback can serve to critically evaluate work in progress (or work generated by AI software).

4. Grade subcomponents of the task.

Students could receive a grade for each subcomponent — including their involvement in feedback processes. They would also be evaluated in relation to how well they incorporated and attended to the specific feedback provided.

The assignment becomes bigger than a final essay: it becomes a product of learning, where students’ ideas are evaluated from development to final submission.

5. Move to more authentic assessments or include performance elements.

Good assessment practice involves an educator observing student learning across multiple contexts.

For example, educators can invite students to present their work, discuss an essay in a conference format or share a video articulation or an artistic representation. The aim here is to encourage students to share their learning through an alternative format. An important question to ask is whether you need the essay component at all. Is there a more authentic way to effectively assess student learning?

Authentic assessments are those that relate content to context. When students are asked to do this, they must apply knowledge in more practical settings, often making AI tools less helpful.

For help in rethinking assessment practices towards more authentic and alternative approaches, educators can consider taking the free course, Transforming Assessment: Strategies for Higher Education.

Improve benefits for students

Collectively, these suggestions may be more time-consuming, particularly in larger undergraduate classes.

But they do provide greater learning and synergy between forms of assessment that benefit students: formative assessment to guide teaching and learning, and “summative assessment,” primarily used for grading and evaluation purposes.

AI is here and here to stay, and we must embrace it as part of our learning environment. Incorporating AI into how we assess student learning will yield more reliable assessment processes and valid and valued assessment outcomes.


A professor accused his class of using ChatGPT, putting diplomas in jeopardy

AI-generated writing is almost impossible to detect, and tensions erupting at a Texas university expose the difficulties facing educators.

Students at Texas A&M University at Commerce were in celebration mode this past weekend, as parents filed into the university’s Field House to watch students in cap and gown walk the graduation stage.

But for pupils in Jared Mumm’s animal science class, the fun was cut short when they received a heated email Monday afternoon saying that students were in danger of failing the class for using ChatGPT to cheat. “The final grade for the course is due today at 5 p.m.,” the instructor warned, according to a copy of the note obtained by The Washington Post. “I will be giving everyone in this course an ‘X,’” indicating incomplete.

Mumm, an instructor at the university’s agricultural college, said he’d copied the student essays into ChatGPT and asked the software to detect if the artificial intelligence-backed chatbot had written the assignments. Students flagged as cheating “received a 0.”

He accompanied the email with personal notes in an online portal hosting grades. “I will not grade chat Gpt s***,” he wrote on one student’s assignment, according to a screenshot obtained by The Post. “I have to gauge what you are learning not a computer.”

The email caused a panic in the class, with some students fearful their diplomas were at risk. One senior, who had graduated over the weekend, said the accusation sent her into a frenzy. She gathered evidence to prove her innocence — she’d written her essays in Google Docs, which records timestamps — and presented it to Mumm at a meeting.

The student, who spoke to The Post under the condition of anonymity to discuss matters without fear of academic retribution, said she felt betrayed.

“We’ve been through a lot to get these degrees,” she said in an interview with The Post. “The thought of my hard work not being acknowledged, and my character being questioned. … It just really frustrated me.” (Mumm did not return a request for comment.)

The rise of generative artificial intelligence, which underlies software that creates words, texts and images, is sparking a pivotal moment in education. Chatbots can craft essays, poems, computer code and songs that can seem human-made, making it difficult to ascertain who is behind any piece of content.

While ChatGPT cannot be used to detect AI-generated writing, a rush of technology companies are selling software they claim can analyze essays to detect such text. But accurate detection is very difficult, according to educational technology experts, forcing American educators into a pickle: adapt to the technology or make futile attempts to limit the ways it’s used.

The responses run the gamut. The New York City Department of Education has banned ChatGPT in its schools, as has Sciences Po in Paris, citing concerns it may foster rampant plagiarism and undermine learning. Other professors openly encourage use of chatbots, comparing them to educational tools like a calculator, and argue teachers should adapt curriculums to the software.

Yet educational experts say the tensions erupting at Texas A&M lay bare a troubling reality: protocols on how and when to use chatbots in classwork are vague and unenforceable, with any effort to regulate use risking false accusations.

“Do you want to go to war with your students over AI tools?” said Ian Linkletter, who serves as emerging technology and open-education librarian at the British Columbia Institute of Technology. “Or do you want to give them clear guidance on what is and isn’t okay, and teach them how to use the tools in an ethical manner?”

Michael Johnson, a spokesman for Texas A&M University at Commerce, said in a statement that no students failed Mumm’s class or were barred from graduating. He added that “several students have been exonerated and their grades have been issued, while one student has come forward admitting his use of [ChatGPT] in the course.”

He added that university officials are “developing policies to address the use or misuse of AI technology in the classroom.”

In response to concerns in the classroom, a fleet of companies have released products claiming they can flag AI-generated text. Plagiarism detection company Turnitin unveiled an AI-writing detector in April to subscribers. A Post examination showed it can wrongly flag human-generated text as written by AI. In January, ChatGPT-maker OpenAI said it created a tool that can distinguish between human and AI-generated text, but noted that it “is not fully reliable” and incorrectly labels such text 9 percent of the time.

Detecting AI-generated text is hard. The software searches lines of text and looks for sentences that are “too consistently average,” Eric Wang, Turnitin’s vice president of AI, told The Post in April.

Educational technology experts said use of this software may harm students — particularly nonnative English speakers or basic writers, whose writing style may more closely match what an AI tool might generate. Chatbots are trained on troves of text, working like an advanced version of auto-complete to predict the next word in a sentence — a practice often resulting in writing that is, by definition, eerily average.
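
To make "too consistently average" concrete, here is a minimal sketch of the idea, not Turnitin's actual method: score a passage with a small open language model and measure how predictable each word was. The model choice, thresholds, and sample sentence below are illustrative assumptions, not calibrated values.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Illustrative stand-in: GPT-2 plays the role of whatever model a detector uses.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def surprisal_stats(text):
        """Return the mean and spread of per-token surprisal (negative log-probability)."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Log-probability the model assigned to each token that actually came next.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        next_token_logprob = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
        surprisal = -next_token_logprob
        return surprisal.mean().item(), surprisal.std().item()

    mean_s, std_s = surprisal_stats("The essay explores the profound tapestry of the human experience.")
    # Thresholds here are made up for illustration; real detectors calibrate on large corpora.
    if mean_s < 3.0 and std_s < 2.5:
        print("consistently average text, the kind a detector tends to flag")
    else:
        print("higher or more varied surprisal, more typical of human writing")

The same statistic is also why false positives happen: a careful, plain-spoken human writer can score just as "average" as a chatbot.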

But as ChatGPT use spreads, it’s imperative that teachers begin to tackle the problem of false positives, said Linkletter.

He says AI detection will have a hard time keeping pace with the advances in large language models. For instance, Turnitin.com can flag AI text written by GPT-3.5, but not its successor model, GPT-4, he said. “Error detection is not a problem that can be solved,” Linkletter added. “It’s a challenge that will only grow increasingly more difficult.”

But he noted that even if detection software gets better at detecting AI-generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a 1 or 2 percent rate of false positives will negatively impact dozens or hundreds of innocent students.”

At Texas A&M, there is still confusion. Mumm offered students a chance to submit a new assignment by 5 p.m. Friday to avoid receiving an incomplete for the class. “Several” students chose to do that, Johnson said, noting that their diplomas “are on hold until the assignment is complete.”

Bruce Schneier, a public interest technologist and lecturer at Harvard University’s Kennedy School of Government, said any attempt to crack down on the use of AI chatbots in classrooms is misguided, and history proves that educators must adapt to technology. Schneier doesn’t discourage the use of ChatGPT in his own classrooms.

“There are lots of years when the pocket calculator was used for all math ever, and you walked into a classroom and you weren’t allowed to use it,” he said. “It took probably a generational switch for us to realize that’s unrealistic.”

Educators must grapple with the question of “what does it mean to test knowledge.” In this new age, he said, it will be hard to get students to stop using AI to write first drafts of essays, and professors must tailor curriculums in favor of other assignments, such as projects or interactive work.

“Pedagogy is going to be different,” he said. “And fighting [AI], I think it’s a losing battle.”

'Everybody is cheating': Why this teacher has adopted an open ChatGPT policy

Mary Louise Kelly

Ethan Mollick has a message for the humans and the machines: can't we all just get along?

After all, we are now officially in an A.I. world and we're going to have to share it, reasons the associate professor at the University of Pennsylvania's prestigious Wharton School.

"This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT," Mollick told NPR.

Ever since the chatbot ChatGPT launched in November, educators have raised concerns it could facilitate cheating.

Some school districts have banned access to the bot, and not without reason. The artificial intelligence tool from the company OpenAI can compose poetry. It can write computer code. It can maybe even pass an MBA exam.

One Wharton professor recently fed the chatbot the final exam questions for a core MBA course and found that, despite some surprising math errors, he would have given it a B or a B-minus in the class.

And yet, not all educators are shying away from the bot.

This year, Mollick is not only allowing his students to use ChatGPT, he is requiring them to. And he has formally adopted an A.I. policy into his syllabus for the first time.

He teaches classes in entrepreneurship and innovation, and said the early indications were that the move was going great.

"The truth is, I probably couldn't have stopped them even if I didn't require it," Mollick said.

This week he ran a session where students were asked to come up with ideas for their class project. Almost everyone had ChatGPT running, asking it to generate projects and then interrogating the bot's ideas with further prompts.

"And the ideas so far are great, partially as a result of that set of interactions," Mollick said.

Users experimenting with the chatbot are warned before testing the tool that ChatGPT "may occasionally generate incorrect or misleading information."

He readily admits he alternates between enthusiasm and anxiety about how artificial intelligence can change assessments in the classroom, but he believes educators need to move with the times.

"We taught people how to do math in a world with calculators," he said. Now the challenge is for educators to teach students how the world has changed again, and how they can adapt to that.

Mollick's new policy states that using A.I. is an "emerging skill"; that it can be wrong and students should check its results against other sources; and that they will be responsible for any errors or omissions provided by the tool.

And, perhaps most importantly, students need to acknowledge when and how they have used it.

"Failure to do so is in violation of academic honesty policies," the policy reads.

This 22-year-old is trying to save us from ChatGPT before it changes writing forever

Planet Money

This 22-year-old is trying to save us from chatgpt before it changes writing forever.

Mollick isn't the first to try to put guardrails in place for a post-ChatGPT world.

Earlier this month, 22-year-old Princeton student Edward Tian created an app to detect if something had been written by a machine. Named GPTZero, it was so popular that when he launched it, the app crashed from overuse.

"Humans deserve to know when something is written by a human or written by a machine," Tian told NPR of his motivation.

Mollick agrees, but isn't convinced that educators can ever truly stop cheating.

He cites a survey of Stanford students that found many had already used ChatGPT in their final exams, and he points to estimates that thousands of people in places like Kenya are writing essays on behalf of students abroad.

"I think everybody is cheating ... I mean, it's happening. So what I'm asking students to do is just be honest with me," he said. "Tell me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want, and that's all I'm asking from them. We're in a world where this is happening, but now it's just going to be at an even grander scale."

"I don't think human nature changes as a result of ChatGPT. I think capability did."

The radio interview with Ethan Mollick was produced by Gabe O'Connor and edited by Christopher Intagliata.

ChatGPT Cheating: What to Do When It Happens

The latest version of ChatGPT has only been around for a few months. But Aaron Romoslawski, the assistant principal at a Michigan high school, has already seen a handful of students trying to pass off writing produced by the artificial-intelligence-powered tool as their own work.

The signs are almost always obvious, Romoslawski said. Typically, a student will have been turning in work of a certain quality throughout the year, and then “suddenly, we’re seeing these much higher quality assignments pop up out of nowhere,” he said.

Romoslawski and his colleagues don’t start with a punitive response, however. “We see it as an opportunity to have a conversation.”

Those “don’t let the robot do your homework” talks are becoming all too common in schools these days. More than a quarter of K-12 teachers have caught their students cheating using ChatGPT, according to a recent survey by Study.com, an online learning platform.

What’s the best way for educators to handle this high-tech form of plagiarism? Here are six tips drawn from educators and experts, including a handy guide created by CommonLit and Quill, two education technology nonprofits focused on building students’ literacy skills.

1. Make your expectations very clear

Students need to know what exactly constitutes cheating, whether AI tools are involved or not.

“Every school or district needs to put stakes in the ground [on a] policy around academic dishonesty, and what that means specifically,” said Michelle Brown, the founder and CEO of CommonLit. Schools can decide how much or how little students can rely on AI to make cosmetic changes or do research, she said, and should make that clear to students. She recommended “the heart of the policy [be] about allowing students to do intellectually rigorous work.”

2. Talk to students about AI in general and ChatGPT in particular

If it appears a student may have passed off ChatGPT’s work as their own, sit down with them one on one, CommonLit and Quill recommend. Then talk about the tool and AI in general. Questions could include: Have you heard of ChatGPT? What are other students saying about it? What do you think it should be used for? Discuss the promises—and potential pitfalls—of artificial intelligence.

“One of the big concerns right now is that teachers want to encourage curiosity about AI,” said Peter Gault, Quill’s founder and executive director. Strict discipline at this point “doesn’t sit right with teachers where there’s a lot of natural curiosity here.”

Romoslawski uses that approach. And so far, he hasn’t had a student try to use ChatGPT on an assignment twice. “We’ve gotten to the point where it’s a conversation and students are redoing the assignment in their own words,” he said.

3. If students use ChatGPT for an assignment, they must attribute what material they used from it

If students are allowed to use ChatGPT or another AI tool for research or other help, let them know how and why they should credit that information, Brown said. Since users can’t link back to a ChatGPT response, she suggested students share the prompt they used to generate the information in their citation.
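
For instance, a citation along the following lines would give a teacher something concrete to check. This is a hypothetical sketch, loosely adapted from APA Style's guidance for generative AI, with the student's prompt included as Brown suggests; the prompt and date are invented for illustration:

When given the prompt "Summarize the causes of the War of 1812," ChatGPT produced the overview paraphrased above (OpenAI, 2023).

Reference entry: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat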

When Romoslawski and his colleagues suspect a student used ChatGPT to complete an assignment when they weren’t supposed to, he also brings up citation, in part as a way into the conversation.

“We ask the students ‘did you use any resources that you don’t cite?’” he said. “And often, the student says ‘yes.’ And so, then it creates a conversation about how to properly cite and attribute and why we do that.”

4. Ask students directly if they used ChatGPT

Don’t beat around the bush if you suspect a student may have used AI to cheat. Ask them in a very straightforward way if they did, CommonLit and Quill say.

If students say “yes,” Romoslawski likes to get a sense of why. “More often than not, the student was just struggling on the assignment. They had a roadblock. They didn’t know what to do,” he said. “They were crunched for time, because we’re a high-achieving high school and our students are taking some pretty rigorous courses. This was their third homework assignment of the night and they just wanted to get through it.”

If the student says “no,” but you still suspect them of cheating, ask if they got other help with the assignment. If they still say “no,” explain your concerns by pointing out differences between the work they turned in and their previous writing, CommonLit and Quill suggest.

5. Don’t rely on ChatGPT detectors alone to determine if there was cheating

There are a number of tools—including one from OpenAI, ChatGPT’s developer—that purport to be able to distinguish an AI-crafted story or essay from one written by a human. But most of these detectors don’t publish their accuracy rates. And those that do are ineffective about 10 to 20 percent of the time.

“You can’t fully rely on that as the sole proof of academic dishonesty,” Brown said.

6. Make it clear why learning to write on your own is important

Students in general, and particularly students who take advantage of AI to cheat, need to understand what they are missing out on when they take a technology-enabled shortcut. Educators should try to persuade students that learning to write on their own will help them reason and think, or be critical to future job success, Gault said.

But others will need a more immediate incentive. The strongest argument one teacher came up with, according to Quill’s Gault? Tell students that learning to write will make them more persuasive, and therefore, “you can convince your parents to do what you want.”

A version of this article appeared in the March 08, 2023 edition of Education Week as ChatGPT Cheating: What to Do When It Happens


Is Using ChatGPT Cheating?

Published on June 29, 2023 by Eoghan Ryan. Revised on September 14, 2023.

Using ChatGPT and other AI tools to cheat is academically dishonest and can have severe consequences.

However, using these tools is not always academically dishonest. It’s important to understand how to use these tools correctly and ethically to complement your research and writing skills. You can learn more about how to use AI tools responsibly on our AI writing resources page.

Table of contents

  • How can ChatGPT be used to cheat?
  • What are the risks of using ChatGPT to cheat?
  • How to use ChatGPT without cheating
  • Frequently asked questions

ChatGPT and other AI tools can be used to cheat in various ways. This can be intentional or unintentional and can vary in severity. Some examples of the ways in which ChatGPT can be used to cheat include:

  • AI-assisted plagiarism: Passing off AI-generated text as your own work (e.g., essays, homework assignments, take-home exams)
  • Plagiarism: Having the tool rephrase content from another source and passing it off as your own work
  • Self-plagiarism: Having the tool rewrite a paper you previously submitted with the intention of resubmitting it
  • Data fabrication: Using ChatGPT to generate false data and presenting them as genuine findings to support your research

Using ChatGPT in these ways is academically dishonest and very likely to be prohibited by your university. Even if your guidelines don’t explicitly mention ChatGPT, actions like data fabrication are academically dishonest regardless of what tools are used.

ChatGPT does not solve all academic writing problems, and using ChatGPT to cheat can have various negative impacts on yourself and others. ChatGPT cheating:

  • Leads to gaps in your knowledge
  • Is unfair to other students who did not cheat
  • Potentially damages your reputation
  • May result in the publication of inaccurate or false information
  • May lead to dangerous situations if it allows you to avoid learning the fundamentals in some contexts (e.g., medicine)

When used correctly, ChatGPT and other AI tools can be helpful resources that complement your academic writing and research skills. Below are some tips to help you use ChatGPT ethically.

Follow university guidelines

Guidelines on how ChatGPT may be used vary across universities. It’s crucial to follow your institution’s policies regarding AI writing tools and to stay up to date with any changes. Always ask your instructor if you’re unsure what is allowed in your case.

Use the tool as a source of inspiration

If allowed by your institution, use ChatGPT outputs as a source of guidance or inspiration, rather than as a substitute for coursework. For example, you can use ChatGPT to write a research paper outline or to provide feedback on your text.

You can also use ChatGPT to paraphrase or summarize text to express your ideas more clearly and to condense complex information. Alternatively, you can use Scribbr’s free paraphrasing tool or Scribbr’s free text summarizer, which are designed specifically for these purposes.

Practice information literacy skills

Information literacy skills can help you use AI tools more effectively. For example, they can help you to understand what constitutes plagiarism, critically evaluate AI-generated outputs, and make informed judgments more generally.

You should also familiarize yourself with the user guidelines for any AI tools you use and get to know their intended uses and limitations.

Be transparent about how you use the tools

If you use ChatGPT as a primary source or to help with your research or writing process, you may be required to cite it or acknowledge its contribution in some way (e.g., by providing a link to the ChatGPT conversation). Check your institution’s guidelines or ask your professor for guidance.

Using ChatGPT in the following ways is generally considered academically dishonest:

  • Passing off AI-generated content as your own work
  • Having the tool rephrase plagiarized content and passing it off as your own work
  • Using ChatGPT to generate false data and presenting them as genuine findings to support your research

Using ChatGPT to cheat can have serious academic consequences. It’s important that students learn how to use AI tools effectively and ethically.

Using ChatGPT to cheat is a serious offense and may have severe consequences.

However, when used correctly, ChatGPT can be a helpful resource that complements your academic writing and research skills. Some tips to use ChatGPT ethically include:

  • Following your institution’s guidelines
  • Understanding what constitutes plagiarism
  • Being transparent about how you use the tool

No, it’s not a good idea to do so in general—first, because it’s normally considered plagiarism or academic dishonesty to represent someone else’s work as your own (even if that “someone” is an AI language model). Even if you cite ChatGPT, you’ll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot really generate new knowledge. And it lacks specialist knowledge of academic topics. Therefore, it is not possible to obtain original research results, and the text produced may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.

ChatGPT, Cheating, and the Future of Education

In the depths of fall term finals, having completed a series of arduous exams, one student was exhausted. The only thing between her and winter break was a timed exam for a General Education class that she was taking pass-fail. Drained, anxious and feeling “like a deflated balloon,” she started the clock. The exam consisted of two short essays. By the time she finished the first one, she had nothing left to give. To pass the class, all she needed was to turn in something for the second essay. She had an idea.

Before finals started, her friends had told her about ChatGPT, OpenAI’s free new chatbot which uses machine learning to respond to prompts in fluent natural language and code. She had yet to try it for herself. With low expectations, she made an account on OpenAI’s website and typed in the prompt for her essay. The quality of the results pleasantly surprised her. With some revision, she turned ChatGPT’s sentences into her essay. Feeling guilty but relieved, she submitted it: Finally, she was done with the semester.

This student and others in this article were granted anonymity by The Crimson to discuss potential violations of Harvard’s Honor Code and other policies out of concerns for disciplinary action.

Since its Nov. 30, 2022 release, ChatGPT has provoked awe and fear among its millions of users. Yet its seeming brilliance distracts from important flaws: It can produce harmful content and often writes fiction as if it were fact.

Because of these limitations and the potential for cheating, many teachers are worried about ChatGPT’s impact on the classroom. Already, the application has been banned by school districts across the country, including those of New York City, Seattle, and Los Angeles.

These fears are not unfounded. At Harvard, ChatGPT quickly found its way onto students’ browsers. In the midst of finals week, we encountered someone whose computer screen was split between two windows: on the left, an open-internet exam for a statistics class, and on the right, ChatGPT outputting answers to his questions. He admitted that he was also bouncing ideas for a philosophy paper off the AI. Another anonymous source we talked to used the chatbot to complete his open-internet Life Sciences exam.

But at the end of the fall term, Harvard had no official policy prohibiting the use of ChatGPT. Dean of the College Rakesh Khurana, in a December interview with The Crimson, did not view ChatGPT as representing a new threat to education: “There have always been shortcuts,” he said. “We leave decisions around pedagogy and assignments and evaluation up to the faculty.”

On Jan. 20, Acting Dean of Undergraduate Education Anne Harrington sent an email to Harvard educators acknowledging that ChatGPT’s abilities have “raised questions for all of us.”

In her message, Harrington relayed guidance from the Office of Undergraduate Education. At first glance, the language seemed to broadly prohibit the use of AI tools for classwork, warning students that Harvard’s Honor Code “forbids students to represent work as their own that they did not write, code, or create,” and that “Submission of computer-generated text without attribution is also prohibited by ChatGPT’s own terms of service.”

But, the email also specified that instructors could “use or adapt” the guidance as they saw fit, allowing them substantial flexibility. The guidance did not clarify how to view the work of students who acknowledge their use of ChatGPT. Nor did it mention whether students can enlist ChatGPT to give them feedback or otherwise supplement their learning.

Some students are already making the most of this gray area. One student we talked to says that he uses ChatGPT to explain difficult mathematical concepts, adding that ChatGPT explains them better than his teaching fellow.

Natalia I. Pazos ’24 uses the chatbot as a kind of interactive SparkNotes. After looking through the introduction and conclusion of a dense Gen Ed reading herself, she asks ChatGPT to give her a summary. “I don’t really have to read the full article, and I feel like it gives me sometimes a better overview,” she says.

Professors are already grappling with whether to ban ChatGPT or let students use it. But beyond this semester, larger questions loom. Will AI simply become another tool in every cheater’s arsenal, or will it radically change what it means to learn?

‘Don’t Rely on Me, That’s a Crime’

Put yourself in our place: It’s one of those busy Saturdays where you have too much to do and too little time to do it, and you set about writing a short essay for History 1610: “East Asian Environments.” The task is to write about the impact of the 2011 earthquake, tsunami, and subsequent nuclear meltdown in Japan. In a database, you encounter an image of a frozen clock in a destroyed school. Two precious hours pass as you research the school, learning about the ill-prepared evacuation plans and administrative failures that led to 74 children’s deaths. As you type up a draft, your fingers feel tense. It’s a harrowing image — you can’t stop envisioning this clock ticking down the moments until disaster. After spending six hours reading and writing, you finally turn the piece in.

But what if the assignment didn’t have to take so much time? We tried using ChatGPT to write this class essay (which, to be clear, was already turned in). After a minute or two refining our prompts, we ended up with a full essay, which began with:

Ticking away the moments of a typical school day, the clock on the wall of Okawa Elementary School suddenly froze on March 11, 2011, as the world around it was shattered by a massive earthquake and tsunami. The clock, once a symbol of the passage of time, now stood as a haunting reminder of the tragedy that had struck the school.

In less than five minutes, ChatGPT did what originally took six hours. And it did it well enough.

Condensing hours into minutes is no small feat, and an enticing prospect for many. Students have many different demands on their time, and not everyone puts academics first. Some pour their time into intense extracurriculars, pre-professional goals, or jobs. (People also want to party.)

Yet the principle underlying a liberal arts curriculum — one that’s enshrined in Harvard’s mission — is the value of intellectual transformation. Transforming yourself is necessarily difficult. Learning the principles of quantum mechanics or understanding what societies looked like before the Industrial Revolution requires deconstructing your worldview and building it anew. Harvard’s honor code, then, represents not just a moral standard but also an expectation that students go through that arduous process.

That’s the theory. In practice, many students feel they don’t always have the time to do the difficult work of intellectual transformation. But they still care about their grades, so they cut corners. And now, just a few clicks away, there’s ChatGPT: a tool so interactive it practically feels like it’s your own work.

So, can professors stop students from using ChatGPT? And should they?

This semester, many instructors at Harvard prohibited students from using ChatGPT, treating it like any other form of academic dishonesty. Explicit bans on ChatGPT became widespread, greeting students on syllabi for classes across departments, from Philosophy to Neuroscience.

Some instructors, like professor Catherine A. Brekus ’85, who teaches Religion 120: “Religion and Nationalism in the United States: A History,” directly imported the Office of Undergraduate Education’s suggested guidance onto their syllabi. In other courses, like Spanish 11, instructors simply told students not to use it in an introductory lecture. The syllabus for Physical Sciences 12a went so far as to discourage use of the tool with multiple verses of a song written by ChatGPT:

I’m just a tool, a way to find some answers

But I can’t do the work for you, I’m not a dancer

You gotta put in the effort, put in the time

Don’t rely on me, that’s a crime

Making these professors’ lives difficult is that, at the moment, there is no reliable way to detect whether a student’s work is AI-generated. In late January, OpenAI released a classifier to distinguish between AI and human-written text, but it only correctly identified AI-written text 26 percent of the time. GPTZero, a classifier launched in January by Princeton undergraduate Edward Tian, now claims to correctly identify human-written documents 99 percent of the time and AI-written documents 85 percent of the time.
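Figures like these are per-class detection rates: of the texts actually written by AI, what fraction did the detector flag, and likewise for human-written texts. As a minimal sketch (the labels and verdicts below are invented; neither OpenAI’s nor GPTZero’s evaluation data is public), here is how such rates are computed:

```python
# Hypothetical evaluation of an AI-text detector.
# Each pair is (true origin of the text, detector's verdict).
samples = [
    ("ai", "ai"), ("ai", "human"), ("ai", "human"), ("ai", "ai"),
    ("human", "human"), ("human", "human"), ("human", "ai"), ("human", "human"),
]

def detection_rate(label: str) -> float:
    """Fraction of texts with this true origin that the detector got right."""
    verdicts = [pred for true, pred in samples if true == label]
    return verdicts.count(label) / len(verdicts)

print(f"AI-written texts correctly flagged:   {detection_rate('ai'):.0%}")     # 50%
print(f"Human-written texts correctly passed: {detection_rate('human'):.0%}")  # 75%
```

The two rates trade off against each other: a detector tuned to flag more AI text will also misflag more human writing, which is why a single headline percentage can be misleading.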

Still, a high likelihood of AI involvement in an assignment may not be enough evidence to bring a student before the Honor Council. Out of more than a dozen professors we’ve spoken with, none currently plan to use an AI detector.

Not all instructors plan to ban ChatGPT. Incoming assistant professor of Computer Science Jonathan Frankle questions whether students in advanced computer science classes should be forced to use older, more time-consuming tools if they’ve already mastered the basics of coding.

“It would be a little bit weird if we said, you know, in CS 50, go use punch cards, you’re not allowed to use any modern tools,” he says, referring to the tool used by early computer scientists to write programs.

Harvard Medical School professor Gabriel Kreiman feels similarly. In his courses, students are welcome to use ChatGPT, whether for writing their code or their final reports. His only stipulation is that students inform him when they’ve used the application and understand that they’re still responsible for the work. “If it’s wrong,” he says, “you get the grade, not ChatGPT.”

Kumaresh Krishnan, a teaching fellow for Gen Ed 1125: “Artificial & Natural Intelligence,” believes that if the class isn’t focused on how to code or write, then ChatGPT use is justified under most circumstances. Though he is not responsible for the academic integrity policy of the course, Krishnan believes that producing a nuanced, articulate answer with ChatGPT requires students to understand key concepts.

“If you’re using ChatGPT that well, maybe you don’t understand all the math behind it, maybe you don’t understand all the specifics — but you're understanding the game enough to manipulate it,” he says. “And that itself, that’s a win.”

The student who used ChatGPT for an open-internet life sciences exam last semester says he had mastered the concepts but just couldn’t write fast enough. ChatGPT, he says, only “fleshed out” his answers. He received one of the highest grades in the class.

While most of the teachers we spoke with prohibit the use of ChatGPT, not everyone has ruled out using it in the future. Harvard College Fellow William J. Stewart, in his course German 192: “Artificial Intelligences: Body, Art, and Technology in Modern Germany,” explicitly forbids the use of ChatGPT. But for him, the jury is still out on ChatGPT’s pedagogical value: “Do I think it has a place in the classroom? Maybe?”

‘A Pedagogical Challenge’

“There are two aspects that we need to think about,” says Soroush Saghafian when asked about ChatGPT. “One is that, can we ban it? Second, should we ban it?” To Saghafian, an associate professor of public policy at the Kennedy School who is teaching a course on machine learning and big data analytics, the answer to both questions is no. In his view, people will always find ways around prohibitive measures. “It’s like trying to ban use of the internet,” he says.

Most educators at Harvard who we spoke with don’t share the sense of panic that permeates the headlines. Operating under the same assumption as Saghafian — that it is impossible to prevent students from using ChatGPT — educators have adopted diverse strategies to adapt their curricula.

In some language classes, for example, threats posed by intelligent technology are nothing new. “Ever since the internet, really, there have been increasingly large numbers of things that students can use to do their work for them,” says Amanda Gann, an instructor for French 50: “Advanced French II: Justice, Equity, Rights, and Language.”

Even before the rise of large language models like ChatGPT, French 50 used measures to limit students’ ability to use tools like Google Translate for assignments. “The first drafts of all of their major assessments are done in class,” Gann says.

Still, Gann and the other instructors made additional changes this semester in response to the release of ChatGPT. After writing first drafts in class, French 50 students last semester revised their papers at home. This spring, students will instead transform their draft composition into a conversational video. To ensure that students don’t write their remarks beforehand — or have ChatGPT write them — the assignment will be graded on how “spontaneous and like fluid conversation” their speech is.

Instructors were already considering an increased emphasis on oral assessments, Gann says, but she might not have implemented it without the pressure of ChatGPT.

Gann welcomes the change. She views the emergence of large language models like ChatGPT as a “pedagogical challenge.” This applies both to making her assignments less susceptible to AI — “Is this something only a human could do?” — and to reducing the incentive to use AI in the first place. In stark contrast to the projected panic about ChatGPT, Gann thinks the questions it has posed to her as an educator “make it kind of fun.”

Stewart thinks that ChatGPT will provide “a moment to reflect from the educator’s side.” If ChatGPT can do their assignments, perhaps their assignments are “uninspired, or they’re kind of boring, or they’re asking students to be repetitive,” he says.

Stewart also trusts that his students see the value in learning without cutting corners. In his view, very few of the students in his high-level German translation class would “think that it’s a good use of their time to take that class and then turn to the translating tool,” he says. “The reason they’re taking that class is because they also understand that there’s a way to get a similar toolbox in their own brain.” To Stewart, students must see that developing that toolbox for themselves is “far more powerful and far more useful” than copying text into Google Translate.

Computer Science professor Boaz Barak shares Stewart’s sentiment: “Generally, I trust students. I personally don't go super out of my way to try to detect student cheating,” he says. “And I am not going to start.”

Frankle, too, won’t be going out of his way to detect whether his students are cheating — instead, he assumes that students in his CS classes will be using tools like ChatGPT. Accordingly, he intends to make his assignments and exams significantly more demanding. In previous courses, Frankle says he might have asked students to code a simple neural network. With the arrival of language models that can code, he’ll ask that his students reproduce a much more complex version inspired by cutting-edge research. “Now you can get more accomplished, so I can ask more of you,” he says.

Other courses may soon follow suit. Just last week, the instructor for CS 181: “Machine Learning” offered students extra credit if they used ChatGPT as an “educational aid” to support them in tasks like debugging code.

Educators across disciplines are encouraging students to critically engage with ChatGPT in their classes.

Harvard College Fellow Maria Dikcis, who teaches English 195BD: “The Dark Side of Big Data,” assigned students a threefold exercise — first write a short analytical essay, then ask ChatGPT to produce a paper on the same topic, and finally compare their work and ChatGPT’s. “I sort of envisioned it as a human versus machine intelligence,” she says. She hopes the assignment will force students to reflect on the seeming brilliance of the model but also to ask, in her words, “What are its shortcomings, and why is that important?”

Saghafian also thinks it is imperative that students interact with this technology, both to understand its uses as well as to see its “cracks.” In the 2000s, teachers helped students learn the benefits and pitfalls of internet resources like Google search. Saghafian recommends that educators use a similar approach with ChatGPT.

And these cracks can be easy to miss. When she first started using ChatGPT to summarize her readings, Pazos recalls feeling “really impressed by how fast it happened.” To her, because ChatGPT displays its responses word by word, “it feels like it’s thinking.”

“One of the hypes about this technology is that people think, oh, it can do everything, it can think, it can reason,” Saghafian says. Through critical engagement with ChatGPT, students can learn that “none of those is correct.” Large language models, he explains, “don't have the ability to think.” Their writing process, in fact, can show students the difference between reasoning and outputting language.

A Troublesome Model

In the Okawa Elementary School essay written by ChatGPT, one of the later paragraphs stated: “The surviving students and teachers were quickly evacuated to safety.”

In fact, the students and teachers were not evacuated to safety. They were evacuated toward the tsunami — which was exactly why Okawa Elementary School became such a tragedy. ChatGPT could describe the tragedy, but since it did not understand what made it a tragedy, it spat out a fundamental falsehood with confidence.

This behavior is not out of the ordinary. ChatGPT consistently makes factual errors, even though OpenAI designed it not to and has repeatedly updated it. And that’s just the tip of the iceberg. Despite impressive capabilities, ChatGPT and other large language models come with fundamental limitations and potential for harm.

Many of these flaws are baked into the way ChatGPT works. ChatGPT is an example of what researchers call a large language model. LLMs work primarily by processing huge amounts of data. This is called training, and in ChatGPT’s case, the training likely involved processing most of the text on the internet — an ocean of niche Wikipedia articles, angry YouTube comment threads, poorly written Harry Potter fan fiction, recipes for lemon poppy seed muffins, and everything in between.

Through that ocean of training data, LLMs become adept at recognizing and reproducing the complex statistical relationships between words in natural language. For ChatGPT, this might mean learning what types of words appear in a Wikipedia article as opposed to a chapter of fanfiction, or what lists of ingredients are most likely to follow the title “pistachio muffins.” So, when ChatGPT is given a prompt, like “how do I bake pistachio muffins,” it uses the statistical relationships it has learned to predict the most likely response to that prompt.
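To make the idea concrete, here is a toy sketch in Python. A real LLM like ChatGPT uses a neural network conditioned on thousands of words of context, not a lookup table, and this corpus is invented, but the core move is the same: learn which words tend to follow which, then repeatedly emit a likely next word.

```python
from collections import Counter, defaultdict

# A tiny stand-in for "most of the text on the internet".
corpus = (
    "how do i bake pistachio muffins ? "
    "how do i bake lemon poppy seed muffins ? "
    "preheat the oven , then mix the batter ."
).split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Repeatedly emit the statistically most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("how"))  # "how do i bake pistachio muffins ? how do"
```

Nothing in this process consults facts: the model only tracks which words co-occur, which is why scale makes the output fluent without making it true.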

Occasionally, this means regurgitating material from its training set (like copying a muffin recipe) or adapting a specific source to the prompt (like summarizing a Wikipedia article). But more often, ChatGPT synthesizes its responses from the correlations it has learned between words. This synthesis is what gives it the uncanny yet hilarious ability to write the opening of George Orwell’s “Nineteen Eighty-Four” in the style of a SpongeBob episode, or explain the code for a Python program in the voice of a wiseguy from a 1940s gangster movie.

This explains the propensity of LLMs to produce false claims even when asked about real events. The algorithms behind ChatGPT have no conception of truth — only of correlations between words. Moreover, the distinction between truth and falsehood on the written internet is rarely clear from the words alone.

Take the Okawa Elementary School example. If you read a blog post about the effects of a disastrous earthquake on an elementary school, how would you determine whether it was true? You might consider the plausibility of the story, the reputability of the writer, or whether corroborating evidence, like photographs or external links, was available. Your decision, in other words, would not depend solely on the text of the post. Instead, it would be informed by digital literacy, fact-checking, and your knowledge of the outside world. Language models have none of that.

The difference between fact and fiction is not the only elementary concept left out of ChatGPT’s algorithm. Despite its ability to predict and reproduce complex patterns of writing, ChatGPT often cannot parse comparatively simple logic. The technology will output confident-sounding incorrect answers when asked to solve short word problems, add two large numbers, or write a sentence ending with the letter “c.” Questioning its answer to a math problem may lead it to admit a mistake, even if there wasn’t one. Given the list of words “ChatGPT,” “has,” “endless,” “limitations,” it told us that the third-to-last word on that list was: “ChatGPT.” (Narcissistic much?)
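The contrast with ordinary programming is stark. The symbolic task ChatGPT fumbled is a one-liner in any conventional language, because indexing, unlike language modeling, operates on explicit structure (the list below simply restates the example above):

```python
words = ["ChatGPT", "has", "endless", "limitations"]
print(words[-3])  # "has" -- the actual third-to-last word
```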

When James C. Glaser ’25 asked ChatGPT to compose a sestina — a poetic form with six-line stanzas and other constraints — the program outputted stanzas with four lines, no matter how explicit he made the prompt. At some point during the back-and-forth, he says, “I just sort of gave up and realized that it was kind of ridiculous.”

Lack of sufficient training data in certain areas can also affect ChatGPT’s performance. Multiple faculty members who teach languages other than English told us ChatGPT performed noticeably worse in those languages.

The content of the training data also matters. The abundance of bias and hateful language on the internet filters into the written output of LLMs, as leading AI ethics researchers such as Timnit Gebru have shown. In English language data, “white supremacist and misogynistic, ageist, etc., views are overrepresented,” a 2021 study co-authored by Gebru found, “setting up models trained on these datasets to further amplify biases and harms.”

Indeed, OpenAI’s GPT-3, a predecessor of ChatGPT that powers hundreds of applications today, is quick to output paragraphs with racist, sexist, antisemitic, or otherwise harmful messages if prompted, as the MIT Technology Review and others have shown.

Because OpenAI has invested heavily in making these outputs harder to reproduce for ChatGPT, ChatGPT will often refuse to answer prompts deemed dangerous or harmful. These barriers, however, are easily sidestepped, leading some to point out that AI technology could be used to manufacture fake news and hateful, extremist content.

In order to reduce the likelihood of such outputs, OpenAI feeds explicitly labeled examples of harmful content into its LLMs. This might be effective, but it also requires humans to label thousands of examples, often by reading through nightmarish material to decide whether it qualifies as harmful.

As many other AI companies have done, OpenAI reportedly chose to outsource this essential labor. In January, Time reported that OpenAI had contracted out the labeling of harmful content to Kenyan workers paid less than $2 per hour. Multiple workers recalled encountering horrifying material in their work, Time reported.

“Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content,” an OpenAI spokesperson told Time.

Even with the viral popularity of ChatGPT and a new $10 billion investment from Microsoft, legal issues loom over OpenAI. If, in some sense, large language models merely synthesize text from across the internet, does that mean they are stealing copyrighted material?

Some argue that OpenAI’s so-called breakthrough might be illegal. A class action lawsuit filed just weeks before the release of ChatGPT alleges that OpenAI’s Codex, a language model optimized for writing code, violated the licenses of thousands of software developers whose code was used to train the model.

This lawsuit could open the gates for similar proceedings against other language models. Many believe that OpenAI and other tech giants train AI systems using massive datasets indiscriminately pulled from the internet, meaning that large language models might be stealing or repurposing copyrighted and potentially private material that appears in their datasets without licensing or attribution.

If OpenAI could be sued for Codex, the same logic would likely apply to ChatGPT. In the past year, OpenAI doubled the size of its legal team. “This might be the first case,” said Matthew Butterick, one of the attorneys representing the software developers, in an interview with Bloomberg Law, “but it will not be the last.”

OpenAI did not respond to a request for comment.

As ChatGPT and LLMs grow more popular, the question of what to do about these flaws only becomes more pressing.

‘A Life Without Limits’

When you’re watching a disembodied green icon spit out line after line of articulate, seemingly original content, it’s hard not to feel like you’re living in the future. It’s also hard not to worry that this technology’s capabilities will render your education obsolete.

So will ChatGPT transform learning as much as the hype would have us believe?

It’s undeniable that ChatGPT and other LLMs — through their ability to generate readable paragraphs and functioning programs — are revolutionary technology. But in their own way, so were calculators, the internet, Google search, Wikipedia, and Google Translate.

Every professor we talked to cited at least one of these tools as having catalyzed a similar paradigm shift within education. German and Comparative Literature professor John T. Hamilton likens ChatGPT to an “interactive Wikipedia.” Saghafian, the HKS professor, views it as playing a similar role to Google.

People have been adapting to these technologies for decades. Children growing up in the 2000s and 2010s were told, “Don’t trust everything you see on the internet.” Gradually, they became digitally literate. They saw the value, for example, in using Wikipedia as a starting point for research, but knew never to cite it.

Like Google and Wikipedia in their earliest stages, people are currently using ChatGPT to cut corners. But as experts highlight its flaws, teachers are beginning to promote a kind of AI literacy. (This would prove essential if an LLM professes its love for you or says it will hack you, as the AI-powered Bing Chat did to Kevin Roose in his New York Times article.)

To Barak, the computer science professor, a liberal arts education can help prepare students for an uncertain future. “The main thing we are trying to teach students is tools for thinking and for adapting,” Barak says. “Not just for the jobs that exist today, but also for the jobs that will exist in 10 years.”

While ChatGPT currently can’t follow simple logic, tell true from false, or write complex, coherent arguments, what about in a year? A decade? The amount of computing power devoted to training and deploying machine learning applications has grown exponentially over the past few years. In 2018, OpenAI’s state-of-the-art GPT-1 model had 117 million parameters. By 2020, the number of parameters in GPT-3 had grown to 175 billion, a roughly 1,500-fold increase in two years. With this pace of change, what new abilities might GPT-4 — OpenAI’s rumored next language model — have? And how will universities, not to mention society as a whole, adapt to this emerging technology?

Some instructors are already imagining future uses for AI that could benefit students and teachers alike.

“What I’d love to see is, for example, someone to make a French language chatbot that I could tell my students to talk to,” Gann, the French instructor, says. She says an app that could give students feedback on their accent or pronunciation would also be useful. Such technology, she explains, would allow students to improve their skills without the expensive attention of a teacher.

Saghafian believes that ChatGPT could act as “a sort of free colleague” that students could talk to.

Silicon Valley researchers and machine learning professors don’t know where the field is heading, but they are convinced that it’ll be big.

“I do believe there is going to be an AI revolution,” says Barak. In his view, AI-based tools will not make humans redundant, but rather change the nature of jobs on the scale of the industrial revolution.

As such, it’s impossible to predict exactly what the AI-powered future will look like. It would be as difficult as trying to predict what the internet would look like “in 1993,” says Frankle, the incoming CS professor.

Underlying these claims — and the perspectives of many professors we talked to — is an assumption that the cat is out of the bag, that AI’s future has already been set in motion and efforts to shape it will be futile.

Not everyone makes this assumption. In fact, some believe that shaping AI’s future is not only possible, but vital. “What’s needed is not something out of science fiction — it’s regulation, empowerment of ordinary people and empowerment of workers,” wrote University of Washington professor Emily M. Bender in a 2022 blog post.

Thus far, the AI industry has faced little regulation.

However, some fear any form of constraint could stifle progress. When asked for specific ideas about regulating AI, Saghafian, the public policy professor, muses that he wouldn’t want policymakers “to be too worried about the negative sides of these technologies, so that they end up blocking the future, positive side of it.”

In a regulation-free environment, Silicon Valley companies may not prioritize ethics or public knowledge. Frankle, who currently builds language models like ChatGPT as the chief scientist for an AI startup called MosaicML, explains how at startups, the incentive is not “to publish and share knowledge” — that’s a side bonus — but rather, “to build an awesome product.”

Hamilton, however, urges caution. Technology empowers us to live as easily and conveniently as possible, he explains, to live without limits: we can fly across the world, read any language just by pointing our smartphones at it, or learn any fact by tapping a few words into Google.

But limits, Hamilton says, are ultimately what allow us to ascribe meaning within our lives. We wouldn’t care about gold if it was plentiful, he points out, and accordingly, we wouldn’t care much about living if we lived forever. “We care because we’re so limited,” Hamilton says. “A life without limits is ultimately a life without value.”

As we continue to create more powerful technology, we may not only lose sight of our own limits, but also become dependent on our creations.

For instance, students might be tempted to rely on ChatGPT’s outputs for critical thinking. “That’s great,” Hamilton says. “But am I losing my ability to do precisely that for myself?”

We think back to the Okawa Elementary School essay. ChatGPT’s version wasn’t just worse than the student-written one because it repeated clichéd phrases, lacked variation in its sentence structure, or concluded by saying “in conclusion.”

ChatGPT’s draft was worse because ChatGPT did not understand why what transpired at Okawa Elementary School was a tragedy. It did not spend hours imagining such an unfathomable chain of events. It did not feel the frustration of its initial expressions falling short, nor did it painstakingly revise its prose to try to do it justice.

ChatGPT didn’t feel satisfied when, after such a process, it had produced a work approaching what it wanted. It did not feel fundamentally altered by its engagement with the cruel randomness of human suffering. It did not leave the assignment with a renewed gratitude for life.

ChatGPT, in other words, did not go through the human process of learning.

If we asked ChatGPT to write us a longform article about ChatGPT and the future of education, would it be worth reading? Would you learn anything?

— Associate Magazine Editor Hewson Duffy can be reached at [email protected] .

— Magazine writer Sam E. Weil can be reached at [email protected] .

Stanford Graduate School of Education

What do AI chatbots really mean for students and cheating?

The launch of ChatGPT and other artificial intelligence (AI) chatbots has triggered an alarm for many educators, who worry about students using the technology to cheat by passing its writing off as their own. But two Stanford researchers say that concern is misdirected, based on their ongoing research into cheating among U.S. high school students before and after the release of ChatGPT.  

“There’s been a ton of media coverage about AI making it easier and more likely for students to cheat,” said Denise Pope , a senior lecturer at Stanford Graduate School of Education (GSE). “But we haven’t seen that bear out in our data so far. And we know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology.”

Pope is a co-founder of Challenge Success , a school reform nonprofit affiliated with the GSE, which conducts research into the student experience, including students’ well-being and sense of belonging, academic integrity, and their engagement with learning. She is the author of Doing School: How We Are Creating a Generation of Stressed-Out, Materialistic, and Miseducated Students , and coauthor of Overloaded and Underprepared: Strategies for Stronger Schools and Healthy, Successful Kids.  

Victor Lee is an associate professor at the GSE whose focus includes researching and designing learning experiences for K-12 data science education and AI literacy. He is the faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning and director of CRAFT (Classroom-Ready Resources about AI for Teaching), a program that provides free resources to help teach AI literacy to high school students. 

Here, Lee and Pope discuss the state of cheating in U.S. schools, what research shows about why students cheat, and their recommendations for educators working to address the problem.

What do we know about how much students cheat?

Pope: We know that cheating rates have been high for a long time. At Challenge Success we’ve been running surveys and focus groups at schools for over 15 years, asking students about different aspects of their lives — the amount of sleep they get, homework pressure, extracurricular activities, family expectations, things like that — and also several questions about different forms of cheating. 

For years, long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one “cheating” behavior during the previous month. That percentage has stayed about the same or even decreased slightly in our 2023 surveys, when we added questions specific to new AI technologies, like ChatGPT, and how students are using it for school assignments.

Isn’t it possible that they’re lying about cheating? 

Pope: Because these surveys are anonymous, students are surprisingly honest — especially when they know we’re doing these surveys to help improve their school experience. We often follow up our surveys with focus groups where the students tell us that those numbers seem accurate. If anything, they’re underreporting the frequency of these behaviors.

Lee: The surveys are also carefully written so they don’t ask, point-blank, “Do you cheat?” They ask about specific actions that are classified as cheating, like whether they have copied material word for word for an assignment in the past month or knowingly looked at someone else’s answer during a test. With AI, most of the fear is that the chatbot will write the paper for the student. But there isn’t evidence of an increase in that.

So AI isn’t changing how often students cheat — just the tools that they’re using? 

Lee: The most prudent thing to say right now is that the data suggest, perhaps to the surprise of many people, that AI is not increasing the frequency of cheating. This may change as students become increasingly familiar with the technology, and we’ll continue to study it and see if and how this changes. 

But I think it’s important to point out that, in Challenge Success’ most recent survey, students were also asked if and how they felt an AI chatbot like ChatGPT should be allowed for school-related tasks. Many said they thought it should be acceptable for “starter” purposes, like explaining a new concept or generating ideas for a paper. But the vast majority said that using a chatbot to write an entire paper should never be allowed. So this idea that students who’ve never cheated before are going to suddenly run amok and have AI write all of their papers appears unfounded.

But clearly a lot of students are cheating in the first place. Isn’t that a problem? 

Pope: There are so many reasons why students cheat. They might be struggling with the material and unable to get the help they need. Maybe they have too much homework and not enough time to do it. Or maybe assignments feel like pointless busywork. Many students tell us they’re overwhelmed by the pressure to achieve — they know cheating is wrong, but they don’t want to let their family down by bringing home a low grade. 

We know from our research that cheating is generally a symptom of a deeper, systemic problem. When students feel respected and valued, they’re more likely to engage in learning and act with integrity. They’re less likely to cheat when they feel a sense of belonging and connection at school, and when they find purpose and meaning in their classes. Strategies to help students feel more engaged and valued are likely to be more effective than taking a hard line on AI, especially since we know AI is here to stay and can actually be a great tool to promote deeper engagement with learning.

What would you suggest to school leaders who are concerned about students using AI chatbots? 

Pope: Even before ChatGPT, we could never be sure whether kids were getting help from a parent or tutor or another source on their assignments, and this was not considered cheating. Kids in our focus groups are wondering why they can't use ChatGPT as another resource to help them write their papers — not to write the whole thing word for word, but to get the kind of help a parent or tutor would offer. We need to help students and educators find ways to discuss the ethics of using this technology and when it is and isn't useful for student learning.

Lee: There’s a lot of fear about students using this technology. Schools have considered investing significant amounts of money in AI-detection software, which studies show can be highly unreliable. Some districts have tried blocking AI chatbots from school wifi and devices, then repealed those bans because they were ineffective. 

AI is not going away. Along with addressing the deeper reasons why students cheat, we need to teach students how to understand and think critically about this technology. For starters, at Stanford we’ve begun developing free resources to help teachers bring these topics into the classroom as it relates to different subject areas. We know that teachers don’t have time to introduce a whole new class, but we have been working with teachers to make sure these are activities and lessons that can fit with what they’re already covering in the time they have available. 

I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.

Experts reveal the tell-tale sign that a student has used ChatGPT in an essay

Experts have revealed the tell-tale signs that an essay has been written by ChatGPT and not a student.

It comes after the rise of generative AI tools, like ChatGPT, has sparked concerns about cheating among pupils in the education sector.

Repetition of words, tautology and paragraphs starting with “however” are some tell-tale features, researchers said.

The writing style of the artificial intelligence tool is “bland” and “journalistic”, according to a Cambridge University Press and Assessment study.

Researchers compared essays written by three first-year undergraduate students, with the aid of ChatGPT, with 164 essays written by IGCSE students.

These essays were marked by examiners and the undergraduates were then interviewed and their essays were analysed.

The study found essays written with the help of ChatGPT performed poorly on analysis and comparison skills compared to non-ChatGPT-assisted essays.

But ChatGPT-assisted essays performed strongly on information and reflection skills.

Researchers identified a number of key features of the ChatGPT writing style, which included the use of Latinate vocabulary, repetition of words or phrases and ideas, and pleonasms.

Essays written with the help of ChatGPT were also more likely to use paragraphs starting with discourse markers like “however”, “moreover”, and “overall”, and to include numbered lists.

The researchers said ChatGPT’s default writing style “echoes the bland, clipped, and objective style that characterises much generic journalistic writing found on the internet”.

The report said: “The students found ChatGPT useful for gathering information quickly.

“However, they considered that complete reliance on this technology would produce essays of a low academic standard.”

Lead researcher Jude Brady, of Cambridge University Press and Assessment, said: “Our findings offer insights into the growing area of generative AI and assessment, which is still largely uncharted territory.

“Despite the small sample size, we are excited about these findings as they have the capacity to inform the work of teachers as well as students.”

She added: “We hope our research might help people to identify when a piece of text has been written by ChatGPT.

“For students and the wider population, learning to use and detect generative AI forms an increasingly important aspect of digital literacy.”

The College Essay Is Dead

Nobody is prepared for how AI will transform academia.

Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph:

The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

Pass or fail? A- or B+? And how would your grade change if you knew a human student hadn’t written it at all? Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples considered graduate-level, is available, complete with references, here.) Personally, I lean toward a B+. The passage reads like filler, but so do most student essays.
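For a sense of how low the barrier is: producing such a passage takes only a few lines of code. Here is a minimal sketch using the openai Python package as it existed when this piece was written (the pre-1.0 completions API; the model name and prompt are illustrative, not Sharples’s actual setup):

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Ask GPT-3 to draft an essay from a one-line prompt.
response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model of that era
    prompt="Write a short essay on why the construct of learning styles is problematic.",
    max_tokens=600,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```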

Sharples’s intent was to urge educators to “rethink teaching and assessment” in light of the technology, which he said “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.

The world of generative AI is progressing furiously. Last week, OpenAI released an advanced chatbot named ChatGPT that has spawned a new wave of marveling and hand-wringing, plus an upgrade to GPT-3 that allows for complex rhyming poetry; Google previewed new applications last month that will allow people to describe concepts in text and see them rendered as images; and the creative-AI firm Jasper received a $1.5 billion valuation in October. It still takes a little initiative for a kid to find a text generator, but not for long.

The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.” Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.

A chasm has existed between humanists and technologists for a long time. In the 1950s, C. P. Snow gave his famous lecture, later the essay “The Two Cultures,” describing the humanistic and scientific communities as tribes losing contact with each other. “Literary intellectuals at one pole—at the other scientists,” Snow wrote. “Between the two a gulf of mutual incomprehension—sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other.” Snow’s argument was a plea for a kind of intellectual cosmopolitanism: Literary people were missing the essential insights of the laws of thermodynamics, and scientific people were ignoring the glories of Shakespeare and Dickens.

The rupture that Snow identified has only deepened. In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.” Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before. He probably didn’t imagine there was much to think about.

The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.

These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences. Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.

As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide. As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone. Needless to say, humanists’ understanding of technology is partial at best. The state of digital humanities is always several categories of obsolescence behind, which is inevitable. (Nobody expects them to teach via Instagram Stories.) But more crucially, the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.

Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine. In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?

The case for the value of humanities in a technologically determined world has been made before. Steve Jobs always credited a significant part of Apple’s success to his time as a dropout hanger-on at Reed College, where he fooled around with Shakespeare and modern dance, along with the famous calligraphy class that provided the aesthetic basis for the Mac’s design. “A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem,” Jobs said. “The broader one’s understanding of the human experience, the better design we will have.” Apple is a humanistic tech company. It’s also the largest company in the world.

Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.

And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.

And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.

The humanists will need to understand natural-language processing because it’s the future of language, but also because there is more than just the possibility of disruption here. Natural-language processing can throw light on a huge number of scholarly problems. It is going to clarify matters of attribution and literary dating that no system ever devised will approach; the parameters in large language models are much more sophisticated than the current systems used to determine which plays Shakespeare wrote, for example. It may even allow for certain types of restorations, filling the gaps in damaged texts by means of text-prediction models. It will reformulate questions of literary style and philology; if you can teach a machine to write like Samuel Taylor Coleridge, that machine must be able to inform you, in some way, about how Samuel Taylor Coleridge wrote.

The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.


Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers

  • By Miles Klee

A number of seniors at Texas A&M University–Commerce who already walked the stage at graduation this year have been temporarily denied their diplomas after a professor ineptly used AI software to assess their final assignments, the partner of a student in his class — known as DearKick on Reddit — claims to Rolling Stone.

Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an “X” in the course, Mumm explained, because he had used “Chat GTP” (the OpenAI chatbot is actually called “ChatGPT”) to test whether they’d used the software to write the papers — and the bot claimed to have authored every single one.


There’s just one problem: ChatGPT doesn’t work that way. The bot isn’t made to detect material composed by AI — or even material produced by itself — and is known to sometimes emit damaging misinformation. With very little prodding, ChatGPT will even claim to have written passages from famous novels such as Crime and Punishment. Educators can choose among a wide variety of effective AI and plagiarism detection tools to assess whether students have completed assignments themselves, including Winston AI and Content at Scale; ChatGPT is not among them. And OpenAI’s own tool for determining whether a text was written by a bot has been judged “not very accurate” by a digital marketing agency that recommends tech resources to businesses.


Mumm’s email was shared on Reddit by a user with the handle DearKick, who identified himself as the fiancé of one of the students to receive a failing grade for submitting supposedly AI-generated essays. He claims to Rolling Stone that his partner had never heard of ChatGPT herself and was baffled by the accusation, noting that “she feels even worse considering it’s something she knows nothing about.” She immediately “reached out to the dean and CC’d the president of the university,” DearKick alleges, but did not immediately receive assistance and went to plead her case with administrators in person on Tuesday. DearKick adds that Mumm allegedly flunked “several” whole classes in similar fashion rather than question the validity of his methods for detecting cheaters.


DearKick tells Rolling Stone that their fiancée’s meeting on Tuesday with the university’s Dean of Agricultural Science “should clear things up, I hope,” and speculates that Mumm had little familiarity with chatbots before attempting to run student papers through one. In an update to his original post, he revealed that while at least one student has already been exonerated through Google Docs timestamps and received an apology from Mumm, another two had admitted to using ChatGPT earlier in the semester, which “no doubt greatly complicates the situation for those who did not.”

In a statement sent to Rolling Stone on Wednesday, Texas A&M University-Commerce said they were investigating the incident and developing policies related to AI in the classroom. The university denied that anyone had received a failing grade. “A&M-Commerce confirms that no students failed the class or were barred from graduating because of this issue,” the school noted. “Dr. Jared Mumm, the class professor, is working individually with students regarding their last written assignments. Some students received a temporary grade of ‘X’ — which indicates ‘incomplete’ — to allow the professor and students time to determine whether AI was used to write their assignments and, if so, at what level.” The university also confirmed that several students had been cleared of any academic dishonesty.

“University officials are investigating the incident and developing policies to address the use or misuse of AI technology in the classroom,” the statement continued. “They are also working to adopt AI detection tools and other resources to manage the intersection of AI technology and higher education. The use of AI in coursework is a rapidly changing issue that confronts all learning institutions.”

UPDATE: This article was updated at 2:17 p.m. ET, May 17, to include a statement from Texas A&M University-Commerce.



Some students are using ChatGPT to cheat — here's how schools are trying to stop it

Niagara College says it has seen students use ChatGPT on assignments but can't say if it is a trend.


School boards and at least one college in Hamilton and surrounding areas are on high alert for any students trying to cheat using a new artificial intelligence tool called ChatGPT.

ChatGPT is chatbot software that uses massive databases to generate original, human-sounding responses to prompts from users.

While it doesn't always produce correct answers, the tool has gained notoriety because some people have used it to write essays and other assignments almost instantly.

"It is something we've seen at the college but at this point it would be difficult to speak to trends," said Niagara College Canada spokesperson Michael Wales.

"We do know that AI [artificial intelligence]   technology being used this way is something that's happening across the post-secondary sector … We've started to work with our faculty — providing resources and development — to build understanding of the technology and its impact on academic integrity."

The college is one of numerous local education institutions implementing training or measures to prevent students from using the tool.

School boards educating staff about AI tool

Hamilton's public school board said ChatGPT is currently blocked on all Board devices and within Wi-Fi networks to restrict use for students and staff.

"As we have experienced in the past with emerging technologies, we are monitoring the use of ChatGPT as it relates to the education sector, but we are not in a position to give further comment on its use at this time," said Shawn McKillop, spokesperson for Hamilton-Wentworth District School Board.

Niagara Catholic District School Board said it's aware of the "growing popularity" of ChatGPT and "shares the concerns of other school boards of its use among students in our schools."

"Staff will continue to monitor ChatGPT, particularly at the intermediate and secondary level, to ensure that if this new technology is being used, it is being used in a way that supports learning in a positive, meaningful way," said spokesperson Jennifer Pellegrini.

  • From instant essays to phishing scams, ChatGPT has experts on edge
  • Opinion Regulating artificial intelligence: Things are about to get a lot more interesting

Hamilton's Catholic school board said it is in the "very early awareness stages" of the tool.

"We have presentations scheduled with our curriculum team, school administration and teachers over the next several weeks," said Hamilton-Wentworth Catholic District School Board chair Pat Daly, adding the board isn't promoting the app.

Daly said staff can mitigate the use of the tool by moving culminating assignments and exams into the classroom and assigning less homework.

"There is also some concern about user agreements," he said. These include: users have to be older than 13, and where all of the data that is generated with Chat GPT is stored.

Grand Erie District School Board said it is exploring the pros and cons of technology like ChatGPT.

WATCH: Students share their thoughts about ChatGPT and AI tools for assignments

Brock University, McMaster University, Mohawk College and Niagara's public school board didn't respond before deadline.

This all comes as the creator of ChatGPT launched a tool — AI Text Classifier — on Tuesday to help educators detect whether someone used artificial intelligence to complete an assignment.

OpenAI cautions that its new tool may not catch everyone who uses artificial intelligence.

The method for detecting AI-written text "is imperfect and it will be wrong sometimes," said Jan Leike, head of OpenAI's alignment team tasked to make its systems safer.

"Because of that, it shouldn't be solely relied upon when making decisions," Leike said.

OpenAI is also launching a paid version of ChatGPT in the U.S. that will cost $20 a month.

ABOUT THE AUTHOR


Bobby Hristova is a journalist with CBC Hamilton. He reports on all issues, but has a knack for stories that hold people accountable, stories that focus on social issues and investigative journalism. He previously worked for the National Post and CityNews in Toronto. You can contact him at [email protected].


With files from The Associated Press


7 Surefire Signs That ChatGPT Has Written an Essay Revealed


Researchers at the University of Cambridge have revealed the seven telltale signs that a piece of written content was generated by ChatGPT , after carefully analyzing more than 150 essays written by high school students and undergraduates.

They found that ChatGPT loves the Oxford comma, repeats phrases, and spits out tautological statements practically empty of meaning at a much higher frequency than humans do.

While the findings are interesting, the sample size is quite small. There's also no guarantee that the linguistic habits and techniques identified couldn’t and wouldn't be used by a human. What’s more, AI content detection tools are largely unreliable; there’s still no way to know for certain that any given written content is AI-generated.


The 7 Telltale Signs Content is AI-Generated

The researchers at Cambridge compared 164 essays written by high school students with four essays written with a helping hand from ChatGPT.

The ChatGPT-assisted essays were generally more information-heavy and had more reflective elements, but the markers at Cambridge found that they lacked the level of comparison and analysis typically found in human-generated content. 

According to UK-based publication The Telegraph, which broke the story, the researchers identified seven key indicators of AI content (a rough programmatic scan for a few of them is sketched after the list):

  • Frequent use of Latin root words and “vocabulary above the expected level”
  • Paragraphs starting with single words like “however”, followed by a comma
  • Lots of numbered lists with colons
  • Unnecessary clarificatory language (e.g. “true fact”)
  • Tautological language (“Let’s come together to unite”)
  • Repetition of the same word or phrase twice
  • Consistent and frequent use of Oxford commas in sentences
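
None of these markers amounts to proof on its own, but a few of them are easy to count mechanically. Below is a minimal, hypothetical Python sketch — invented here for illustration, not part of the Cambridge study — that tallies some of the surface features from the list above:

```python
# A crude scan for a few of the surface markers listed above. These
# heuristics are loose approximations invented for illustration; they
# are not a validated AI detector and will miscount in edge cases.
import re
from collections import Counter

DISCOURSE_MARKERS = ("however,", "moreover,", "overall,")

def surface_markers(text: str) -> dict:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # Paragraphs opening with a lone discourse marker plus a comma
        "marker_openings": sum(
            p.lower().startswith(DISCOURSE_MARKERS) for p in paragraphs
        ),
        # Commas directly before "and"/"or" (a rough Oxford-comma proxy)
        "oxford_commas": len(re.findall(r",\s+(?:and|or)\b", text)),
        # Numbered list items that introduce something with a colon
        "numbered_colons": len(re.findall(r"^\s*\d+[.)].*:", text, re.M)),
        # Most-repeated longer words, a proxy for repetitive phrasing
        "top_repeats": Counter(w for w in words if len(w) > 6).most_common(3),
    }

print(surface_markers(
    "Overall, the essay is thorough.\n\nHowever, a true fact remains."
))
```

A real study would weigh such counts against a baseline of human writing; raw tallies mean little without that comparison.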

Are There Any Other Ways to Spot ChatGPT Plagiarism?

Yes and no. There are many tools online that claim to be able to detect AI content, but when I tested a wide range of them last year, I found many to be wildly inaccurate.

For instance, OpenAI’s own text classifier – which was eventually shut down because it performed so poorly – was unable to identify that text written by ChatGPT (effectively itself) was AI-generated.

Turnitin has been using automated processes to detect plagiarized content in academic work for years, and it has also developed a powerful AI content checker. The company has always maintained that verdicts arrived at by its tools should be treated as an indication, not a cast-iron accusation.

“Given that our false positive rate is not zero,” Turnitin explains in a blog post discussing its AI content detection capabilities, “you as the instructor will need to apply your professional judgment, knowledge of your students, and the specific context surrounding the assignment.”

None of these tools are infallible – and worse still, many of the free ones you’ll find lurking at the top of the Google Search results are completely and utterly useless.

Is It Wrong to Use AI for School or College Work?

While asking AI tools like ChatGPT and Gemini to write you an essay isn’t quite “plagiarism” in the same way copying content written by other people and passing it off as your own is, it’s certainly not advised.

Whether it’s objectively plagiarism or not is likely irrelevant – the educational institution you’re enrolled in has probably created guidelines explicitly banning generative AI. Many universities have already taken a similar approach to peer review and other academic processes.

Besides, the whole point of writing an essay is to consider the range of ideas and views on the topic you’re writing about and evaluate them using your head. Getting an AI to do it for you defeats the whole point of writing the essay in the first place.

Our advice – considering the consequences of being accused of plagiarism while at university – is to stick to the rules. Who knows – you might learn something while you're at it!


The Tell-Tale Signs Students Are Using ChatGPT To Help Write Their Essays


Researchers have identified tell-tale signs that students have used AI to help write their essays.

Excessive use of words derived from Latin, using unnecessary words and repeated use of the Oxford comma are among the hallmarks of using a generative chatbot to complete coursework, researchers found.

But while students taking part in the trial said they found using AI had some advantages, they acknowledged that relying on it completely would likely result in work of a low standard.

The impact of generative AI on education has been exercising educators since OpenAI launched ChatGPT — a chatbot that generates text by predicting which words are likely to follow a particular prompt — in November 2022.

While some regard AI as a potentially transformative technology, creating a more inclusive and personalized education, for others it makes it impossible to trust coursework grades. Even academics have not been immune to using AI to enhance their work.

Now researchers at Cambridge University have tried to see if they could identify characteristics of AI’s writing style that could make it easy to spot.

And although their trial was small scale, they say it has the potential to help teachers work out which students used AI in their essays, and which did not.

Three undergraduates were enlisted to write two essays each with the help of ChatGPT, which were then compared with essays on the same topic written by 164 high school students. The undergraduates were then interviewed about their experience of using AI.


(Undergraduates were included in the study because ChatGPT requires users to be 18 or over).

The ChatGPT essays performed better on average, being marked particularly highly for ‘information’ and ‘reflection’. They did poorly, however, for ‘analysis’ and ‘comparison’ — differences the researchers suggest reflect the chatbot’s strengths and weaknesses.

But when it comes to style, there were a number of features that made the ChatGPT-assisted versions easily recognizable.

The default AI style “echoes the bland, clipped, and objective style that characterizes much generic journalistic writing found on the internet,” according to the researchers, who identified a number of key features of ChatGPT content:

  • A high frequency of words with a Latin root, particularly multisyllable words and a vocabulary above the expected level;
  • Paragraphs starting with specific markers, such as ‘however’, ‘moreover’ and ‘overall’, followed by a comma;
  • Numbered lists, with items followed by colons;
  • Pleonasms: using unnecessary words, such as ‘free gift’ or ‘true fact’;
  • Tautology: saying the same thing twice, such as ‘We must come together to unite’;
  • Repeating words or phrases;
  • Consistent use of Oxford commas, a comma used after the penultimate item in a list, before ‘and’ or ‘or’, for example “ChatGPT has many uses for teaching, learning at home, revision, and assessment”.

Although the students taking part in the trial used ChatGPT to different extents, from copying and pasting whole passages to using its output to prompt further research, there was broad agreement that it was useful for gathering information quickly, and that it could be integrated into essay writing through specific prompts, on topics and essay structure, for example.

But the students also agreed that using AI to write the essay would produce work of a low academic standard.

“Despite the small sample size, we are excited about these findings as they have the capacity to inform the work of teachers as well as students,” said Jude Brady of Cambridge University Press and Assessment, lead researcher on the study.

Future work should include larger and more representative sample sizes of students, she said. Learning to use and detect generative AI was an increasingly important part of digital literacy, she added.

“We hope our research might help people to identify when a piece of text has been written by ChatGPT,” she said.

Nick Morrison


ChatGPT: Tell-tale signs of essays written with AI tools revealed by researchers

Repetition of words, tautology and paragraphs starting with “however” are some tell-tale features of ChatGPT’s writing style, researchers have found.

The writing style of the artificial intelligence tool is “bland” and “journalistic”, according to a Cambridge University Press and Assessment study.

It comes after the rise of generative AI tools, like ChatGPT, has sparked concerns about cheating among pupils in the education sector.

Researchers compared essays written by three first-year undergraduate students with the aid of ChatGPT against 164 essays written by IGCSE students.

These essays were marked by examiners and the undergraduates were then interviewed and their essays were analysed.

The study found essays written with the help of ChatGPT performed poorly on analysis and comparison skills compared to non-ChatGPT-assisted essays.

But ChatGPT-assisted essays performed strongly on information and reflection skills.

Researchers identified a number of key features of the ChatGPT writing style, which included the use of Latinate vocabulary, repetition of words or phrases and ideas, and pleonasms.

Essays written with the help of ChatGPT were also more likely to use paragraphs starting with discourse markers like “however”, “moreover”, and “overall”, and numbered lists with items.

The researchers said ChatGPT’s default writing style “echoes the bland, clipped, and objective style that characterises much generic journalistic writing found on the internet”.

The report said: “The students found ChatGPT useful for gathering information quickly.

“However, they considered that complete reliance on this technology would produce essays of a low academic standard.”

Lead researcher Jude Brady, of Cambridge University Press and Assessment, said: “Our findings offer insights into the growing area of generative AI and assessment, which is still largely uncharted territory.

“Despite the small sample size, we are excited about these findings as they have the capacity to inform the work of teachers as well as students.”

She added: “We hope our research might help people to identify when a piece of text has been written by ChatGPT.

“For students and the wider population, learning to use and detect generative AI forms an increasingly important aspect of digital literacy.”


College professors are considering creative ways to stop students from using AI to cheat

  • Some professors say students are using new tech to pass off AI-generated content as their own.
  • Academics are concerned that colleges are not set up to combat the new style of cheating.
  • Professors say they're considering switching back to written assessments and oral exams.


College professors are feeling the heat when it comes to AI. 

Some professors say students are using OpenAI's buzzy chatbot, ChatGPT , to pass off AI-generated content as their own.

Antony Aumann, a philosophy professor at Northern Michigan University, and Darren Hick, a philosophy professor at Furman University, both say they've caught students submitting essays written by ChatGPT.

The issue has led to professors considering creative ways to stamp out the use of AI in colleges.

Blue books and oral exams 

"I'm perplexed about how to handle AI going forward," Aumann told Insider.

He said one way he was considering tackling the problem was shifting to lockdown browsers, a type of software that aims to prevent students from cheating when taking exams online or remotely.

Other academics are considering more drastic action. 

Related stories

"I'm planning on going medieval on the students and going all the way back to oral exams," Christopher Bartel, a philosophy professor at Appalachian State University, said. "They can AI generate text all day long in their notes if they want, but if they have to be able to speak it, that's a different thing."

Bartel said there were inclusivity concerns around this, however. "Students who have deep social anxieties about speaking in public is something we'll have to figure out."

"Another way to deal with AI is for faculty to avoid giving students assignments that are very well covered," he said. "If the students have to be able to engage with a unique idea that hasn't been covered very deeply in other places there isn't going to be a lot of text that the AI generator can draw from."

Aumann said some professors were suggesting going back to traditional written assessments like blue books.

"Since the students would be writing their essays in class by hand, there would not be an opportunity for them to cheat by consulting the chatbot," he said.

'The genie is out of the bottle'

Although there were red flags in the AI-generated essays that alerted both Aumann and Hick to the use of ChatGPT, Aumann thinks these are just temporary.  

He said the chatbot's essays lacked individual personality but after playing around with it, he was able to get it to write less formally. "I think that any of the red flags we have are just temporary as far as students can get around," he said.

"My worry is the genie is out of the bottle," said Hick, who believed the technology was going to get better. "That's kind of inevitable," he said. 

Bartel agreed that students could get away with using AI very easily. "If they ask the program to write one paragraph summarizing an idea, then one paragraph summarizing another idea, and edit those together it would be completely untraceable for me and it might even be a decent essay," he said.

A representative for OpenAI told Insider they didn't want ChatGPT to be used for misleading purposes.

"Our policies require that users be upfront with their audience when using our API and creative tools like DALL-E and GPT-3," the representative said. "We're already developing mitigations to help anyone identify text generated by that system."

Although there are AI detection programs that offer an analysis of how likely the text is to be written by an AI program, some academics are concerned this wouldn't be enough to prove a case of AI plagiarism.

"We will need something to account for the fact that we now have an imperfect way of testing whether or not something is a fake," Bartel said. "I don't know what that new policy is."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.


Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

In addition to other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first-century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.
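
To make the predictive-text analogy concrete, here is a deliberately tiny bigram model in Python — a toy invented for illustration, orders of magnitude simpler than the transformer networks behind ChatGPT, trained on a made-up scrap of text:

```python
# Toy "predictive text": choose the next word based on how often it
# followed the current word in the training text. This is the statistical
# core of the analogy; models like ChatGPT use far more context and data.
import random
from collections import defaultdict

def train(text: str) -> dict:
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the essay is due the essay is late the professor is reading the essay"
print(generate(train(corpus), "the"))
```

The toy can only recombine sequences it has seen; scale that idea up across a large slice of the internet and you get fluent, novel-seeming text with no understanding behind it.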

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it comes from a credible source — except it may well not.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five paragraph essay based on the following topic” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
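
For students comfortable with a little scripting, the same brainstorming prompts can be sent through OpenAI's API instead of the chat interface. Here is a minimal sketch, assuming the official openai Python package (1.x) with an API key set in the environment; the model name is illustrative and subject to change:

```python
# Minimal sketch: sending one of the brainstorming prompts above to
# OpenAI's chat completions API. Assumes `pip install openai` (1.x SDK)
# and OPENAI_API_KEY set in the environment; the model name is one
# currently available option, not a fixed recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Please give me five ideas for essays I can write "
                   "on topics related to WWII.",
    }],
)

print(response.choices[0].message.content)
```

Swapping in the outline or title prompts from the sections above works the same way; the point, as with the chat interface, is to use the output as a starting point rather than a finished product.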

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But, outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. It’s always the best approach to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center , where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions. 

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.

