ChatGPT Cheating: What to Do When It Happens


The latest version of ChatGPT has only been around for a few months. But Aaron Romoslawski, the assistant principal at a Michigan high school, has already seen a handful of students trying to pass off writing produced by the artificial-intelligence-powered tool as their own work.

The signs are almost always obvious, Romoslawski said. Typically, a student will have been turning in work of a certain quality throughout the year, and then “suddenly, we’re seeing these much higher quality assignments pop up out of nowhere,” he said.

Romoslawski and his colleagues don’t start with a punitive response, however. “We see it as an opportunity to have a conversation.”

Those “don’t let the robot do your homework” talks are becoming all too common in schools these days. More than a quarter of K-12 teachers have caught their students cheating using ChatGPT, according to a recent survey by study.com, an online learning platform.

What’s the best way for educators to handle this high-tech form of plagiarism? Here are six tips drawn from educators and experts, including a handy guide created by CommonLit and Quill, two education technology nonprofits focused on building students’ literacy skills.

1. Make your expectations very clear

Students need to know what exactly constitutes cheating, whether AI tools are involved or not.

“Every school or district needs to put stakes in the ground [on a] policy around academic dishonesty, and what that means specifically,” said Michelle Brown, the founder and CEO of CommonLit. Schools can decide how much or how little students can rely on AI to make cosmetic changes or do research, she said, and should make that clear to students. She recommended “the heart of the policy [be] about allowing students to do intellectually rigorous work.”

2. Talk to students about AI in general and ChatGPT in particular

If it appears a student may have passed off ChatGPT’s work as their own, sit down with them one on one, CommonLit and Quill recommend. Then talk about the tool and AI in general. Questions could include: Have you heard of ChatGPT? What are other students saying about it? What do you think it should be used for? Discuss the promises—and potential pitfalls—of artificial intelligence.

“One of the big concerns right now is that teachers want to encourage curiosity about AI,” said Peter Gault, Quill’s founder and executive director. Strict discipline at this point “doesn’t sit right with teachers where there’s a lot of natural curiosity here.”

Romoslawski uses that approach. And so far, he hasn’t had a student try to use ChatGPT on an assignment twice. “We’ve gotten to the point where it’s a conversation and students are redoing the assignment in their own words,” he said.

3. If students use ChatGPT for an assignment, they must attribute what material they used from it

If students are allowed to use ChatGPT or another AI tool for research or other help, let them know how and why they should credit that information, Brown said. Since users can’t link back to a ChatGPT response, she suggested students share the prompt they used to generate the information in their citation.

When Romoslawski and his colleagues suspect a student used ChatGPT to complete an assignment when they weren’t supposed to, he also brings up citation, in part as a way into the conversation.

“We ask the students ‘did you use any resources that you don’t cite?’” he said. “And often, the student says ‘yes.’ And so, then it creates a conversation about how to properly cite and attribute and why we do that.”

4. Ask students directly if they used ChatGPT

Don’t beat around the bush if you suspect a student may have used AI to cheat. Ask them in a very straightforward way if they did, CommonLit and Quill say.

If students say “yes,” Romoslawski likes to get a sense of why. “More often than not, the student was just struggling on the assignment. They had a roadblock. They didn’t know what to do,” he said. “They were crunched for time, because we’re a high-achieving high school and our students are taking some pretty rigorous courses. This was their third homework assignment of the night and they just wanted to get through it.”

If the student says “no,” but you still suspect them of cheating, ask if they got other help with the assignment. If they still say “no,” explain your concerns by pointing out differences between the work they turned in and their previous writing, CommonLit and Quill suggest.

5. Don’t rely on ChatGPT detectors alone to determine if there was cheating

There are a number of tools—including one from OpenAI, ChatGPT’s developer—that purport to be able to distinguish an AI-crafted story or essay from one written by a human. But most of these detectors don’t publish their accuracy rates. And those that do are ineffective about 10 to 20 percent of the time.

“You can’t fully rely on that as the sole proof of academic dishonesty,” Brown said.

6. Make it clear why learning to write on your own is important

Students in general, and particularly students who take advantage of AI to cheat, need to understand what they are missing out on when they take a technology-enabled shortcut. Educators should try to persuade students that learning to write on their own will help them reason and think, or be critical to future job success, Gault said.

But others will need a more immediate incentive. The strongest argument one teacher came up with, according to Quill’s Gault? Tell students that learning to write will make them more persuasive, and therefore, “you can convince your parents to do what you want.”

A version of this article appeared in the March 08, 2023 edition of Education Week as ChatGPT Cheating: What to Do When It Happens


The ultimate homework cheat? How teachers are facing up to ChatGPT

ChatGPT took the internet by storm when it launched in late 2022, impressing by generating stories, poems, coding solutions, and beyond. Its potential to answer questions has seen New York City's education board ban it from schools - but could it really provide a homework shortcut?

By Tom Acres, technology reporter

Monday 9 January 2023 13:11, UK


"Have I seen this somewhere before?"

It's a question teachers have had to ask themselves while marking assignments since time immemorial.

But never mind students trawling through Wikipedia, or perusing SparkNotes for some Great Gatsby analysis; the back end of 2022 saw another challenge emerge for schools: ChatGPT.

The online chatbot, which can generate realistic responses on a whim, took the world by storm with its ability to do everything from solving computer bugs to helping write a Sky News article about itself.

Last week, concerned about cheating students, America's largest education department banned it.

New York City's teaching authority said while it could offer "quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success".

Of course, that's not going to stop pupils using it at home - but could they really use it as a homework shortcut?


Teachers vs ChatGPT - round one

First up, Sky News asked a secondary school science teacher from Essex, who was not familiar with the bot, to feed ChatGPT a homework question.

Galaxies contain billions of stars. Compare the formation and life cycles of stars with a similar mass to the Sun to stars with a much greater mass than the Sun.

It's fair to say that ChatGPT let the mask slip almost immediately.


Asking ChatGPT to answer the same question "to secondary school standard" prompted another detailed response.

The teacher's assessment?

"Well, this is definitely more detailed than any of my students. It does go beyond what you'd expect for GCSE, so I would be very suspicious if someone submitted it. I would assume that they'd copied and pasted from somewhere."

Teachers vs ChatGPT - round two

Next was a Kent primary school teacher, also unfamiliar with ChatGPT, who gave it a recent homework task.

Research a famous Londoner and write a biography of their lives, including their childhood and their career achievements.

No problem, said ChatGPT, though it's fair to say that any nine-year-old who submitted the answer below is either being fast-tracked to university or going straight into a lunchtime detention.


"Even just glancing at that, I'd say they copied it straight off the internet," said the teacher.

"No 11-year-old knows the word tumultuous."

'Key decisions' facing schools

So just as copying straight from a more familiar website sets alarm bells ringing for teachers, so too would lifting verbatim from ChatGPT.

But pupils are among the most internet-savvy people around, and ChatGPT's ability to instantly churn out seemingly textbook-level responses will still need to be monitored, teachers say.

Jane Basnett, director of digital learning at Downe House School in Berkshire, told Sky News the chatbot presented schools with some "key decisions" to make.

"As with all technology, schools have to teach students how to use technology properly," she said.

"So, with ChatGPT, students need to have the knowledge to know whether the work produced is any good, which is why we need to teach students to be discerning."


Given its rapid emergence, Ms Basnett is already exploring how her school's anti-plagiarism systems will cope with auto-generated essays.

But just as teachers must consider teaching students about the benefits and pitfalls of using AI, Ms Basnett said her colleagues should also be open to its potential.

"ChatGPT is incredibly powerful and as a teacher I can see some benefits," she said.

"For example, I can type in a request to create a series of lessons on a particular grammar point, and it will create a lesson for me. It would take a teacher to analyse the created lesson and amend it, because the suggested lesson, whilst not bad, was not ideal. But, the key elements were there and it could be really useful.

"I could imagine using a created essay from ChatGPT and working through it with my students to examine the merits and faults of the essay."


Dr Peter Van der Putten, assistant professor of AI at Leiden University in the Netherlands, said institutions which chose to prohibit or ignore the technology would only be burying their head in the sand.

"It's there, just like how Google is there," said Dr Van der Putten.

"You can write it into your policies for preventing plagiarism, but it's a reality that the tool exists.

"Sometimes you do need to embrace these things, but be very clear about when you don't want it to be used."

'Bull****er on steroids'

For students and teachers alike, it's an opportunity to improve their digital literacy.

While it has proved its worth when tasked with being creative, such as problem-solving or coming up with ideas, true comprehension and understanding remain beyond it.

Developer OpenAI acknowledges answers can be "overly verbose" and even "incorrect or nonsensical", despite sounding legitimate in most cases, like some sort of desperate, underprepared job interviewee.

As Dr Van der Putten says, ChatGPT is often little more than a "bull*****er on steroids".

Teaching students about those limitations is the best way to ensure they don't over-rely on it - even in a pinch.


Universities, schools react to student use of generative AI programs including ChatGPT

Uni student Daniel hesitates when asked if he has used ChatGPT to cheat on assignments before.

His answer is "no", but the 22-year-old feels the need to explain it further.

"I don't think it's cheating," he said.

"As long as you accredit it and use it for like a foundation for your assignment I think it's fine."


Schools and universities have been scrambling to keep up since ChatGPT and other generative AI language programs were released in late 2022.

University student Lan Lang, 18, said quite a few people used generative AI for assessments such as English assignments.

"I do get Chat to like explain stuff to me if teachers don't really explain it that well," Lan Lang said.


She said she used AI detection software on her work.

"We put it through Turnitin, which just basically detects if you've used AI, or if you've copied off anyone else's work," she said.

Caught out in schools

High school teacher Ryan Miller said he wasn't seeing a lot of generative AI used in the Year 12 and Year 8 classes he taught but understood from colleagues other age groups were using it.


"What I hear, when I'm in the staff room, is that a lot of Year 9s, 10s, [and] 11s are pushing the boundaries," Mr Miller said.

He said Year 12 students tended to be more careful after being warned at the start of the year and constantly reminded of consequences.

"Basically, they're told if their work is seen to be made ... predominantly with AI, that it won't be assessed," he said.

Mr Miller said Year 8s, being a little newer to the school, hadn't used it as much.

He said teachers tended to give students a warning if they were detected using generative AI.

"And nine times out of 10 they'll probably own up to it and say, 'Yeah, look, it wasn't ... 100 per cent my own work'," he said. 

He said students would rewrite the work so it could be assessed again.

"But it's sort of a one warning per kid, per year for most teachers, I think," he said.

Fellow teacher Hugh Kinnane said generative AI was probably "pretty rife" in assignment work.

He said he most regularly saw it cropping up with students who were trying to avoid doing any work.

"And then it's a last-minute job," he said.


Drawing the line

University of Adelaide Deputy Vice-Chancellor Academic Jennie Shaw said while her university embraced the use of AI, it could still be used to cheat.

"So we're saying, of course, that is not allowed," Dr Shaw said.

She said generative AI was included in academic integrity modules for first-year students.

"We make it really clear to students what is OK and what is not OK," she said.

Dr Shaw said there were instances when students were encouraged to use generative AI and then critique the quality of its answer.

"What we are asking our students and our staff to do is to reference when they do use it," she said.

She said it was a requirement that as much content as possible was checked by similarity detection software.

According to Turnitin's website, the company is committed to a false positive rate of less than 1 per cent, to ensure students are not falsely accused of misconduct. Turnitin is used by the University of Adelaide, as well as many other universities across Australia, to detect AI-generated content.

AI arms race

The software has put students at the centre of a battle for superiority between programs generating answers for their assignments and those designed to catch them out.

And according to Australian Institute for Machine Learning senior lecturer Feras Dayoub, some are getting caught in the crossfire.


He said companies that created AI chatbots were trying to be undetectable while companies that created AI detection software wanted to detect everything.

"There will be a lot of false positives," Dr Dayoub said.

He said it could be an unpleasant experience for the student if the detector was wrong.
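The base-rate arithmetic behind that concern can be sketched in a few lines. The numbers below are hypothetical, chosen only to echo the sub-1 per cent false-positive commitment cited above; even a detector that honours it will flag a meaningful number of honest students, because most submissions are human-written.

```python
# Hypothetical base-rate sketch: all figures are illustrative assumptions,
# not Turnitin's own numbers.

def expected_flags(n_students: int, share_ai: float, tpr: float, fpr: float):
    """Return (AI submissions correctly flagged, honest students wrongly flagged)."""
    n_ai = n_students * share_ai
    n_human = n_students - n_ai
    return n_ai * tpr, n_human * fpr

true_flags, false_flags = expected_flags(
    n_students=10_000,  # assumed cohort size
    share_ai=0.10,      # assume 10% of submissions lean on AI
    tpr=0.80,           # assume the detector catches 80% of those
    fpr=0.01,           # the sub-1% false-positive rate cited above
)
print(true_flags, false_flags)  # 800.0 correct flags, 90.0 wrongful flags
```

Under these assumptions, roughly one flagged student in ten would be innocent; the rarer AI use actually is, the worse that ratio gets, which is the "crossfire" Dr Dayoub describes.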


University student Ethan, 19, said single words were sometimes highlighted in his Turnitin submissions.

"It can be a bit inaccurate," Ethan said.

Dr Shaw said she understood the detection software had its faults.

"We would find probably two thirds of anything they pick up saying there's some unacceptably high levels of similarity here is often just picking up patterns in language," she said.

"I know some universities have chosen to turn it off because it does turn up lots of false positives.

"We're choosing to use it at this point."

Changing education

The Department of Education released a nationwide framework in December last year for the use of generative AI in schools.

Dr Shaw said the technology was changing the way teachers taught and students learned.

"But we still need students to have deep knowledge," she said. 

"We need them to know how to use the tools in their profession. 

"And again, one of those in many professions will now be generative AI, and we need them to be able to call out when it's wrong."

Dr Dayoub said he would prefer a future in which there was no need for detectors because people had changed the way they taught and assessed.

He said another option would be to take a stricter approach, where students did the work themselves and there would be no help.

"In that case you need the detectors so there will be a huge market for these detectors and it will become a race," he said.

"I don't like that future."


'Everybody is cheating': Why this teacher has adopted an open ChatGPT policy


Mary Louise Kelly

Not all educators are shying away from artificial intelligence in the classroom. (Jeff Pachoud/AFP via Getty Images)

Ethan Mollick has a message for the humans and the machines: can't we all just get along?

After all, we are now officially in an A.I. world and we're going to have to share it, reasons the associate professor at the University of Pennsylvania's prestigious Wharton School.

"This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT," Mollick told NPR.

Ever since the chatbot ChatGPT launched in November, educators have raised concerns it could facilitate cheating.

Some school districts have banned access to the bot, and not without reason. The artificial intelligence tool from the company OpenAI can compose poetry. It can write computer code. It can maybe even pass an MBA exam.

One Wharton professor recently fed the chatbot the final exam questions for a core MBA course and found that, despite some surprising math errors, he would have given it a B or a B-minus in the class.


And yet, not all educators are shying away from the bot.

This year, Mollick is not only allowing his students to use ChatGPT; he is requiring them to. And he has formally adopted an A.I. policy into his syllabus for the first time.

He teaches classes in entrepreneurship and innovation, and said the early indications were the move was going great.

"The truth is, I probably couldn't have stopped them even if I didn't require it," Mollick said.

This week he ran a session where students were asked to come up with ideas for their class project. Almost everyone had ChatGPT running and was asking it to generate projects, and then they interrogated the bot's ideas with further prompts.

"And the ideas so far are great, partially as a result of that set of interactions," Mollick said.

Users experimenting with the chatbot are warned before testing the tool that ChatGPT "may occasionally generate incorrect or misleading information." (OpenAI/Screenshot by NPR)

He readily admits he alternates between enthusiasm and anxiety about how artificial intelligence can change assessments in the classroom, but he believes educators need to move with the times.

"We taught people how to do math in a world with calculators," he said. Now the challenge is for educators to teach students how the world has changed again, and how they can adapt to that.

Mollick's new policy states that using A.I. is an "emerging skill"; that it can be wrong and students should check its results against other sources; and that they will be responsible for any errors or omissions provided by the tool.

And, perhaps most importantly, students need to acknowledge when and how they have used it.

"Failure to do so is in violation of academic honesty policies," the policy reads.


Mollick isn't the first to try to put guardrails in place for a post-ChatGPT world.

Earlier this month, 22-year-old Princeton student Edward Tian created an app to detect if something had been written by a machine. Named GPTZero, it was so popular that when he launched it, the app crashed from overuse.

"Humans deserve to know when something is written by a human or written by a machine," Tian told NPR of his motivation.

Mollick agrees, but isn't convinced that educators can ever truly stop cheating.

He cites a survey of Stanford students that found many had already used ChatGPT in their final exams, and he points to estimates that thousands of people in places like Kenya are writing essays on behalf of students abroad.

"I think everybody is cheating ... I mean, it's happening. So what I'm asking students to do is just be honest with me," he said. "Tell me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want, and that's all I'm asking from them. We're in a world where this is happening, but now it's just going to be at an even grander scale."

"I don't think human nature changes as a result of ChatGPT. I think capability did."

The radio interview with Ethan Mollick was produced by Gabe O'Connor and edited by Christopher Intagliata.

Faced with criticism it's a haven for cheaters, ChatGPT adds tool to catch them

Launch of text classifier follows weeks of criticism.


The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT's ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool — like others already available — is not foolproof. The method for detecting AI-written text "is imperfect and it will be wrong sometimes," said Jan Leike, head of OpenAI's alignment team tasked to make its systems safer.

"Because of that, it shouldn't be solely relied upon when making decisions," Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI's website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.

By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.


The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.

"We can't afford to ignore it," Robinson said.

The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a "personal tutor" or to help generate new ideas when working on an assignment, Robinson said.

School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.

"The initial reaction was 'OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT?'" said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that "this is the future" and blocking it is not the solution, he said.

"I think we would be naive if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power," said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company's detection service is in place.


OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.

The longer a passage of text, the better the tool is at detecting if an AI or human wrote something. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison's Invisible Man — and the tool will label it as either "very unlikely, unlikely, unclear if it is, possibly, or likely" AI-generated.

But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it's not easy to interpret how it came up with a result.

"We don't fundamentally know what kind of pattern it pays attention to, or how it works internally," Leike said. "There's really not much we could say at this point about how the classifier actually works."

Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France's most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.

In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.

"Like many other technologies, it may be that one district decides that it's inappropriate for use in their classrooms," said OpenAI policy researcher Lama Ahmad. "We don't really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them."


It's an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.

France's digital economy minister Jean-Noel Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.

"So if you're in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive," he said. "If you are in the economics faculty, then you're fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty."

He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.


3 ways to use ChatGPT to help students learn – and not cheat

Professor of Educational Psychology and Learning Technologies, The Ohio State University

Professor of Educational Psychology and Quantitative Research, Evaluation, and Measurement, The Ohio State University

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

The Ohio State University provides funding as a founding partner of The Conversation US.


Since ChatGPT can engage in conversation and generate essays, computer code, charts and graphs that closely resemble those created by humans, educators worry students may use it to cheat. A growing number of school districts across the country have decided to block access to ChatGPT on computers and networks.

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. For example, sometimes students are just motivated to get a high grade, whereas other times they are motivated to learn all that they can about a topic.

The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not to the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat.

We believe teachers can use ChatGPT to increase their students’ motivation for learning and actually prevent cheating. Here are three strategies for doing that.

1. Treat ChatGPT as a learning partner

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade.

Treating ChatGPT as a learning partner can help teachers shift the focus among their students from competition and performance to collaboration and mastery.

For example, a science teacher can assign students to work with ChatGPT to design a hydroponic vegetable garden. In this scenario, students could engage with ChatGPT to discuss the growing requirements for vegetables, brainstorm design ideas for a hydroponic system and analyze pros and cons of the design.

These activities are designed to promote mastery of content as they focus on the processes of learning rather than just the final grade.

2. Use ChatGPT to boost confidence

Research shows that when students feel confident that they can successfully do the work assigned to them, they are less likely to cheat. And an important way to boost students’ confidence is to provide them with opportunities to experience success.

ChatGPT can facilitate such experiences by offering students individualized support and breaking down complex problems into smaller challenges or tasks.

For example, suppose students are asked to attempt to design a hypothetical vehicle that can use gasoline more efficiently than a traditional car. Students who struggle with the project – and might be inclined to cheat – can use ChatGPT to break down the larger problem into smaller tasks. ChatGPT might suggest they first develop an overall concept for the vehicle before determining the size and weight of the vehicle and deciding what type of fuel will be used. Teachers could also ask students to compare the steps suggested by ChatGPT with steps that are recommended by other sources.

3. Prompt ChatGPT to give supportive feedback

It is well documented that personalized feedback supports students’ positive emotions, including self-confidence.

ChatGPT can be directed to deliver feedback using positive, empathetic and encouraging language. For example, if a student completes a math problem incorrectly, instead of merely telling the student “You are wrong and the correct answer is …,” ChatGPT may initiate a conversation with the student. Here’s a real response generated by ChatGPT: “Your answer is not correct, but it’s completely normal to encounter occasional errors or misconceptions along the way. Don’t be discouraged by this small setback; you’re on the right track! I’m here to support you and answer any questions you may have. You’re doing great!”

This will help students feel supported and understood while receiving feedback for improvement. Teachers can easily show students how to direct ChatGPT to provide them such feedback.

We believe that when teachers use ChatGPT and other AI chatbots thoughtfully – and also encourage students to use these tools responsibly in their schoolwork – students have an incentive to learn more and cheat less.


ChatGPT Cheating Unveiled: Navigating the AI Landscape in Education

Artificial Intelligence (AI) has rapidly integrated into our everyday lives, revolutionizing various fields, including education. This integration has brought along numerous benefits, making processes more efficient and accessible. However, alongside these advancements come significant challenges and ethical dilemmas. A prominent issue that has emerged in the academic sector is the use of AI tools like ChatGPT for cheating, commonly referred to as ChatGPT cheating.

Understanding the risks and consequences of ChatGPT cheating is crucial for students, educators, and AI enthusiasts alike. The question “Is using AI cheating?” often arises in academic discussions. AI tools have the potential to assist students in their learning journey, but when misused, they can compromise academic integrity. This phenomenon of AI cheating is becoming increasingly prevalent as more students discover the ease with which AI can produce essays, solve problems, and generate content.

In this blog post, we will delve into what ChatGPT is and how it works in an academic context. ChatGPT, a sophisticated language model, can generate human-like text based on the input it receives. While it can be a valuable educational tool, its allure for students lies in the possibility of using it to complete assignments and exams effortlessly, thus engaging in AI-assisted cheating.

The impacts of ChatGPT cheating are far-reaching. For students, reliance on AI for academic tasks can hinder the development of critical thinking and problem-solving skills. For educators, detecting AI-generated work poses a significant challenge, complicating the assessment process. Furthermore, widespread AI cheating can undermine the value of academic qualifications and erode trust in educational institutions.

Ultimately, the question “Is using AI cheating?” underscores the need for a balanced approach to AI in education. While AI tools like ChatGPT offer substantial benefits, it is imperative to address the ethical concerns and develop strategies to prevent their misuse. By fostering an environment of academic integrity and promoting responsible AI usage, we can harness the potential of AI without compromising the principles of education.

What is ChatGPT?

ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text based on user prompts. It has been trained on diverse datasets, which allows it to provide coherent and contextually relevant responses. Unlike traditional search engines that retrieve information from indexed web pages, ChatGPT generates text by predicting the most suitable words to follow a given input, making its interactions more conversational and fluid.

A common question that arises is: “Is using AI cheating?” The debate on AI and cheating focuses on whether tools like ChatGPT constitute AI cheating in various contexts. While some argue that relying on ChatGPT for tasks like writing or problem-solving could be seen as ChatGPT cheating, others believe that it enhances productivity and offers new ways to interact with technology.

Key features of ChatGPT include natural language processing (NLP) capabilities, the ability to engage in extended conversations, and the capacity to provide detailed, nuanced responses. These features make ChatGPT a powerful tool, offering a more interactive and personalized user experience compared to traditional resources. The ethical considerations around AI cheating continue to evolve as the technology becomes more integrated into daily life.

How ChatGPT Works in an Academic Context

ChatGPT leverages natural language processing (NLP) to effectively process and comprehend user inputs, which allows it to generate relevant and contextually appropriate responses. Its extensive knowledge base is derived from vast amounts of training data, enabling it to provide information on a wide range of topics. This makes ChatGPT an incredibly useful tool for students who interact with it through prompts and queries, asking questions or seeking assistance with various academic tasks.

In an academic setting, ChatGPT offers a multitude of benefits. It can assist with essay writing, research, problem-solving, and even exam preparation. For instance, a student struggling to draft an essay can use ChatGPT to generate coherent text that serves as either a primary draft or an inspiration for their own writing. Similarly, students facing difficulty with research can use the tool to gather summaries of complex topics or to generate a list of scholarly articles on the subject matter. ChatGPT's ability to produce well-structured and informative responses makes it a valuable asset for quick, efficient academic assistance.

However, the convenience and capabilities of ChatGPT also introduce significant ethical considerations. The term “ChatGPT cheating” has emerged as a growing concern among educators and academic institutions. When used unethically, ChatGPT can facilitate academic dishonesty. For example, a student might misuse this AI tool to generate entire essays or solve complex problems without contributing their own effort, effectively engaging in AI cheating. This unethical use undermines the educational process, which is designed to develop critical thinking and problem-solving skills.

Moreover, the ease with which ChatGPT can produce high-quality text makes it tempting for students to rely on the tool rather than putting in the work to understand the material. This misuse can lead to a superficial understanding of the subject matter and can be detrimental to a student's overall educational experience. The risk of AI cheating is not limited to essay writing alone; students might also use the tool to get answers for take-home exams or other assignments, bypassing the learning process entirely.

Educational institutions are becoming increasingly aware of the potential for ChatGPT cheating and are taking steps to mitigate this risk. Some are incorporating stricter plagiarism detection measures, while others are focusing on educating students about the ethical use of AI tools. It's important for students to understand that while ChatGPT can be a valuable academic aid, it should be used responsibly and ethically.

In conclusion, ChatGPT has the potential to be a remarkable tool for enhancing the academic experience. However, the risks associated with AI cheating cannot be ignored. Both students and educators must work together to ensure that this technology is used to support learning and intellectual growth, rather than to undermine it.

The Allure of ChatGPT for Students

The ease of use and quick response time of ChatGPT make it an attractive option for students. With just a few clicks, they can generate essays, answer homework questions, and gain insights on complex topics. The perceived anonymity of interacting with an AI tool further adds to its allure, as students may believe they can avoid detection. However, this convenience comes at a cost. Relying on ChatGPT for academic tasks can hinder the development of critical thinking, research skills, and independent problem-solving abilities. It's essential for students to recognize the difference between legitimate use and cheating.

ChatGPT has revolutionized the way students approach their studies by providing instant answers and solutions. This AI tool can be especially helpful when students face tight deadlines or need quick clarification on a difficult topic. The efficiency and accessibility of ChatGPT are undeniably appealing. Yet, it's crucial to consider the implications of such reliance. Utilizing ChatGPT to complete assignments can easily cross the line into AI cheating. When students depend too heavily on this technology, they risk missing out on the educational experiences that build essential skills.

AI cheating through ChatGPT is becoming a growing concern in the academic world. Educators are increasingly worried about students using AI tools to complete their work, bypassing the learning process. This form of cheating undermines the educational system's integrity and the value of genuine learning. When students use ChatGPT to generate essays or solve problems without understanding the underlying concepts, they cheat themselves out of valuable educational opportunities.

Moreover, the issue of AI cheating with ChatGPT raises ethical questions. Is it fair for students to submit AI-generated work as their own? While the technology can offer guidance and support, it should not replace the effort required to learn and master new material. Educators must address these ethical concerns by setting clear guidelines on the acceptable use of AI tools like ChatGPT.

In conclusion, ChatGPT offers incredible benefits for students, providing quick answers and easing the workload. However, it's essential to distinguish between legitimate use and AI cheating. Over-reliance on this technology can hinder the development of critical thinking and problem-solving skills, ultimately impacting a student's educational journey. Both students and educators must navigate the fine line between leveraging AI for learning and falling into the trap of AI cheating. By recognizing and addressing these challenges, we can ensure that the use of ChatGPT and similar tools enhances education rather than detracts from it.

Defining Academic Integrity in the Age of AI

Academic integrity traditionally focuses on honesty, originality, and the avoidance of plagiarism. However, the advent of AI blurs these lines, making it challenging to define what constitutes cheating. While using AI as a tool is acceptable, passing off AI-generated content as one's own work violates academic integrity.

Institutions are grappling with these new challenges, revising policies to address the use of AI in academics. Clear guidelines and education on ethical AI use are vital to maintaining academic standards and integrity in this evolving landscape.

Risks of Using ChatGPT for Academic Cheating

The risks of using ChatGPT for cheating are multifaceted. In the short term, students risk detection by instructors or plagiarism detection software, leading to academic penalties such as course failure, suspension, or expulsion. The long-term consequences are even more severe, including skill deficits, knowledge gaps, and ethical compromises that can have lasting professional repercussions.

Students who rely on ChatGPT for academic tasks may struggle to develop essential skills, affecting their personal and professional growth. Additionally, forming habits of dishonesty can impact their future career opportunities and ethical standards in their chosen fields.

Consequences of ChatGPT Cheating

The consequences of ChatGPT cheating extend beyond individual students. For educational institutions, widespread cheating can damage academic reputation, lead to accreditation challenges, and necessitate increased resource allocation for detection and prevention measures. This erosion of trust in academic credentials can have broader societal impacts, affecting workforce preparedness and ethical standards in professional fields.

Individuals who cheat using ChatGPT may experience poor learning outcomes, stunted personal development, and limited future opportunities. Educational institutions face the challenge of maintaining their integrity and credibility, while society at large grapples with the implications of diminished trust in educational credentials.

Detection Methods for ChatGPT-Generated Content

Detecting ChatGPT-generated content is a growing concern for educators. Linguistic analysis tools and AI-powered detection software can help identify AI-generated text by analyzing patterns and inconsistencies. Instructors can also employ strategies such as oral examinations and in-class writing assignments to verify student knowledge and authenticity.

These detection methods are crucial in upholding academic integrity and ensuring that students are genuinely acquiring the skills and knowledge they need for their future endeavors.

Alternatives to Cheating with ChatGPT

Rather than resorting to cheating, students can use AI tools like ChatGPT for legitimate educational purposes. ChatGPT can be a valuable brainstorming tool, helping students refine research questions or generate practice problems. Developing critical thinking and research skills through proper use of AI can enhance learning and academic performance.

Seeking academic support, such as tutoring, writing centers, and office hours, provides students with the help they need without compromising their integrity. Encouraging the ethical use of AI tools can foster a culture of honesty and excellence in education.

In conclusion, while ChatGPT offers incredible potential for enhancing educational experiences, its misuse for cheating undermines academic integrity and personal growth. Understanding the risks and consequences of ChatGPT cheating is essential for students, educators, and AI enthusiasts alike.

Maintaining academic integrity in the age of AI requires clear guidelines, ethical education, and robust detection methods. By using AI tools responsibly, students can enhance their learning experiences and prepare themselves for future success. For those seeking further guidance, exploring ethical AI use and academic support services can provide valuable insights and resources.

Science News Explores

Think twice before using ChatGPT for help with homework

This new AI tool talks a lot like a person — but still makes mistakes

ChatGPT is impressive and can be quite useful. It can help people write text, for instance, and code. However, “it’s not magic,” says Casey Fiesler. In fact, it often seems intelligent and confident while making mistakes — and sometimes parroting biases.

Glenn Harvey

By Kathryn Hulick

February 16, 2023 at 6:30 am

“We need to talk,” Brett Vogelsinger said. A student had just asked for feedback on an essay. One paragraph stood out. Vogelsinger, a 9th-grade English teacher in Doylestown, Pa., realized that the student hadn’t written the piece himself. He had used ChatGPT. It’s a new artificial intelligence (AI) tool. It answers questions. It writes code. And it can generate long essays and stories.

The company OpenAI made ChatGPT available for free at the end of November 2022. Within a week, it had more than a million users. Other tech companies are racing to put out similar tools. Google launched Bard in early February. The AI company Anthropic is testing a new chatbot named Claude. And another AI company, DeepMind, is working on a bot called Sparrow.

ChatGPT marks the beginning of a new wave of AI that will disrupt education. Whether that’s a good or bad thing remains to be seen.

Some people have been using ChatGPT out of curiosity or for entertainment. I asked it to invent a silly excuse for not doing homework in the style of a medieval proclamation. In less than a second, it offered me: “Hark! Thy servant was beset by a horde of mischievous leprechauns, who didst steal mine quill and parchment, rendering me unable to complete mine homework.”

But students can also use it to cheat. When Stanford University’s student-run newspaper polled students at the university, 17 percent said they had used ChatGPT on assignments or exams during the end of 2022. Some admitted to submitting the chatbot’s writing as their own. For now, these students and others are probably getting away with cheating.

And that’s because ChatGPT does an excellent job. “It can outperform a lot of middle-school kids,” Vogelsinger says. He probably wouldn’t have known his student used it — except for one thing. “He copied and pasted the prompt,” says Vogelsinger.

This essay was still a work in progress. So Vogelsinger didn’t see this as cheating. Instead, he saw an opportunity. Now, the student is working with the AI to write that essay. It’s helping the student develop his writing and research skills.

“We’re color-coding,” says Vogelsinger. The parts the student writes are in green. Those parts that ChatGPT writes are in blue. Vogelsinger is helping the student pick and choose only a few sentences from the AI to keep. He’s allowing other students to collaborate with the tool as well. Most aren’t using it regularly, but a few kids really like it. Vogelsinger thinks it has helped them get started and to focus their ideas.

This story had a happy ending.

But at many schools and universities, educators are struggling with how to handle ChatGPT and other tools like it. In early January, New York City public schools banned ChatGPT on their devices and networks. They were worried about cheating. They also were concerned that the tool’s answers might not be accurate or safe. Many other school systems in the United States and elsewhere have followed suit.

But some experts suspect that bots like ChatGPT could also be a great help to learners and workers everywhere. Like calculators for math or Google for facts, an AI chatbot makes something that once took time and effort much simpler and faster. With this tool, anyone can generate well-formed sentences and paragraphs — even entire pieces of writing.

How could a tool like this change the way we teach and learn?

The good, the bad and the weird

ChatGPT has wowed its users. “It’s so much more realistic than I thought a robot could be,” says Avani Rao. This high school sophomore lives in California. She hasn’t used the bot to do homework. But for fun, she’s prompted it to say creative or silly things. She asked it to explain addition, for instance, in the voice of an evil villain. Its answer is highly entertaining.

Tools like ChatGPT could help create a more equitable world for people who are trying to work in a second language or who struggle with composing sentences. Students could use ChatGPT like a coach to help improve their writing and grammar. Or it could explain difficult subjects. “It really will tutor you,” says Vogelsinger, who had one student come to him excited that ChatGPT had clearly outlined a concept from science class.

Teachers could use ChatGPT to help create lesson plans or activities — ones personalized to the needs or goals of specific students.

Several podcasts have had ChatGPT as a “guest” on the show. In 2023, two people plan to use an AI-powered chatbot as a lawyer. It will tell them what to say during their appearances in traffic court. The company that developed the bot is paying them to test the new tech. Their vision is a world in which legal help might be free.

Xiaoming Zhai tested ChatGPT to see if it could write an academic paper. Zhai is an expert in science education at the University of Georgia in Athens. He was impressed with how easy it was to summarize knowledge and generate good writing using the tool. “It’s really amazing,” he says.

All of this sounds great. Still, some really big problems exist.

Most worryingly, ChatGPT and tools like it sometimes get things very wrong. In an ad for Bard, the chatbot claimed that the James Webb Space Telescope took the very first picture of an exoplanet. That’s false. In a conversation posted on Twitter, ChatGPT said the fastest marine mammal was the peregrine falcon. A falcon, of course, is a bird and doesn’t live in the ocean.

ChatGPT can be “confidently wrong,” says Casey Fiesler. Its text, she notes, can contain “mistakes and bad information.” She is an expert in the ethics of technology at the University of Colorado Boulder. She has made multiple TikTok videos about the pitfalls of ChatGPT.

Also, for now, all of the bot’s training data came from before a date in 2021. So its knowledge is out of date.

Finally, ChatGPT does not provide sources for its information. If asked for sources, it will make them up. It’s something Fiesler revealed in another video. Zhai discovered the exact same thing. When he asked ChatGPT for citations, it gave him sources that looked correct. In fact, they were bogus.

Zhai sees the tool as an assistant. He double-checked its information and decided how to structure the paper himself. If you use ChatGPT, be honest about it and verify its information, the experts all say.

Under the hood

ChatGPT’s mistakes make more sense if you know how it works. “It doesn’t reason. It doesn’t have ideas. It doesn’t have thoughts,” explains Emily M. Bender. She is a computational linguist who works at the University of Washington in Seattle. ChatGPT may sound a lot like a person, but it’s not one. It is an AI model developed using several types of machine learning .

The primary type is a large language model. This type of model learns to predict what words will come next in a sentence or phrase. It does this by churning through vast amounts of text. It places words and phrases into a 3-D map that represents their relationships to each other. Words that tend to appear together, like peanut butter and jelly, end up closer together in this map.
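
The “predict what word comes next” idea can be sketched as a toy bigram model, which simply counts which word follows which in a training text. This is an illustration only; ChatGPT itself uses a neural network with billions of learned parameters, not raw counts, but the prediction task is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "peanut butter and jelly peanut butter and toast peanut butter and jelly"
model = train_bigram(corpus)
print(predict_next(model, "butter"))  # "and"
print(predict_next(model, "and"))     # "jelly" (seen twice, vs. once for "toast")
```

Just as “peanut butter” and “jelly” sit close together in the model’s map, the counts here make “jelly” the most likely word after “and.”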

Before ChatGPT, OpenAI had made GPT3. This very large language model came out in 2020. It had trained on text containing an estimated 300 billion words. That text came from the internet and encyclopedias. It also included dialogue transcripts, essays, exams and much more, says Sasha Luccioni. She is a researcher at the company HuggingFace in Montreal, Canada. This company builds AI tools.

OpenAI improved upon GPT3 to create GPT3.5. This time, OpenAI added a new type of machine learning. It’s known as “reinforcement learning with human feedback.” That means people checked the AI’s responses. GPT3.5 learned to give more of those types of responses in the future. It also learned not to generate hurtful, biased or inappropriate responses. GPT3.5 essentially became a people-pleaser.

[Image: the disclaimer shown upon opening ChatGPT’s interface]

During ChatGPT’s development, OpenAI added even more safety rules to the model. As a result, the chatbot will refuse to talk about certain sensitive issues or information. But this also raises another issue: Whose values are being programmed into the bot, including what it is — or is not — allowed to talk about?

OpenAI is not offering exact details about how it developed and trained ChatGPT. The company has not released its code or training data. This disappoints Luccioni. “I want to know how it works in order to help make it better,” she says.

When asked to comment on this story, OpenAI provided a statement from an unnamed spokesperson. “We made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems,” the statement said. “We are constantly incorporating feedback and lessons learned.” Indeed, some early experimenters got the bot to say biased things about race and gender. OpenAI quickly patched the tool. It no longer responds the same way.

ChatGPT is not a finished product. It’s available for free right now because OpenAI needs data from the real world. The people who are using it right now are their guinea pigs. If you use it, notes Bender, “You are working for OpenAI for free.”

Humans vs robots

How good is ChatGPT at what it does? Catherine Gao is part of one team of researchers that is putting the tool to the test.

At the top of a research article published in a journal is an abstract. It summarizes the author’s findings. Gao’s group gathered 50 real abstracts from research papers in medical journals. Then they asked ChatGPT to generate fake abstracts based on the paper titles. The team asked people who review abstracts as part of their job to identify which were which.

The reviewers mistook roughly one in every three (32 percent) of the AI-generated abstracts as human-generated. “I was surprised by how realistic and convincing the generated abstracts were,” says Gao. She is a doctor and medical researcher at Northwestern University’s Feinberg School of Medicine in Chicago, Ill.

In another study, Will Yeadon and his colleagues tested whether AI tools could pass a college exam. Yeadon is a physics teacher at Durham University in England. He picked an exam from a course that he teaches. The test asks students to write five short essays about physics and its history. Students who take the test have an average score of 71 percent, which he says is equivalent to an A in the United States.

Yeadon used a close cousin of ChatGPT, called davinci-003. It generated 10 sets of exam answers. Afterward, he and four other teachers graded them using their typical grading standards for students. The AI also scored an average of 71 percent. Unlike the human students, however, it had no very low or very high marks. It consistently wrote well, but not excellently. For students who regularly get bad grades in writing, Yeadon says, this AI “will write a better essay than you.”

These graders knew they were looking at AI work. In a follow-up study, Yeadon plans to use work from the AI and students and not tell the graders whose work they are looking at.

Cheat-checking with AI

People may not always be able to tell if ChatGPT wrote something or not. Thankfully, other AI tools can help. These tools use machine learning to scan many examples of AI-generated text. After training this way, they can look at new text and tell you whether it was most likely composed by AI or a human.

Most free AI-detection tools were trained on older language models, so they don’t work as well for ChatGPT. Soon after ChatGPT came out, though, one college student spent his holiday break building a free tool to detect its work. It’s called GPTZero.
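
One signal such detectors can draw on is sometimes called “burstiness”: human writing tends to mix short and long sentences, while AI text is often more uniform. The sketch below computes that single feature as a toy (a simplification for illustration, not any real detector’s actual algorithm; real tools combine many statistical signals).

```python
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words.
    Lower values suggest uniform, machine-like sentences."""
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("Wait. That can't be right. I checked the numbers twice, "
         "and the whole pattern falls apart when you look closely.")
uniform = ("The cat sat on the mat. The dog ran in the park. "
           "The bird flew over the house.")
print(burstiness(human) > burstiness(uniform))  # True
```

A real classifier would feed features like this, along with perplexity scores from a language model, into a trained model rather than comparing them by eye.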

The company Originality.ai sells access to another up-to-date tool. Founder Jon Gillham says that in a test of 10,000 samples of text composed by GPT3, the tool tagged 94 percent of them correctly. When ChatGPT came out, his team tested a much smaller set of 20 samples that had been created by GPT3, GPT3.5 and ChatGPT. Here, Gillham says, “it tagged all of them as AI-generated. And it was 99 percent confident, on average.”

In addition, OpenAI says they are working on adding “digital watermarks” to AI-generated text. They haven’t said exactly what they mean by this. But Gillham explains one possibility. The AI ranks many different possible words when it is generating text. Say its developers told it to always choose the word ranked in third place rather than first place at specific places in its output. These words would act “like a fingerprint,” says Gillham.
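
Gillham’s rank-based fingerprint can be sketched as a toy (the word lists here are hypothetical and the rule is deliberately simple; real watermarking proposals are statistical and far subtler). The generator takes the third-ranked candidate at secret “marked” positions, and a checker who knows both the rule and the rankings can test a text for the fingerprint.

```python
def generate(ranked_candidates, marked_positions):
    """Pick the top-ranked word normally, but the third-ranked
    word at the secret 'marked' positions."""
    out = []
    for i, candidates in enumerate(ranked_candidates):
        rank = 2 if i in marked_positions else 0  # index 2 = third-ranked
        out.append(candidates[rank])
    return out

def looks_watermarked(text_words, ranked_candidates, marked_positions):
    """True if every marked position holds the third-ranked word."""
    return all(text_words[i] == ranked_candidates[i][2]
               for i in marked_positions)

# Each inner list stands in for the model's ranked candidates at one position.
ranked = [
    ["the", "a", "one"],
    ["cat", "dog", "fox"],
    ["sat", "ran", "leapt"],
    ["down", "there", "quietly"],
]
marked = {1, 3}
words = generate(ranked, marked)
print(words)                                     # ['the', 'fox', 'sat', 'quietly']
print(looks_watermarked(words, ranked, marked))  # True
```

Ordinary human text would almost never pick the third-ranked word at every marked position, which is what makes the pattern act like a fingerprint.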

[Image: a conversation between ChatGPT and Avani Rao]

The future of writing

Tools like ChatGPT are only going to improve with time. As they get better, people will have to adjust to a world in which computers can write for us. We’ve made these sorts of adjustments before. As high-school student Rao points out, Google was once seen as a threat to education because it made it possible to instantly look up any fact. We adapted by coming up with teaching and testing materials that don’t require students to memorize things.

Now that AI can generate essays, stories and code, teachers may once again have to rethink how they teach and test. That might mean preventing students from using AI. They could do this by making students work without access to technology. Or they might invite AI into the writing process, as Vogelsinger is doing. Concludes Rao, “We might have to shift our point of view about what’s cheating and what isn’t.”

Students will still have to learn to write without AI’s help. Kids still learn to do basic math even though they have calculators. Learning how math works helps us learn to think about math problems. In the same way, learning to write helps us learn to think about and express ideas.

Rao thinks that AI will not replace human-generated stories, articles and other texts. Why? She says: “The reason those things exist is not only because we want to read it but because we want to write it.” People will always want to make their voices heard. ChatGPT is a tool that could enhance and support our voices — as long as we use it with care.

Correction: Gillham’s comment on the 20 samples that his team tested has been corrected to show how confident his team’s AI-detection tool was in identifying text that had been AI-generated (not in how accurately it detected AI-generated text).


Is Using ChatGPT Cheating?

Published on June 29, 2023 by Eoghan Ryan. Revised on September 14, 2023.

Using ChatGPT and other AI tools to cheat is academically dishonest and can have severe consequences.

However, using these tools is not always academically dishonest. It's important to understand how to use these tools correctly and ethically to complement your research and writing skills. You can learn more about how to use AI tools responsibly on our AI writing resources page.

Instantly correct all language mistakes in your text

Upload your document to correct all your mistakes in minutes

upload-your-document-ai-proofreader

Table of contents

  • How can ChatGPT be used to cheat?
  • What are the risks of using ChatGPT to cheat?
  • How to use ChatGPT without cheating
  • Frequently asked questions

ChatGPT and other AI tools can be used to cheat in various ways. This can be intentional or unintentional and can vary in severity. Some examples of the ways in which ChatGPT can be used to cheat include:

  • AI-assisted plagiarism: Passing off AI-generated text as your own work (e.g., essays, homework assignments, take-home exams)
  • Plagiarism : Having the tool rephrase content from another source and passing it off as your own work
  • Self-plagiarism : Having the tool rewrite a paper you previously submitted with the intention of resubmitting it
  • Data fabrication: Using ChatGPT to generate false data and presenting them as genuine findings to support your research

Using ChatGPT in these ways is academically dishonest and very likely to be prohibited by your university. Even if your guidelines don’t explicitly mention ChatGPT, actions like data fabrication are academically dishonest regardless of what tools are used.


ChatGPT does not solve every academic writing problem, and using it to cheat can have various negative impacts on you and others. ChatGPT cheating:

  • Leads to gaps in your knowledge
  • Is unfair to other students who did not cheat
  • Potentially damages your reputation
  • May result in the publication of inaccurate or false information
  • May lead to dangerous situations if it allows you to avoid learning the fundamentals in some contexts (e.g., medicine)

When used correctly, ChatGPT and other AI tools can be helpful resources that complement your academic writing and research skills. Below are some tips to help you use ChatGPT ethically.

Follow university guidelines

Guidelines on how ChatGPT may be used vary across universities. It’s crucial to follow your institution’s policies regarding AI writing tools and to stay up to date with any changes. Always ask your instructor if you’re unsure what is allowed in your case.

Use the tool as a source of inspiration

If allowed by your institution, use ChatGPT outputs as a source of guidance or inspiration, rather than as a substitute for coursework. For example, you can use ChatGPT to write a research paper outline or to provide feedback on your text.

You can also use ChatGPT to paraphrase or summarize text to express your ideas more clearly and to condense complex information. Alternatively, you can use Scribbr's free paraphrasing tool or Scribbr's free text summarizer, which are designed specifically for these purposes.

Practice information literacy skills

Information literacy skills can help you use AI tools more effectively. For example, they can help you to understand what constitutes plagiarism, critically evaluate AI-generated outputs, and make informed judgments more generally.

You should also familiarize yourself with the user guidelines for any AI tools you use and get to know their intended uses and limitations.

Be transparent about how you use the tools

If you use ChatGPT as a primary source or to help with your research or writing process, you may be required to cite it or acknowledge its contribution in some way (e.g., by providing a link to the ChatGPT conversation). Check your institution’s guidelines or ask your professor for guidance.

Using ChatGPT in the following ways is generally considered academically dishonest:

  • Passing off AI-generated content as your own work
  • Having the tool rephrase plagiarized content and passing it off as your own work
  • Using ChatGPT to generate false data and presenting them as genuine findings to support your research

Using ChatGPT to cheat can have serious academic consequences. It's important that students learn how to use AI tools effectively and ethically.

Using ChatGPT to cheat is a serious offense and may have severe consequences.

However, when used correctly, ChatGPT can be a helpful resource that complements your academic writing and research skills. Some tips to use ChatGPT ethically include:

  • Following your institution’s guidelines
  • Understanding what constitutes plagiarism
  • Being transparent about how you use the tool

No, it's not a good idea to do so in general—first, because it's normally considered plagiarism or academic dishonesty to represent someone else's work as your own (even if that "someone" is an AI language model). Even if you cite ChatGPT, you'll still be penalized unless this is specifically allowed by your university. Institutions may use AI detectors to enforce these rules.

Second, ChatGPT can recombine existing texts, but it cannot really generate new knowledge. And it lacks specialist knowledge of academic topics. Therefore, it is not possible to obtain original research results, and the text produced may contain factual errors.

However, you can usually still use ChatGPT for assignments in other ways, as a source of inspiration and feedback.


89 Percent of College Students Admit to Using ChatGPT for Homework, Study Claims

TAIcher's Pet

Educators are battling a new reality: easily accessible AI that allows students to take immense shortcuts in their education — and as it turns out, many appear to already be cheating with abandon.

Online course provider Study.com asked 1,000 students over the age of 18 about the use of ChatGPT, OpenAI's blockbuster chatbot, in the classroom.

The responses were surprising. A full 89 percent said they'd used it on homework. Some 48 percent confessed they'd already made use of it to complete an at-home test or quiz. Over 50 percent said they used ChatGPT to write an essay, while 22 percent admitted to having asked ChatGPT for a paper outline.

Honestly, those numbers sound so staggeringly high that we wonder about Study.com's methodology. But if there's a throughline here, it's that AI isn't just getting pretty good — it's also already weaving itself into the fabric of society, and the results could be far-reaching.

Muscle AItrophy

At the same time, according to the study, almost three-quarters of students said they wanted ChatGPT to be banned, indicating students are equally worried about cheating becoming the norm.

Educators are also understandably worried about AI having a major impact on their students' education, and are resorting to AI-detecting apps that attempt to suss out whether a student used ChatGPT.

But as we've found out for ourselves, the current crop of tools out there, like GPTZero, is still actively being developed and far from perfect.

Future Shock

Some are worried AI chatbots could have a disastrous effect on education.

"Just because there is a machine that will help me lift up a dumbbell doesn’t mean my muscles will develop," Western Washington University history professor Johann Neem told The Wall Street Journal . "In the same way just because there is a machine that can write an essay doesn’t mean my mind will develop."

But others argue teachers should leverage powerful technologies like ChatGPT to prepare students for a new reality.

"I hope to inspire and educate you enough that you will want to learn how to leverage these tools, not just to learn to cheat better," Weber State University professor Alex Lawrence told the WSJ, while the University of Pennsylvania's Ethan Mollick said that he expects his literature students to leverage the tech to "write more" and "better."

"This is a force multiplier for writing," Mollick added. "I expect them to use it."

READ MORE: Professors Turn to ChatGPT to Teach Students a Lesson [ The Wall Street Journal ]

More on ChatGPT: BuzzFeed Announces Plans to Use OpenAI to Churn Out Content


ChatGPT's maker just created a new tool to catch students trying to cheat using ChatGPT

The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone, in New York, Tuesday, Jan. 31, 2023.

The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The new AI Text Classifier launched Tuesday by OpenAI follows a  weeks-long discussion at schools  and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its  new tool  – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.

“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.

By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.

The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.

“We can’t afford to ignore it,” Robinson said.

The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.

School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.

“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.

“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.

OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to  detect automated disinformation campaigns  and other misuse of AI to mimic humans.

The longer a passage of text, the better the tool is at detecting if an AI or human wrote something. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” — and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
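Under the hood, a verdict like this is typically just a banding of an underlying classifier score. Here is a minimal sketch of how the five labels might be assigned; the cutoff numbers are entirely hypothetical, since OpenAI has not published its thresholds:

```python
# Hypothetical score bands -- OpenAI has not disclosed the real cutoffs
# behind its five verdicts, so these numbers are illustrative only.
BANDS = [
    (0.10, "very unlikely"),
    (0.45, "unlikely"),
    (0.90, "unclear if it is"),
    (0.98, "possibly"),
]

def verdict(p_ai):
    """Map a classifier score in [0, 1] (probability the text is
    AI-generated) to one of the tool's five labels."""
    for cutoff, name in BANDS:
        if p_ai <= cutoff:
            return name
    return "likely"
```

For example, a score of 0.05 would come back "very unlikely" and 0.99 would come back "likely" under these made-up bands. Longer passages help because the classifier has more evidence, pushing its score away from the ambiguous middle bands.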

But much like ChatGPT itself,  which was trained  on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.

“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”

Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.

In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.

“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”

It’s an unusually public role for the research-oriented San Francisco startup, now  backed by billions of dollars in investment  from its partner Microsoft and facing growing interest from the public and governments.

France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.

“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”

He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.


British Academics Despair as ChatGPT-Written Essays Swamp Grading Season

‘It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by the rise of bland essays.

By Jack Grove for Times Higher Education


The increased prevalence of students using ChatGPT to write essays should prompt a rethink about whether current policies encouraging “ethical” use of artificial intelligence (AI) are working, scholars have argued.


With marking season in full flow, lecturers have taken to social media in large numbers to complain about AI-generated content found in submitted work.

Telltale signs of ChatGPT use, according to academics, include little-used words such as “delve” and “multifaceted,” summarizing key themes using bullet points and a jarring conversational style using terms such as, “Let’s explore this theme.”

In a more obvious giveaway, one professor said an advertisement for an AI essay company was buried in a paper's introduction; another academic noted how a student had forgotten to remove a chatbot statement that the content was AI-generated.

"I had no idea how many would resort to it," admitted one U.K. law professor.

Des Fitzgerald, professor of medical humanities and social sciences at  University College Cork , told  Times Higher Education  that student use of AI had “gone totally mainstream” this year.

“Across a batch of essays, you do start to notice the tics of ChatGPT essays, which is partly about repetition of certain words or phrases, but is also just a kind of aura of machinic blandness that’s hard to describe to someone who hasn’t encountered it—an essay with no edges, that does nothing technically wrong or bad, but not much right or good, either,” said Professor Fitzgerald.

Since  ChatGPT’s emergence in late 2022 , some universities have adopted policies to allow the use of AI as long as it is acknowledged, while others have begun using AI content detectors, although  opinion is divided on their effectiveness .

According to the  latest Student Academic Experience Survey , for which Advance HE and the Higher Education Policy Institute polled around 10,000 U.K. undergraduates, 61 percent use AI at least a little each month, “in a way allowed by their institution,” while 31 percent do so every week.

Professor Fitzgerald said that although some colleagues “think we just need to live with this, even that we have a duty to teach students to use it well,” he was “totally against” the use of AI tools for essays.

“ChatGPT is completely antithetical to everything I think I’m doing as a teacher—working with students to engage with texts, thinking through ideas, learning to clarify and express complex thoughts, taking some risks with those thoughts, locating some kind of distinctive inner voice. ChatGPT is total poison for all of this, and we need to simply ban it,” he said.

Steve Fuller, professor of sociology at the  University of Warwick , agreed that AI use had “become more noticeable” this year despite his students signing contracts saying they would not use it to write essays.


He said he was not opposed to students using it “as long as what they produce sounds smart and on point, and the marker can’t recognize it as simply having been lifted from another source wholesale.”

Those who leaned heavily on the technology should expect a relatively low mark, even though they might pass, said Professor Fuller.

“Students routinely commit errors of fact, reasoning and grammar [without ChatGPT], yet if their text touches enough bases with the assignment, they’re likely to get somewhere in the low- to mid-60s. ChatGPT does a credible job at simulating such mediocrity, and that’s good enough for many of its student users,” he said.

Having to mark such mediocre essays partly generated by AI is, however, a growing complaint among academics. Posting on X,  Lancaster University  economist  Renaud Foucart  said marking AI-generated essays “takes much more time to assess [because] I need to concentrate much more to cut through the amount of seemingly logical statements that are actually full of emptiness.”

“My biggest issue [with AI] is less the moral issue about cheating but more what ChatGPT offers students,” Professor Fitzgerald added. “All it is capable of is [writing] bad essays made up of non-ideas and empty sentences. It’s not a machine for cheating; it’s a machine for producing crap.”




Stanford Graduate School of Education


What do AI chatbots really mean for students and cheating?


The launch of ChatGPT and other artificial intelligence (AI) chatbots has triggered an alarm for many educators, who worry about students using the technology to cheat by passing its writing off as their own. But two Stanford researchers say that concern is misdirected, based on their ongoing research into cheating among U.S. high school students before and after the release of ChatGPT.  

“There’s been a ton of media coverage about AI making it easier and more likely for students to cheat,” said Denise Pope , a senior lecturer at Stanford Graduate School of Education (GSE). “But we haven’t seen that bear out in our data so far. And we know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology.”

Pope is a co-founder of Challenge Success , a school reform nonprofit affiliated with the GSE, which conducts research into the student experience, including students’ well-being and sense of belonging, academic integrity, and their engagement with learning. She is the author of Doing School: How We Are Creating a Generation of Stressed-Out, Materialistic, and Miseducated Students , and coauthor of Overloaded and Underprepared: Strategies for Stronger Schools and Healthy, Successful Kids.  

Victor Lee is an associate professor at the GSE whose focus includes researching and designing learning experiences for K-12 data science education and AI literacy. He is the faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning and director of CRAFT (Classroom-Ready Resources about AI for Teaching), a program that provides free resources to help teach AI literacy to high school students. 

Here, Lee and Pope discuss the state of cheating in U.S. schools, what research shows about why students cheat, and their recommendations for educators working to address the problem.

Denise Pope

What do we know about how much students cheat?

Pope: We know that cheating rates have been high for a long time. At Challenge Success we’ve been running surveys and focus groups at schools for over 15 years, asking students about different aspects of their lives — the amount of sleep they get, homework pressure, extracurricular activities, family expectations, things like that — and also several questions about different forms of cheating. 

For years, long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one “cheating” behavior during the previous month. That percentage has stayed about the same or even decreased slightly in our 2023 surveys, when we added questions specific to new AI technologies, like ChatGPT, and how students are using it for school assignments.

Victor Lee

Isn’t it possible that they’re lying about cheating? 

Pope: Because these surveys are anonymous, students are surprisingly honest — especially when they know we’re doing these surveys to help improve their school experience. We often follow up our surveys with focus groups where the students tell us that those numbers seem accurate. If anything, they’re underreporting the frequency of these behaviors.

Lee: The surveys are also carefully written so they don’t ask, point-blank, “Do you cheat?” They ask about specific actions that are classified as cheating, like whether they have copied material word for word for an assignment in the past month or knowingly looked at someone else’s answer during a test. With AI, most of the fear is that the chatbot will write the paper for the student. But there isn’t evidence of an increase in that.

So AI isn’t changing how often students cheat — just the tools that they’re using? 

Lee: The most prudent thing to say right now is that the data suggest, perhaps to the surprise of many people, that AI is not increasing the frequency of cheating. This may change as students become increasingly familiar with the technology, and we’ll continue to study it and see if and how this changes. 

But I think it’s important to point out that, in Challenge Success’ most recent survey, students were also asked if and how they felt an AI chatbot like ChatGPT should be allowed for school-related tasks. Many said they thought it should be acceptable for “starter” purposes, like explaining a new concept or generating ideas for a paper. But the vast majority said that using a chatbot to write an entire paper should never be allowed. So this idea that students who’ve never cheated before are going to suddenly run amok and have AI write all of their papers appears unfounded.

But clearly a lot of students are cheating in the first place. Isn’t that a problem? 

Pope: There are so many reasons why students cheat. They might be struggling with the material and unable to get the help they need. Maybe they have too much homework and not enough time to do it. Or maybe assignments feel like pointless busywork. Many students tell us they’re overwhelmed by the pressure to achieve — they know cheating is wrong, but they don’t want to let their family down by bringing home a low grade. 

We know from our research that cheating is generally a symptom of a deeper, systemic problem. When students feel respected and valued, they’re more likely to engage in learning and act with integrity. They’re less likely to cheat when they feel a sense of belonging and connection at school, and when they find purpose and meaning in their classes. Strategies to help students feel more engaged and valued are likely to be more effective than taking a hard line on AI, especially since we know AI is here to stay and can actually be a great tool to promote deeper engagement with learning.

What would you suggest to school leaders who are concerned about students using AI chatbots? 

Pope: Even before ChatGPT, we could never be sure whether kids were getting help from a parent or tutor or another source on their assignments, and this was not considered cheating. Kids in our focus groups are wondering why they can't use ChatGPT as another resource to help them write their papers — not to write the whole thing word for word, but to get the kind of help a parent or tutor would offer. We need to help students and educators find ways to discuss the ethics of using this technology and when it is and isn't useful for student learning.

Lee: There’s a lot of fear about students using this technology. Schools have considered putting significant money into AI-detection software, which studies show can be highly unreliable. Some districts have tried blocking AI chatbots from school Wi-Fi and devices, then repealed those bans because they were ineffective.

AI is not going away. Along with addressing the deeper reasons why students cheat, we need to teach students how to understand and think critically about this technology. For starters, at Stanford we’ve begun developing free resources to help teachers bring these topics into the classroom across different subject areas. We know that teachers don’t have time to introduce a whole new class, so we have been working with them to make sure these are activities and lessons that can fit with what they’re already covering in the time they have available.

I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.


Professors want to 'ChatGPT-proof' assignments, and are returning to paper exams and requesting editing history to curb AI cheating

  • College professors are looking to "ChatGPT-proof" assignments to curb cheating.
  • Some professors suggest returning to paper exams and asking students to show editing histories. 
  • Changes to assignments come as teachers debate the usage of generative AI in the classroom. 


Since OpenAI's ChatGPT came out last November, a number of teachers have caught their students using the chatbot to cheat and plagiarize on their assignments.

Now, professors at colleges across the US and beyond are trying out ways to "ChatGPT-proof" their assignments, amid concerns that students may be shortchanging their own learning by using AI to cut corners, and as tools that detect AI-generated text have proven prone to errors.

Bonnie MacKellar, a computer science professor at St. John's University in New York, said that she is making students in her intro courses take paper exams instead of digital ones and having them handwrite their code. Paper exams will make up a bigger portion of her students' grades this fall, she said, compared to previous semesters. In turn, students will be disincentivized to outsource their logical thinking to AI, which she said could stunt their learning and leave them unprepared for more advanced computer science classes down the line.

"I hear colleagues in humanities courses saying the same thing: It's back to the blue books," MacKellar said.

Other professors seek to curb AI cheating by reframing assignment questions so students are required to "show their work," William Hart-Davidson, an associate dean at Michigan State University who leads AI workshops for faculty members, told Insider over email.

Assignment questions, Hart-Davidson said, "should include a request for students to be explicit and reflective about the moves they are making."

"We don't just want them to reproduce a fact or a rote response, but to learn to account for their reasoning in a deliberate way," he said.


For instance, ChatGPT can easily answer a straightforward question like "Tell me in three sentences what is the Krebs cycle in chemistry?" he said.

To avoid this, Hart-Davidson told Insider that teachers should reframe the question to something like "revise an existing passage" on the Krebs cycle, which would require students to point out errors, evaluate the writing for clarity and accuracy, and explain how it could be improved.

That way, students are forced to think through their answers, rather than regurgitate what a chatbot tells them, which Hart-Davidson said could help improve their writing.

Some professors suggest students show their work by including their editing history and drafts along with their completed assignments. A document that logs all the typos corrected and the sentences rephrased in an essay can prove that a human wrote it, Dave Sayers, a professor at the University of Jyväskylä in Finland, wrote for Times Higher Education.

A guide from Butler University in Indianapolis on how to chatbot-proof assignments suggests that teachers could eliminate the essay, issue impromptu oral exams, and foster classroom discussions around how to best use the chatbot's responses.

The changes to school assignments come as teachers grapple with how to best integrate AI tools like ChatGPT into their classrooms. While some professors require their students to use ChatGPT to generate project ideas , some schools have outright banned the usage of AI to avoid cases of academic dishonesty.

Despite the controversy, some teachers are using AI chatbots themselves to streamline their workflows. Shannon Ahern, a high school math and science teacher in Dublin, Ireland, previously told Insider she used ChatGPT Plus to write lesson plans , generate exercise worksheets, and come up with quiz questions, which she claimed saved her hours of time.

As far as cheating goes, some teachers don't see that changing — with or without AI.

"I worried that my students would use it to cheat and plagiarize," Ahern said. "But then I remembered that students have always been cheating — whether that's copying a classmate's homework or getting a sibling to write an essay — and I don't think ChatGPT will change that."


Educators Battle Plagiarism As 89% Of Students Admit To Using OpenAI's ChatGPT For Homework



A large majority of students are already using ChatGPT for homework assignments, creating challenges around plagiarism, cheating, and learning. According to Wharton MBA Professor Christian Terwiesch, ChatGPT would receive “a B or a B-” on an Ivy League MBA-level exam in operations management. Another professor at a Utah-based university asked ChatGPT to tweet in his voice, leading Professor Alex Lawrence to declare that “this is the greatest cheating tool ever invented,” according to the Wall Street Journal. The plagiarism potential is potent - so, is banning the tool a realistic solution?

New research from Study.com provides eye-opening insight into the educational impact of ChatGPT, an online tool with a surprising mastery of learning and human language. INSIDER reports that researchers recently put ChatGPT through the United States Medical Licensing Exam (the three-part exam used to qualify medical school students for residency - basically, a test to see if you can be a doctor). In a December report, ChatGPT “performed at or near the passing threshold for all three exams without any training or reinforcement.” Lawrence, the Weber State professor in Utah who ran the tweet test, wrote a follow-up message to his students regarding the new platform from OpenAI: “I hope to inspire and educate you enough that you will want to learn how to leverage these tools, not just to learn to cheat better.” No word on how the students have responded so far.

Machines, tools and software have been making certain tasks easier for us for thousands of years. Are we about to outsource learning and education to artificial intelligence ? And what are the implications, beyond the classroom, if we do?

Considering that 90% of students are aware of ChatGPT, and 89% of survey respondents report that they have used the platform to help with a homework assignment, the application of OpenAI’s platform is already here. More from the survey:

  • 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper.
  • 72% of college students believe that ChatGPT should be banned from their college's network. (New York, Seattle and Los Angeles have all blocked the service from their public school networks).
  • 82% of college professors are aware of ChatGPT
  • 72% of college professors who are aware of ChatGPT are concerned about its impact on cheating
  • Over a third (34%) of all educators believe that ChatGPT should be banned in schools and universities, while 66% support students having access to it.
  • Meanwhile, 5% of educators say that they have used ChatGPT to teach a class, and 7% have used the platform to create writing prompts.


A teacher quoted anonymously in the Study.com survey shares, “I love that students would have another resource to help answer questions. Do I worry some kids would abuse it? Yes. But they use Google and get answers without an explanation. It's my understanding that ChatGPT explains answers. That [explanation] would be more beneficial.” Or would it become a crutch?

Modern society has many options for transportation: cars, planes, trains, and even electric scooters all help us to get around. But these machines haven’t replaced the simple fact that walking and running (on your own) is really, really good for you. Electric bikes are fun, but pushing pedals on our own is where we find our fitness. Without movement comes malady. A sedentary life that relies solely on external mechanisms for transport is a recipe for atrophy, poor health, and even a shortened lifespan. Will ChatGPT create educational atrophy, the equivalent of an electric bicycle for our brains?

Of course, when calculators came into the classroom, many declared the decline of math skills would soon follow. Research conducted as recently as 2012 has proven this to be false. Calculators had no positive or negative effects on basic math skills.

But ChatGPT has already gone beyond the basics, passing medical exams and MBA-level tests. A brave new world is already here, with implications for cheating and plagiarism, to be sure. But an even deeper implication points to the very nature of learning itself, when ChatGPT has become a super-charged repository for what is perhaps the most human of all inventions: the synthesis of our language. (That same synthesis sits atop Bloom's Taxonomy, the revered pyramid of thinking that outlines the path to higher learning.) Perhaps educators, students and even business leaders will discover something old is new again, from ChatGPT. That discovery? Seems Socrates was right: the key to strong education begins with asking the right questions. Especially if you are talking to a ‘bot.

Chris Westfall


WT? A TikTok tip – Use AI bot to cheat on homework


MANILA, Philippines – The Filipino creator starts with a quick tilt-down of his front camera from the ceiling to his face, a classic opening for a TikTok video.

“Check out this website that can help with writing an essay for school,” he says, almost shouting to his phone.

“This is ChatGPT. It’s a chatbot AI ( artificial intelligence ) that can do anything.”

The camera flips to the front showing his computer, on standby for a demonstration.

He types his prompt: “Write me a 500-word essay proving that the earth is not flat.” 

He hits “enter.” 

ChatGPT delivers, tracing the debunked idea’s history then enumerating the strongest pieces of evidence against it, like the subtly visible curve as we look out on the horizon. It concludes that the Earth’s roundness makes maritime travel possible.

The creator is happy with the result: “If you have homework. It’s done immediately.”

The video has been viewed at least 1.8 million times as of December 26, 2022. There are dozens of other videos just like this, in different languages, with students thanking the creator for the tip.

Knowing ChatGPT

ChatGPT is the internet’s newest chatbot darling. 

An AI chatbot, ChatGPT can write stories and essays, solve math problems, and even write code – all of which are useful for students to cheat their way through school.

It was developed by OpenAI, a research company with the mission “to ensure that artificial general intelligence benefits all of humanity.”

Made public on November 30, it has become different things to different people – a storyteller, a psychologist, an email template generator, a friend, and for many students, a ghostwriter.

The release is so recent, and the concept of AI so unfamiliar to some teachers, that schools are still scrambling to find ways to spot and stop cheating.

How is it used to cheat?

ChatGPT is a language model – an AI system trained on massive amounts of text to learn the logical sequencing of words, then refined with human feedback.

Researchers fed ChatGPT with text written as recently as 2021, making almost any subject up to that year a topic that ChatGPT can generate.

The data “fed” to ChatGPT included copious amounts of text from different languages, including their varying levels of formality. It can respond to a prompt as specific as “Write an essay about Twitter with a lot of slang.” It can also write in Shakespearean.

ChatGPT learned how humans used language, was powered with the vocabulary of almost all disciplines known to man, and then programmed to respond to anybody’s request.

ChatGPT is not just a writing companion like Wikipedia, Google, or YouTube. It is a writer in itself.

Why stopping AI-assisted cheating is tricky

There is no silver bullet yet.

Various websites have been created to gauge how AI-sounding a body of text is, but none has so far been able to categorically detect AI-generated text.

As of December 2022 – still fairly close to ChatGPT’s public release – researchers were still exploring better ways to detect if an essay was AI-made, the MIT Technology Review wrote.

To complement the so-far insufficient detectors, researchers have advised teachers to also train themselves in spotting AI-generated works. 

They taught AI giveaways such as the excessive use of “the” and the absence of typos – red flags that cheating students could easily erase through deliberate deletions and calculated misspellings.

ChatGPT’s weaknesses

One limitation that teachers can use against cheating students is ChatGPT’s knowledge cutoff: it learned only from texts written up to 2021.

This means writing prompts that concern 2022 and the years after it (i.e., the future) are no-go topics for ChatGPT.

Asking it to do so generates a response explaining that it is only a language model and cannot learn about events on its own beyond what it has been taught.

In addition, because ChatGPT is a language model that tends to generate the same logical response, teachers can simply ask it to answer their class-assigned prompt, then use the AI’s response as a guide in searching for similarities with their students’ work.

We all know, of course, that this isn’t enough to give teachers peace of mind. – Rappler.com


Rambo Talabong


AI: Artificial Intelligence | Academic Integrity

Over the last decade, AI’s development has been consistently on the rise, leading to increased awareness and usage of AI systems.

“We know everyone is using it,” half-jokes David Imhoof, professor of history at Susquehanna, “not because we’re ‘catching’ them but because we know everyone is using it.”

At Susquehanna’s Break Through career networking conference in 2024, nearly every student at the AI in the Workplace panel raised their hand when asked if they’ve ever used an AI platform.

The Pew Research Center asked U.S. teens ages 13 to 17 about their awareness and use of AI (November 2023). The organization found that 67% are familiar with ChatGPT, arguably the most well-known generative AI platform. Nineteen percent of those teens said they have used ChatGPT to help with their schoolwork.

Of the teens who have heard of ChatGPT, most (69%) say it’s acceptable to use the platform to research new things. The perception of acceptability declines when it comes to solving math problems (39%) or writing an essay (20%).

AI’s application in the classroom is fraught with ethical issues more complicated than just saying, “Alexa” or “Hey Siri.”

Susquehanna University’s Center for Teaching and Learning has tackled this topic head-on with a series of professional development sessions aimed at educating faculty on the mechanics of AI and how they can manage the use of it in their classrooms.

Susquehanna does not yet have a generalized, university-wide policy regarding the use of AI. Instead, Nabeel Siddiqui, assistant professor of digital media and director of the Center for Teaching and Learning, has encouraged faculty members to tailor their own policies to their classrooms. For Amanda Lenig ’07, department chair and associate professor of graphic design, that means nurturing a culture of transparency.

“It’s part of the industry now, so I believe our job as professors is to teach students to be discerning in how they choose to use AI and to be accountable for that choice,” she says. AI can be an advantage in the field of graphic design. As Lenig explains, what once could have taken hours or days — let’s say creating a cardboard sword to be used in an advertising campaign for the television series Storage Wars, an example from one of her assignments — can now be done in a matter of minutes through the AI tool in Adobe Firefly.

The ethics come into play, Lenig says, at the heart of the assignment.

“If the assignment was to create a custom or hand-done illustration, then using AI to create that illustration would be unethical,” Lenig reasons. “If the assignment was to create an ad campaign concept and execute that concept visually where stock photography could have been a method, then using AI would allow the student to create the perfect image for their campaign in a much quicker turnaround time.”

Lenig’s approach is one that is shared by others across the sciences and humanities. Siddiqui allows his students to use AI — up to a point. If he suspects a student is relying too heavily on AI, he will consult with the student about it.

What Siddiqui, in his position with the Center for Teaching and Learning, does not encourage is the default use of AI detectors, which typically search for the repetition of words as a sign that a text was AI-generated. This is because AI detectors can be problematic, he said.

According to a 2023 article published in the International Journal for Educational Integrity, an evaluation of 14 AI-detection tools found them neither accurate nor reliable (all scored below 80% accuracy, and only five scored above 70%). Studies have also shown AI detectors to be biased against nonnative English speakers.

“The reasons a student doesn’t cheat isn’t because they didn’t have access; it’s because they found it ethically problematic,” Siddiqui says. “When a student does make the decision to violate academic integrity policies, there are larger issues that are occurring, in which case it is even more important to be able to talk to that student to determine what is going on.”

Instead of relying on detectors or banning the use of AI altogether, some faculty members are integrating AI into their assignments. During the pandemic, Mike Ozlanski ’05, department head and Allen C. Tressler associate professor of accounting in the Sigmund Weis School of Business, migrated his tests and quizzes to an online setting out of necessity. He has since moved back to the “old-fashioned” way of doing things — in class with a pencil and paper.

“I did this because ChatGPT (in January 2023) earned, on average, a passing grade on these assessments, so I needed a way to assess how well my students — not AI — know accounting concepts,” he says. “I’ve also received informal feedback from students that many prefer taking paper-based assessments.”

However, he hasn’t altogether abandoned ChatGPT in his classes.

“I tell students they can use the tool to help them troubleshoot homework problems with the expectation they can still successfully navigate quizzes and exams,” he explains. “I also highlight that ChatGPT can create multiple-choice and true-false questions about course topics. So, they could use ChatGPT to help them prepare for these assessments.”

In another course, Ozlanski shares copies of ChatGPT output related to course projects and asks his students to critique them.

“We discuss the strengths and weaknesses of the AI output. Then, it is their job to ensure their analysis is better than the chat,” he says. “ChatGPT could be a starting point for their analysis, but they are ultimately responsible for the quality of their submissions, including accurate citations from credible sources.”

Ozlanski’s students must also acknowledge in their papers if they used AI as part of their analysis.

Anusha Veluswamy, visiting assistant professor of mathematical science, has had the students in her 400-level artificial intelligence course predict incidences of gestational diabetes by running an AI statistical analysis on provided data sets. She also uses AI-assisted grading.

“I load my answer key into the AI platform and first submit a test exam to confirm accuracy,” Veluswamy says. “The platform links directly to Canvas so students can easily submit their exams through a platform they are already familiar with.”

While not necessarily “AI-proofing” his assignments, Imhoof has always designed them in a way that makes them difficult to complete via AI.

“I do a lot of very narrowly focused assignments, so if students follow the assignment, it’s not easy for them to type something into ChatGPT and just get an answer,” Imhoof adds. “For example, I may ask my students to use specific documents to analyze an assigned topic because I’m less interested in their ability to gather information than I am in their ability to provide insight.” 

Imhoof has also brought AI into the classroom through his Europe, Money and the World course. In it, students use ChatGPT to help them consider certain parts of a paper, but not write it for them. They also learn how important it is to submit the most appropriate prompt to a generative AI platform to receive the information they seek.

“We explored different ways ChatGPT could explain the process of decolonization — like a graduate student would, like a 15-year-old would, and like a stand-up comic would,” Imhoof explains. “Needless to say, they especially liked that last one.”

At Break Through, Joseph Morante ’21, a data analyst with Bloomberg, and Robert Masters ’20, a solutions analyst with Deloitte, spoke with students about the use of AI in the workplace.

Morante highlighted the various misconceptions surrounding AI, particularly the fear of job displacement. He and Masters pointed to various career pathways they believe will be created or expanded with the growth of AI, from coding to software design to prompt engineering.

“With any advancing technology there is a fear to it, yes, and some jobs may go, but these advancements in technology are creating more opportunities for jobs and more skills to be learned,” Morante says.

As AI platforms multiply and become more sophisticated, higher education will adapt as it has in the past to computers, the internet and smartphones. What educators like Imhoof, Lenig, Siddiqui and Veluswamy are looking forward to is using AI to instill in students what they have always sought — the ability to think creatively and critically to analyze issues and make effective decisions.

“When we teach people to be graphic designers, we’re teaching them to be critical thinkers and decision makers,” Lenig emphasizes. “While AI is certainly another tool in a student’s tool chest, AI doesn’t change what has always been central to our mission as educators.”

“Rather than reacting to generative AI like ChatGPT as a threat, instructors need to realize that our students will be working in a world that will feature AI in most jobs. We should, therefore, teach students how to use this technology effectively to enhance their critical thinking skills,” Imhoof says. “We have a unique opportunity to demonstrate how a liberal arts school like Susquehanna is the perfect place to figure out how to use AI as an extension of our skills, not as a replacement for them.” 

Get answers. Find inspiration. Be more productive.

Free to use. Easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming, and more.

Writes, brainstorms, edits, and explores ideas with you

A conversation between a user and ChatGPT on an interface about rewriting an email to appear friendly and professional.

Summarize meetings. Find new insights. Increase productivity.

A conversation between a user and ChatGPT on an interface about summarizing meeting notes.

Generate and debug code. Automate repetitive tasks. Learn new APIs.

A conversation between a user and ChatGPT on an interface about creating CSS with specific paramaters.

Learn something new. Dive into a hobby. Answer complex questions.

Explore more features in ChatGPT

Type, talk, and use it your way.

With ChatGPT, you can type or start a voice conversation by tapping the headphone icon in the mobile app. 

Browse the web

ChatGPT can answer your questions using its vast knowledge and with information from the web.

Analyze data and create charts

Upload a file and ask ChatGPT to help analyze data, summarize information or create a chart. 

Talk about an image

Take or upload an image and ask ChatGPT about it.

Customize ChatGPT for work, daily tasks or inspiration with GPTs

Explore the GPT Store and see what others have made. ChatGPT Plus users can also create their own custom GPTs.

Create images

ChatGPT Plus users can ask ChatGPT to create images using a simple sentence or even a detailed paragraph.

Apple & ChatGPT

At WWDC in June 2024, we announced a partnership with Apple to integrate ChatGPT into experiences within iOS, iPadOS, and macOS.

Get started with ChatGPT today

Free:

  • Assistance with writing, problem solving and more
  • Access to GPT-3.5
  • Limited access to GPT-4o
  • Limited access to advanced data analysis, file uploads, vision, web browsing, and custom GPTs

Plus ($20 / month):

  • Early access to new features
  • Access to GPT-4, GPT-4o, GPT-3.5
  • Up to 5x more messages for GPT-4o
  • Access to advanced data analysis, file uploads, vision, and web browsing
  • DALL·E image generation
  • Create and use custom GPTs

Join hundreds of millions of users and try ChatGPT today.

ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies .

That growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory. And its latest partnership with Apple for its upcoming generative AI offering, Apple Intelligence, has given the company another significant bump in the AI race.

2024 also saw the release of GPT-4o, OpenAI’s new flagship omni model for ChatGPT. GPT-4o is now the default free model, complete with voice and vision capabilities. But after demoing GPT-4o, OpenAI paused one of its voices , Sky, after allegations that it was mimicking Scarlett Johansson’s voice in “Her.”

OpenAI is facing internal drama, including the sizable exit of co-founder and longtime chief scientist Ilya Sutskever as the company dissolved its Superalignment team. OpenAI is also facing a lawsuit from Alden Global Capital-owned newspapers , including the New York Daily News and the Chicago Tribune, for alleged copyright infringement, following a similar suit filed by The New York Times last year.

Here’s a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. And if you have any other questions, check out our ChatGPT FAQ here.

Timeline of the most recent ChatGPT updates

Apple brings ChatGPT to its apps, including Siri

Apple announced at WWDC 2024 that it is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems. The ChatGPT integrations, powered by GPT-4o, will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, and will be free without the need to create a ChatGPT or OpenAI account. Features exclusive to paying ChatGPT users will also be available through Apple devices .

Apple is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems #WWDC24 Read more: https://t.co/0NJipSNJoS pic.twitter.com/EjQdPBuyy4 — TechCrunch (@TechCrunch) June 10, 2024

House Oversight subcommittee invites Scarlett Johansson to testify about ‘Sky’ controversy

Scarlett Johansson has been invited to testify about the controversy surrounding OpenAI’s Sky voice at a hearing for the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. In a letter, Rep. Nancy Mace said Johansson’s testimony could “provide a platform” for concerns around deepfakes.

ChatGPT experiences two outages in a single day

ChatGPT was down twice in one day: one multi-hour outage in the early hours of the morning Tuesday and another outage later in the day that is still ongoing. Anthropic’s Claude and Perplexity also experienced some issues.

You're not alone, ChatGPT is down once again. pic.twitter.com/Ydk2vNOOK6 — TechCrunch (@TechCrunch) June 4, 2024

The Atlantic and Vox Media ink content deals with OpenAI

The Atlantic and Vox Media have announced licensing and product partnerships with OpenAI . Both agreements allow OpenAI to use the publishers’ current content to generate responses in ChatGPT, which will feature citations to relevant articles. Vox Media says it will use OpenAI’s technology to build “audience-facing and internal applications,” while The Atlantic will build a new experimental product called Atlantic Labs .

I am delighted that @theatlantic now has a strategic content & product partnership with @openai . Our stories will be discoverable in their new products and we'll be working with them to figure out new ways that AI can help serious, independent media : https://t.co/nfSVXW9KpB — nxthompson (@nxthompson) May 29, 2024

OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier

OpenAI announced a new deal with management consulting giant PwC . The company will become OpenAI’s biggest customer to date, covering 100,000 users, and will become OpenAI’s first partner for selling its enterprise offerings to other businesses.

OpenAI says it is training its GPT-4 successor

OpenAI announced in a blog post that it has recently begun training its next flagship model to succeed GPT-4. The news came in an announcement of its new safety and security committee, which is responsible for informing safety and security decisions across OpenAI’s products.

Former OpenAI director claims the board found out about ChatGPT on Twitter

On The TED AI Show podcast, former OpenAI board member Helen Toner revealed that the board did not know about ChatGPT until its launch in November 2022. Toner also said that Sam Altman gave the board inaccurate information about the safety processes the company had in place and that he didn’t disclose his involvement in the OpenAI Startup Fund.

Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November. Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K — Helen Toner (@hlntnr) May 28, 2024

ChatGPT’s mobile app revenue saw biggest spike yet following GPT-4o launch

The launch of GPT-4o has driven the company’s biggest-ever spike in revenue on mobile , despite the model being freely available on the web. Mobile users are being pushed to upgrade to its $19.99 monthly subscription, ChatGPT Plus, if they want to experiment with OpenAI’s most recent launch.

OpenAI to remove ChatGPT’s Scarlett Johansson-like voice

After demoing its new GPT-4o model last week, OpenAI announced it is pausing one of its voices , Sky, after users found that it sounded similar to Scarlett Johansson in “Her.”

OpenAI explained in a blog post that Sky’s voice is “not an imitation” of the actress and that AI voices should not intentionally mimic the voice of a celebrity. The blog post went on to explain how the company chose its voices: Breeze, Cove, Ember, Juniper and Sky.

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them. Read more about how we chose these voices: https://t.co/R8wwZjU36L — OpenAI (@OpenAI) May 20, 2024

ChatGPT lets you add files from Google Drive and Microsoft OneDrive

OpenAI announced new updates for easier data analysis within ChatGPT . Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations. The company says these improvements will be added to GPT-4o in the coming weeks.

We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks. https://t.co/Fu2bgMChXt pic.twitter.com/M9AHLx5BKr — OpenAI (@OpenAI) May 16, 2024

OpenAI inks deal to train AI on Reddit data

OpenAI announced a partnership with Reddit that will give the company access to “real-time, structured and unique content” from the social network. Content from Reddit will be incorporated into ChatGPT, and the companies will work together to bring new AI-powered features to Reddit users and moderators.

We’re partnering with Reddit to bring its content to ChatGPT and new products: https://t.co/xHgBZ8ptOE — OpenAI (@OpenAI) May 16, 2024

OpenAI debuts GPT-4o “omni” model now powering ChatGPT

OpenAI’s spring update event saw the reveal of its new omni model, GPT-4o, which has a black hole-like interface , as well as voice and vision capabilities that feel eerily like something out of “Her.” GPT-4o is set to roll out “iteratively” across its developer and consumer-facing products over the next few weeks.

OpenAI demos real-time language translation with its latest GPT-4o model. pic.twitter.com/pXtHQ9mKGc — TechCrunch (@TechCrunch) May 13, 2024

OpenAI to build a tool that lets content creators opt out of AI training

The company announced it’s building a tool, Media Manager, that will allow creators to better control how their content is being used to train generative AI models — and give them an option to opt out. The goal is to have the new tool in place and ready to use by 2025.

OpenAI explores allowing AI porn

In a new peek behind the curtain of its AI’s secret instructions , OpenAI also released a new NSFW policy . Though it’s intended to start a conversation about how it might allow explicit images and text in its AI products, it raises questions about whether OpenAI — or any generative AI vendor — can be trusted to handle sensitive content ethically.

OpenAI and Stack Overflow announce partnership

In a new partnership, OpenAI will get access to developer platform Stack Overflow’s API and will get feedback from developers to improve the performance of its AI models. In return, OpenAI will include attributions to Stack Overflow in ChatGPT. However, the deal did not sit well with some Stack Overflow users, leading some to sabotage their answers in protest.

U.S. newspapers file copyright lawsuit against OpenAI and Microsoft

Alden Global Capital-owned newspapers, including the New York Daily News, the Chicago Tribune, and the Denver Post, are suing OpenAI and Microsoft for copyright infringement. The lawsuit alleges that the companies stole millions of copyrighted articles “without permission and without payment” to bolster ChatGPT and Copilot.

OpenAI inks content licensing deal with Financial Times

OpenAI has partnered with another news publisher in Europe, London’s Financial Times , that the company will be paying for content access. “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese

OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands.

Sam Altman pitches ChatGPT Enterprise to Fortune 500 companies

According to Reuters, OpenAI’s Sam Altman hosted hundreds of executives from Fortune 500 companies across several cities in April, pitching versions of its AI services intended for corporate use.

OpenAI releases “more direct, less verbose” version of GPT-4 Turbo

Premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo . The new model brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base.

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding. Source: https://t.co/fjoXDCOnPr pic.twitter.com/I4fg4aDq1T — OpenAI (@OpenAI) April 12, 2024

ChatGPT no longer requires an account — but there’s a catch

You can now use ChatGPT without signing up for an account , but it won’t be quite the same experience. You won’t be able to save or share chats, use custom instructions, or other features associated with a persistent account. This version of ChatGPT will have “slightly more restrictive content policies,” according to OpenAI. When TechCrunch asked for more details, however, the response was unclear:

“The signed out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a signed out experience,” a spokesperson said.

OpenAI’s chatbot store is filling up with spam

TechCrunch found that OpenAI’s GPT Store is flooded with bizarre, potentially copyright-infringing GPTs. A cursory search pulls up GPTs that claim to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services and advertise themselves as being able to bypass AI content detection tools.

The New York Times responds to OpenAI’s claims that it “hacked” ChatGPT for its copyright lawsuit

In a court filing opposing OpenAI’s motion to dismiss The New York Times’ lawsuit alleging copyright infringement, the newspaper asserted that “OpenAI’s attention-grabbing claim that The Times ‘hacked’ its products is as irrelevant as it is false.” The New York Times also claimed that some users of ChatGPT used the tool to bypass its paywalls.

OpenAI VP doesn’t say whether artists should be paid for training data

At a SXSW 2024 panel, Peter Deng, OpenAI’s VP of consumer product, dodged a question on whether artists whose work was used to train generative AI models should be compensated. While OpenAI lets artists “opt out” of and remove their work from the datasets that the company uses to train its image-generating models, some artists have described the tool as onerous.

A new report estimates that ChatGPT uses more than half a million kilowatt-hours of electricity per day

ChatGPT’s environmental impact appears to be massive. According to a report from The New Yorker, ChatGPT uses an estimated 17,000 times as much electricity as the average U.S. household to respond to roughly 200 million requests each day.

ChatGPT can now read its answers aloud

OpenAI released a new Read Aloud feature for the web version of ChatGPT as well as the iOS and Android apps. The feature allows ChatGPT to read its responses to queries in one of five voice options and can speak 37 languages, according to the company. Read aloud is available on both GPT-4 and GPT-3.5 models.

ChatGPT can now read responses to you. On iOS or Android, tap and hold the message and then tap “Read Aloud”. We’ve also started rolling on web – click the "Read Aloud" button below the message. pic.twitter.com/KevIkgAFbG — OpenAI (@OpenAI) March 4, 2024

OpenAI partners with Dublin City Council to use GPT-4 for tourism

As part of a new partnership with OpenAI, the Dublin City Council will use GPT-4 to craft personalized itineraries for travelers, including recommendations of unique and cultural destinations, in an effort to support tourism across Europe.

A law firm used ChatGPT to justify a six-figure bill for legal services

New York-based law firm Cuddy Law was criticized by a judge for using ChatGPT to calculate their hourly billing rate . The firm submitted a $113,500 bill to the court, which was then halved by District Judge Paul Engelmayer, who called the figure “well above” reasonable demands.

ChatGPT experienced a bizarre bug for several hours

ChatGPT users found that ChatGPT was giving nonsensical answers for several hours , prompting OpenAI to investigate the issue. Incidents varied from repetitive phrases to confusing and incorrect answers to queries. The issue was resolved by OpenAI the following morning.

Match Group announced deal with OpenAI with a press release co-written by ChatGPT

The dating app giant home to Tinder, Match and OkCupid announced an enterprise agreement with OpenAI in an enthusiastic press release written with the help of ChatGPT. The AI tech will be used to help employees with work-related tasks and comes as part of Match’s $20 million-plus bet on AI in 2024.

ChatGPT will now remember — and forget — things you tell it to

As part of a test, OpenAI began rolling out new “memory” controls for a small portion of ChatGPT free and paid users, with a broader rollout to follow. The controls let you tell ChatGPT explicitly to remember something, see what it remembers or turn off its memory altogether. Note that deleting a chat from chat history won’t erase ChatGPT’s or a custom GPT’s memories — you must delete the memory itself.

We’re testing ChatGPT's ability to remember things you discuss to make future chats more helpful. This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off. https://t.co/1Tv355oa7V pic.twitter.com/BsFinBSTbs — OpenAI (@OpenAI) February 13, 2024

OpenAI begins rolling out “Temporary Chat” feature

Initially limited to a small subset of free and subscription users, Temporary Chat lets you have a dialogue with a blank slate. With Temporary Chat, ChatGPT won’t be aware of previous conversations or access memories but will follow custom instructions if they’re enabled.

But OpenAI says it may keep a copy of Temporary Chat conversations for up to 30 days for “safety reasons.”

Use temporary chat for conversations in which you don’t want to use memory or appear in history. pic.twitter.com/H1U82zoXyC — OpenAI (@OpenAI) February 13, 2024

ChatGPT users can now invoke GPTs directly in chats

Paid users of ChatGPT can now bring GPTs into a conversation by typing “@” and selecting a GPT from the list. The chosen GPT will have an understanding of the full conversation, and different GPTs can be “tagged in” for different use cases and needs.

You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT. This allows you to add relevant GPTs with the full context of the conversation. pic.twitter.com/Pjn5uIy9NF — OpenAI (@OpenAI) January 30, 2024

ChatGPT is reportedly leaking usernames and passwords from users’ private conversations

Screenshots provided to Ars Technica found that ChatGPT is potentially leaking unpublished research papers, login credentials and private information from its users. An OpenAI representative told Ars Technica that the company was investigating the report.

ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

OpenAI has been told it’s suspected of violating European Union privacy law, following a multi-month investigation of ChatGPT by Italy’s data protection authority. Details of the draft findings haven’t been disclosed, but in a response, OpenAI said: “We want our AI to learn about the world, not about private individuals.”

OpenAI partners with Common Sense Media to collaborate on AI guidelines

In an effort to win the trust of parents and policymakers, OpenAI announced it’s partnering with Common Sense Media to collaborate on AI guidelines and education materials for parents, educators and young adults. The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy .

OpenAI responds to Congressional Black Caucus about lack of diversity on its board

After a letter from the Congressional Black Caucus questioned the lack of diversity in OpenAI’s board, the company responded . The response, signed by CEO Sam Altman and Chairman of the Board Bret Taylor, said building a complete and diverse board was one of the company’s top priorities and that it was working with an executive search firm to assist it in finding talent. 

OpenAI drops prices and fixes ‘lazy’ GPT-4 that refused to work

In a blog post, OpenAI announced price drops for GPT-3.5’s API, with input prices falling 50% to $0.0005 per thousand tokens and output prices falling 25% to $0.0015 per thousand tokens. GPT-4 Turbo also got a new preview model for API use, which includes a fix that aims to reduce the “laziness” users have experienced.

Expanding the platform for @OpenAIDevs : new generation of embedding models, updated GPT-4 Turbo, and lower pricing on GPT-3.5 Turbo. https://t.co/7wzCLwB1ax — OpenAI (@OpenAI) January 25, 2024

OpenAI bans developer of a bot impersonating a presidential candidate

OpenAI has suspended AI startup Delphi, which developed a bot impersonating Rep. Dean Phillips (D-Minn.) to help bolster his presidential campaign. The ban comes just weeks after OpenAI published a plan to combat election misinformation, which listed “chatbots impersonating candidates” as against its policy.

OpenAI announces partnership with Arizona State University

Beginning in February, Arizona State University will have full access to ChatGPT’s Enterprise tier, which the university plans to use to build a personalized AI tutor, develop AI avatars, bolster its prompt engineering course and more. It marks OpenAI’s first partnership with a higher education institution.

Winner of a literary prize reveals around 5% of her novel was written by ChatGPT

After receiving the prestigious Akutagawa Prize for her novel The Tokyo Tower of Sympathy, author Rie Kudan admitted that around 5% of the book quoted ChatGPT-generated sentences “verbatim.” Interestingly enough, the novel revolves around a futuristic world with a pervasive presence of AI.

Sam Altman teases video capabilities for ChatGPT and the release of GPT-5

In a conversation with Bill Gates on the Unconfuse Me podcast, Sam Altman confirmed an upcoming release of GPT-5 that will be “fully multimodal with speech, image, code, and video support.” Altman said users can expect to see GPT-5 drop sometime in 2024.

OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

OpenAI is forming a Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services. This comes as a part of OpenAI’s public program to award grants to fund experiments in setting up a “democratic process” for determining the rules AI systems follow.

OpenAI unveils plan to combat election misinformation

In a blog post, OpenAI announced users will not be allowed to build applications for political campaigning and lobbying until the company works out how effective their tools are for “personalized persuasion.”

Users will also be banned from creating chatbots that impersonate candidates or government institutions, and from using OpenAI tools to misrepresent the voting process or otherwise discourage voting.

The company is also testing out a tool that detects DALL-E generated images and will incorporate access to real-time news, with attribution, in ChatGPT.

Snapshot of how we’re preparing for 2024’s worldwide elections: • Working to prevent abuse, including misleading deepfakes • Providing transparency on AI-generated content • Improving access to authoritative voting information https://t.co/qsysYy5l0L — OpenAI (@OpenAI) January 15, 2024

OpenAI changes policy to allow military applications

In an unannounced update to its usage policy , OpenAI removed language previously prohibiting the use of its products for the purposes of “military and warfare.” In an additional statement, OpenAI confirmed that the language was changed in order to accommodate military customers and projects that do not violate their ban on efforts to use their tools to “harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

ChatGPT subscription aimed at small teams debuts

Aptly called ChatGPT Team, the new plan provides a dedicated workspace for teams of up to 149 people using ChatGPT as well as admin tools for team management. In addition to gaining access to GPT-4, GPT-4 with Vision and DALL·E 3, ChatGPT Team lets teams build and share GPTs for their business needs.

OpenAI’s GPT store officially launches

After some back and forth over the last few months, OpenAI’s GPT Store is finally here . The feature lives in a new tab in the ChatGPT web client, and includes a range of GPTs developed both by OpenAI’s partners and the wider dev community.

To access the GPT Store, users must be subscribed to one of OpenAI’s premium ChatGPT plans — ChatGPT Plus, ChatGPT Enterprise or the newly launched ChatGPT Team.

the GPT store is live! https://t.co/AKg1mjlvo2 fun speculation last night about which GPTs will be doing the best by the end of today. — Sam Altman (@sama) January 10, 2024

Developing AI models would be “impossible” without copyrighted materials, OpenAI claims

Following a proposed ban on using news publications and books to train AI chatbots in the U.K., OpenAI submitted a plea to the House of Lords communications and digital committee. OpenAI argued that it would be “impossible” to train AI models without using copyrighted materials, and that they believe copyright law “does not forbid training.”

OpenAI claims The New York Times’ copyright lawsuit is without merit

OpenAI published a public response to The New York Times’s lawsuit against them and Microsoft for allegedly violating copyright law, claiming that the case is without merit.

In the response , OpenAI reiterates its view that training AI models using publicly available data from the web is fair use. It also makes the case that regurgitation is less likely to occur with training data from a single source and places the onus on users to “act responsibly.”

We build AI to empower people, including journalists. Our position on the @nytimes lawsuit: • Training is fair use, but we provide an opt-out • "Regurgitation" is a rare bug we're driving to zero • The New York Times is not telling the full story https://t.co/S6fSaDsfKb — OpenAI (@OpenAI) January 8, 2024

OpenAI’s app store for GPTs planned to launch next week

After being delayed in December , OpenAI plans to launch its GPT Store sometime in the coming week, according to an email viewed by TechCrunch. OpenAI says developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure their GPTs are compliant before they’re eligible for listing in the GPT Store. OpenAI’s update notably didn’t include any information on the expected monetization opportunities for developers listing their apps on the storefront.

GPT Store launching next week – OpenAI pic.twitter.com/I6mkZKtgZG — Manish Singh (@refsrc) January 4, 2024

OpenAI moves to shrink regulatory risk in EU around data privacy

In an email, OpenAI detailed an incoming update to its terms, including changing the OpenAI entity providing services to EEA and Swiss residents to OpenAI Ireland Limited. The move appears to be intended to shrink its regulatory risk in the European Union, where the company has been under scrutiny over ChatGPT’s impact on people’s privacy.

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI . The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o .

Can I use ChatGPT for free?

Yes. In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires signing in.

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are using the chatbot to automate text generation or quickly answer user questions.

What companies use ChatGPT?

Multiple enterprises use ChatGPT, although others limit the use of the AI-powered tool.

Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Looking Glass, a Brooklyn-based 3D display startup, uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, meaning they give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
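To make the distinction concrete, here is a minimal sketch of a rules-based chatbot of the kind described above (the keywords and responses are invented for illustration): it matches the user’s message against a fixed table and returns a canned reply, with no language model involved.

```python
# A minimal rules-based chatbot: canned responses keyed on keywords.
# Unlike an LLM-powered chatbot, it cannot answer anything outside its rules.

RULES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "refund": "To request a refund, reply with your order number.",
}

FALLBACK = "Sorry, I don't understand. Try asking about 'hours' or 'refund'."

def rules_based_reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK
```

An LLM-based system like ChatGPT replaces the lookup table with a model that generates a new response for each prompt, which is why it can handle questions its developers never anticipated.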

Can ChatGPT write essays?

Yes. ChatGPT can produce essays from short prompts, a capability that has fueled concerns about plagiarism in schools.

Can ChatGPT commit libel?

Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

OpenAI does not document a hard character limit for ChatGPT. However, users have noted that responses get cut off after around 500 words.
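One workaround users apply to long inputs is splitting the text into smaller pieces and submitting them one at a time. A minimal sketch (the 500-word threshold mirrors the user reports above and is not an official limit):

```python
def chunk_by_words(text: str, max_words: int = 500) -> list[str]:
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each chunk can then be sent as its own prompt, with a note asking the model to wait for the remaining parts before responding.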

Does ChatGPT have an API?

Yes. The ChatGPT API was released on March 1, 2023.
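As a rough sketch of what a call to that API involves, the snippet below assembles the payload shape used by OpenAI’s Chat Completions endpoint (the model name and message text are illustrative; actually sending the request requires OpenAI’s client library and an API key, which are omitted here):

```python
# Shape of a Chat Completions request (illustrative values only).
# Sending it for real would use OpenAI's Python client and a valid API key.

def build_chat_request(user_message: str, model: str = "gpt-4o") -> dict:
    """Assemble the JSON payload the Chat Completions endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarize this meeting in three bullet points.")
```

The `messages` list carries the conversation so far, with each entry tagged by role, which is how the API supports multi-turn chat.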

What are some sample everyday uses for ChatGPT?

Everyday examples include programming help, scripts, email replies, listicles, blog ideas, summarization and more.

What are some advanced uses for ChatGPT?

Advanced uses include debugging code, explaining programming languages and scientific concepts, and complex problem solving.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors such as Together , Google’s Gemini and Anthropic’s Claude , and developers are creating open source alternatives .

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”

In its privacy policy, the ChatGPT maker makes a passing acknowledgment of the objection requirements attached to relying on “legitimate interest” (LI), pointing users to more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot Clyde; shortly afterward, two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported that the publication was publishing articles, under a mysterious byline, that were completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

There have also been cases of ChatGPT falsely accusing individuals of crimes.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Only unreliably. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
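One rough way to spot that kind of regurgitation is to measure how many of a passage’s word n-grams also appear in a suspected source. The function below is a minimal sketch of that idea, not a plagiarism detector; real tools use far more robust matching.

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, source, n=3):
    """Fraction of the candidate's word n-grams that also appear in source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

ratio = overlap_ratio(
    "the quick brown fox jumps over the lazy dog",
    "a quick brown fox jumps over a sleeping dog",
)
```

A high ratio suggests the generated text closely tracks the source; a low ratio rules little out, since paraphrased regurgitation slips past n-gram matching entirely.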

