San Francisco – The makers of ChatGPT are trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect whether homework was written by a student or by artificial intelligence.
The new AI Text Classifier, announced by OpenAI on Tuesday, follows weeks of debate at schools and universities over concerns that ChatGPT's ability to write almost anything on command could fuel academic misconduct and hinder learning.
OpenAI cautions that its new tool, like others already available, is not foolproof. Methods for detecting AI-written text are "imperfect and sometimes wrong," said Jan Leike, head of OpenAI's alignment team, which is tasked with making its systems safer.
"So it shouldn't be relied upon solely when making decisions," Leike said.
Teens and college students were among the millions of people who began experimenting with ChatGPT after it launched as a free application on OpenAI's website on November 30. And while many have found creative and harmless ways to use it, the ease with which it could answer take-home test questions or help with other assignments sparked panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other large public school districts had begun blocking its use in classrooms and on school devices.
Seattle Public Schools initially blocked ChatGPT on all school devices in December, but has since made it accessible to educators who want to use it as a teaching tool, said district spokesman Tim Robinson.
“We can’t ignore that,” said Robinson.
The district is also discussing the possibility of expanding the use of ChatGPT into classrooms, so that teachers can use it to train students to be better critical thinkers and students can use the application as a "tutor" or to help generate new ideas while working on an assignment, Robinson said.
School districts across the country say the conversation around ChatGPT is evolving rapidly.
"The first reaction was, 'OMG, how are we going to stop all the cheating that will happen with ChatGPT,'" said Devin Page, a technology specialist at Calvert County Public Schools in Maryland. His district is now coming to recognize that "this is the future" and that blocking it is not the solution, he said.
"I think we would be naive if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we banned them and ourselves from using it," Page said. He believes districts like his own will eventually unblock ChatGPT, especially once the company's detection service is in place.
OpenAI highlighted the limitations of its detection tool in Tuesday's blog post, but said that in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuse of AI to imitate humans.
The longer the passage of text, the better the tool is at detecting whether it was written by an AI or a human. Paste in any text, such as a college admissions essay or a literary analysis of Ralph Ellison's "Invisible Man," and the tool will label it on a scale from "very unlikely" to "likely" AI-generated, with "unclear" in between.
But just like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings yet often confidently spits out falsehoods and nonsense, it is not easy to interpret how the tool arrives at its conclusions.
"We basically don't know what patterns it pays attention to, or how it works under the hood," Leike said. "There's really not much we can say at this point about how the classifier actually works."
Higher education institutions around the world have also begun debating the responsible use of AI technology. Sciences Po, one of France's most prestigious universities, banned its use last week and warned that anyone found to have surreptitiously used ChatGPT or other AI tools to produce written or oral work could be expelled from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to create new guidelines to help educators.
"Like many other technologies, it may be that some districts decide it's not appropriate for their classrooms," said Lama Ahmad, a policy researcher at OpenAI. "We don't really push them one way or another. We just want to give them the information they need to make the right decisions for themselves."
It is an unusually public role for the research-oriented San Francisco startup, which is now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
French Digital Economy Minister Jean-Noël Barrot recently met with OpenAI executives in California, including CEO Sam Altman, and spoke about the technology to an audience at the World Economic Forum in Davos, Switzerland, a week later. But the minister, a former professor at the Massachusetts Instituteute of Technology and the French business school HEC in Paris, said there are also difficult ethical questions that will need to be addressed.
"So if you're in a law school, there is room for concern, because it's clear that ChatGPT, among other tools, can deliver relatively impressive exams," he said. "If you're in a graduate-level economics program, you're fine, because ChatGPT will struggle to find or deliver what is expected at that level."
He said it will become increasingly important for users to understand the basics of how these systems work and to know what biases may exist.
O'Brien reported from Providence, Rhode Island. AP writer John Leicester contributed to this report from Paris.
Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.