
ChatGPT maker OpenAI mulls watermark tool plan to catch AI chatbot-powered essays

A growing pool of students is using ChatGPT to write assignments and essays and then passing them off as their own, however formulaic and insipid the tone. The chatbot can offer general answers to most questions posed by students

Mathures Paul, Calcutta | Published 07.08.24, 10:39 AM

PLAGIARISM WALL Sourced by the Telegraph

Teachers have been experiencing a meteor shower of chatbot-powered essays for a couple of years. OpenAI, the company behind ChatGPT, has a solution but is yet to decide whether to make it publicly available.

“Our teams have developed a text watermarking method that we continue to consider as we research alternatives,” said OpenAI in a statement. The solution is accurate against “localised tampering, such as paraphrasing” but “less robust against globalised tampering”, such as rewording the text with another generative model. Another form of tampering involves “asking the model to insert a special character in between every word and then deleting that character”.
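OpenAI has not published the mechanics of that last attack, but the idea can be sketched in a few lines of Python (function names here are hypothetical): the model generates the marker characters as part of its output, so any watermark statistics are computed over that padded sequence, and stripping the markers afterwards leaves an essay that reads normally but no longer matches what was watermarked.

```python
# Illustrative sketch of the "special character" tampering trick described
# above. All names are hypothetical; this is not OpenAI's implementation.

def insert_marker(text: str, marker: str = "@") -> str:
    # What the prompt asks the model to do: put a marker between every word.
    return f" {marker} ".join(text.split())

def strip_marker(text: str, marker: str = "@") -> str:
    # What the user does afterwards: delete every marker token.
    return " ".join(tok for tok in text.split() if tok != marker)

essay = "the cat sat on the mat"
laundered = strip_marker(insert_marker(essay))
print(laundered)  # -> the cat sat on the mat
```

The final text is identical to what the model would have written without markers, yet the watermarked sequence the model actually produced was different, which is why this kind of global tampering defeats the detector.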



Some are using the tool to check grammar after writing an essay, without understanding the changes being made. To counter the problem, teachers could be expected to enforce stricter standards when marking assignments. There are also benefits to tools like ChatGPT and Wolfram, such as a clearer, step-by-step understanding of calculus problems.

Any solution to detect artificial intelligence-powered written material can benefit teachers, who are generally trying to discourage students from using chatbots.

OpenAI has said watermarking is one of the many solutions, including classifiers and metadata, that it has looked into while conducting “extensive research on the area of text provenance”.

The watermark is not visible to the human eye but, when the text is run through an AI-detection tool, the tool can return a score indicating how likely it is that the text was created using ChatGPT.
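OpenAI has not disclosed how its watermark works, but published text-watermarking schemes from academic research bias the model's word choices towards a pseudorandomly chosen "green" subset of the vocabulary; the detector then measures how far the observed share of green words exceeds the roughly 50 per cent expected by chance. A minimal, purely illustrative sketch of that scoring idea (not OpenAI's actual method):

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandom 50/50 partition of words, seeded by the preceding word,
    # so the detector can recompute it without access to the model.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(words: list) -> float:
    # Watermarked output should score well above ~0.5; ordinary human
    # text should hover around 0.5, since the partition looks random.
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

A score close to 0.5 suggests unwatermarked text, while a score far above it suggests the generator was steered towards the green list; real detectors convert this into a statistical confidence rather than a hard verdict.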

OpenAI, it seems, has been ready with the solution for about a year but, according to The Wall Street Journal, the company is weighing its commitment to transparency against the risk of driving away users. An internal survey reportedly showed that people supported the idea of an AI detection tool by a margin of four to one.

Google is also fine-tuning a digital watermarking system called SynthID, which can mark digitally generated video as well as AI-generated text produced by Gemini. The tool has already been rolled out for AI-generated images. In Europe, the recent AI Act requires companies to develop AI-generated media in a way that makes it detectable.

OpenAI has said that it's “in the early stages” of exploring embedding metadata and that it's “too early” to know its effectiveness, but since the metadata is “cryptographically signed”, there are no false positives.

A survey conducted in March by the Centre for Democracy & Technology, a US technology policy nonprofit, found that 59 per cent of teachers were certain that one or more of their students had used generative AI for school purposes, and that 83 per cent of teachers had themselves used ChatGPT or another generative AI tool for personal or school use.

The same research has shown that 52 per cent of teachers agreed that generative AI has made them “more distrustful of whether their students’ work is actually theirs”.
