According to a report by the Wall Street Journal, OpenAI has had a tool for watermarking ChatGPT-generated text ready for about a year, one that can detect essays written by ChatGPT with a high degree of accuracy.
However, the company is still debating whether to release it. On one hand, releasing the tool seems like the most responsible thing to do; on the other, it could hurt the user base.
Yesterday, an update reported by TechCrunch revealed OpenAI’s stance: “Our teams have developed a text watermarking method that we continue to consider as we research alternatives.”
The company says its detection method is “99.9% effective” and resistant to “tampering, such as paraphrasing,” though rewording the text with another program could still evade it. OpenAI is also concerned that watermarking could stigmatize the use of AI tools.
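OpenAI has not published how its method works, but text watermarking is often illustrated with a statistical “green-list” scheme: a keyed hash of the previous token pseudorandomly splits the vocabulary in half, generation is biased toward the “green” half, and a detector simply counts how many tokens land there. The sketch below is a toy illustration of that general idea, not OpenAI’s actual algorithm; the vocabulary, key, and always-pick-green generation are assumptions made for brevity.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)
KEY = "secret-watermark-key"              # hypothetical provider-held key

def green_list(prev_token: str) -> set:
    # Seed a PRNG from the key and the previous token; half the vocab is "green".
    seed = int(hashlib.sha256((KEY + prev_token).encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n: int, start: str = "tok0") -> list:
    # Toy watermarked "generation": always pick a green token.
    # (A real model would merely bias its logits toward the green list.)
    tokens = [start]
    for _ in range(n):
        greens = sorted(green_list(tokens[-1]))
        tokens.append(random.Random(len(tokens)).choice(greens))
    return tokens

def detect(tokens: list) -> float:
    # Fraction of tokens in the green list: ~0.5 for ordinary text,
    # close to 1.0 for watermarked text.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(1, len(tokens) - 1)
```

This also shows why paraphrasing weakens the signal: rewording replaces tokens without regard to the green list, dragging the detected fraction back toward the 0.5 baseline.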
Although the ability to detect AI-written material would be a blessing for teachers, it might upset some users. According to a survey, almost 30% of users said “they’d use the software less if watermarking was introduced.”
In its statement to TechCrunch, an OpenAI spokesperson said that the company is taking a deliberate approach to text watermarking due to “the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”
In recent updates, the company said it is still “in the early stages” of exploring embedded metadata. It is too early to tell how effective the approach will be, but because the metadata is cryptographically signed, it should produce no false positives.
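The “no false positives” claim follows from how cryptographic signing behaves: verification either matches exactly or fails, with no statistical gray zone. The sketch below illustrates this with an HMAC over the text and its metadata; the key, metadata format, and function names are assumptions for illustration, not OpenAI’s actual scheme.

```python
import hashlib
import hmac

# Hypothetical provider-held secret key (assumption for illustration).
SECRET_KEY = b"provider-held-secret"

def sign_metadata(text: str, metadata: str) -> str:
    """Produce an HMAC signature binding the metadata to the text."""
    payload = (metadata + "\n" + text).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(text: str, metadata: str, signature: str) -> bool:
    """Exact cryptographic check: any edit to text or metadata fails it."""
    expected = sign_metadata(text, metadata)
    return hmac.compare_digest(expected, signature)
```

The flip side of this exactness is fragility: unlike a statistical watermark, any edit to the text at all invalidates the signature, so the metadata only attests to unmodified output.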