OpenAI has confirmed that it is working on a text watermarking method, following a report by The Wall Street Journal. According to the US-based artificial intelligence company, its text watermarking method is accurate and “even effective against localized tampering, such as paraphrasing,” though “less robust against globalized tampering.” However, the company has kept the tool on hold to date over concerns that it could stigmatise the use of AI as a useful writing tool for non-native English speakers.
What is OpenAI’s watermarking method
OpenAI’s watermarking can be described as the process of adjusting the model’s predictions of which words and phrases come next, creating a pattern in the output that detection tools can recognise. Reportedly, the company feels that watermarking is the right thing to do, but also believes it could discourage people from using ChatGPT.
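OpenAI has not disclosed how its method works, but the general idea of steering next-word predictions to leave a detectable pattern can be illustrated with a simplified "green list" scheme from the research literature on text watermarking. In this toy sketch (all names and the tiny vocabulary are invented for illustration), each previous token seeds a pseudorandom split of the vocabulary, generation favours the "green" half, and a detector with the same key counts how often tokens land in their green list:

```python
import hashlib
import random

# Illustrative sketch only -- NOT OpenAI's actual method. It demonstrates the
# general "green list" idea from statistical text watermarking research: the
# previous token deterministically seeds a split of the vocabulary, and the
# generator favours the "green" half, creating a countable pattern.

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]

def green_list(prev_token: str, key: str = "secret") -> set:
    """Deterministically pick half the vocabulary as 'green' for this step."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n: int, key: str = "secret") -> list:
    """Toy generator that always picks a green token.

    A real language model would only *bias* its probabilities toward the
    green list, so ordinary text quality is largely preserved.
    """
    tokens = ["alpha"]
    rng = random.Random(0)
    for _ in range(n):
        greens = sorted(green_list(tokens[-1], key))
        tokens.append(rng.choice(greens))
    return tokens

def green_fraction(tokens: list, key: str = "secret") -> float:
    """Detector: fraction of tokens in their step's green list.

    Watermarked text scores near 1.0; unrelated text scores near 0.5,
    since each green list covers half the vocabulary by chance.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, key))
    return hits / (len(tokens) - 1)
```

This also illustrates why paraphrasing-style ("localized") tampering is survivable while wholesale rewriting is not: a detector needs only a statistically high green fraction, not every token intact.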
Watermarking can prove to be an effective way to determine whether a piece of content was written by AI, and according to reports the company has said that the watermarking does not affect the quality of content produced by ChatGPT.
OpenAI said that it is exploring embedding metadata, too. “For example, unlike watermarking, metadata is cryptographically signed, which means that there are no false positives. We expect this will be increasingly important as the volume of generated text increases. While text watermarking has a low false positive rate, applying it to large volumes of text would lead to a large number of total false positives,” said OpenAI in a blog post it updated on August 4.
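The “no false positives” property of signed metadata follows from how cryptographic verification works: a signature either checks out exactly or fails outright. A minimal sketch of that idea, using an HMAC over the text and its provenance record (the key, field names, and functions here are hypothetical, not OpenAI’s scheme):

```python
import hashlib
import hmac
import json

# Illustrative sketch only -- not OpenAI's implementation. It shows why
# cryptographically signed metadata cannot produce false positives:
# verification either succeeds exactly or fails, with no statistical grey zone.

KEY = b"provider-secret-key"  # hypothetical signing key held by the provider

def sign_metadata(text: str, model: str) -> dict:
    """Attach a provenance record plus an HMAC binding it to the text."""
    meta = {
        "model": model,
        "text_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    payload = json.dumps(meta, sort_keys=True).encode()
    meta["signature"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_metadata(text: str, meta: dict) -> bool:
    """Recompute the HMAC; any change to the text or metadata fails cleanly."""
    check = {k: v for k, v in meta.items() if k != "signature"}
    if check.get("text_sha256") != hashlib.sha256(text.encode()).hexdigest():
        return False
    payload = json.dumps(check, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta.get("signature", ""))
```

The trade-off OpenAI’s quote points at: a statistical watermark survives inside the text itself, while signed metadata is exact but only useful when the record travels with the text.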
OpenAI had shut down its previous AI text detector, AI Classifier, over its low accuracy rate. The new tool is expected to be different: watermarking would apply only to content generated by ChatGPT, embedding an invisible watermark within the written content that various detection tools could identify.
First Published: Aug 05 2024 | 11:06 AM IST