No more automated texts? ChatGPT will have a watermark to identify its content; here's what to know

With readers increasingly unable to tell original texts apart from those produced by ChatGPT, OpenAI has decided to create a new solution.

The development of artificial intelligence has brought great benefits in many fields, but it also raises questions about the authenticity of the resulting content.

OpenAI, the developer of ChatGPT, is aware of these concerns and recently confirmed the existence of a tool capable of identifying texts generated by its language model.

This move is aimed at addressing plagiarism issues and ensuring transparency in the use of the technology. Check out how this watermarking tool works and why OpenAI is still reluctant to make it available to the public.

Automated texts generated by ChatGPT are easily recognizable by OpenAI. / Credit: @jeanedeoliveirafotografia / beneficiodoidoso.com.br

OpenAI confirms it has a tool to identify text generated using ChatGPT

OpenAI recently confirmed the existence of a watermarking tool that can identify texts generated by ChatGPT. The confirmation came after a report in The Wall Street Journal detailing the development of this technology.

OpenAI says the tool can detect texts generated by its language model with high accuracy and is primarily aimed at reducing plagiarism and improving transparency.

The tool is still in the evaluation phase, and there is no confirmation yet of an official launch. The company said this is just one of many solutions being researched to verify the origin of texts.

Implementing this watermark could reveal when someone has used ChatGPT to write or rewrite a text, helping to maintain the integrity of published content.

OpenAI said the technology could have a disproportionate impact on some groups, which contributes to its hesitation to launch the tool.


The watermark is one of several methods OpenAI is studying, alongside classifiers and metadata, as part of its broader research into text provenance.

The company is also concerned about the stigma and bias implications of this technology, especially for those who do not speak English as their primary language.

OpenAI believes that introducing a watermark could discourage the use of ChatGPT, especially among these groups, raising questions about how best to implement such a feature without harming users.


How does this watermark work?

The watermark, developed by OpenAI, works by modifying the way the language model predicts and selects the next words, creating a recognizable pattern without affecting the quality of the answers.
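OpenAI has not published the details of its scheme, but the general idea of nudging next-word selection toward a hidden, detectable pattern can be illustrated with a small sketch of the publicly described "green list" approach to watermarking. Everything below (the tiny vocabulary, the secret key, the boost value) is a hypothetical illustration, not OpenAI's actual method.

```python
# Minimal sketch of a "green list" watermark, assuming a toy vocabulary and
# a hypothetical secret key. NOT OpenAI's unpublished method: a seeded hash
# of the previous token splits the vocabulary, "green" tokens get a small
# score boost during generation, and a detector later counts how often
# green tokens appear in a text.

import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug", "fast"]
SECRET_KEY = "watermark-demo-key"  # hypothetical key, for illustration only

def green_tokens(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary as 'green' for this context."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))

def biased_choice(prev_token: str, scores: dict[str, float], boost: float = 2.0) -> str:
    """Pick the next token after nudging 'green' tokens upward."""
    greens = green_tokens(prev_token)
    adjusted = {tok: s + (boost if tok in greens else 0.0) for tok, s in scores.items()}
    return max(adjusted, key=adjusted.get)

def green_fraction(tokens: list[str]) -> float:
    """Detector side: fraction of tokens that fall in their context's green list."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_tokens(prev))
    return hits / max(len(tokens) - 1, 1)
```

In a scheme like this, ordinary human writing lands in the green list roughly half the time by chance, while watermarked output lands there noticeably more often, which is what makes statistical detection possible without visibly changing the text.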

This method allows OpenAI to detect texts generated by ChatGPT with up to 99.9% accuracy, according to The Wall Street Journal.

The idea is that, by embedding this pattern, texts can be verified later, ensuring that AI-generated content can be traced back to ChatGPT.

Alongside this technique, OpenAI is testing several other approaches, including cryptographically signed metadata, which would prevent false positives.

This means the tool would never incorrectly flag a text as generated by ChatGPT when it was not actually produced by AI.
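OpenAI has not described how such metadata would be structured; purely as an illustration, here is a minimal sketch using a symmetric HMAC signature, where the key and example text are hypothetical. The point is that a signature only verifies when the provider really produced that exact text, so the check cannot yield false positives, only false negatives (for example, if the text is edited after generation).

```python
# Minimal sketch of cryptographically signed metadata, assuming a simple
# HMAC scheme for illustration; OpenAI has not disclosed its design.

import hmac
import hashlib

PROVIDER_KEY = b"provider-secret-key"  # hypothetical signing key

def sign_text(text: str) -> str:
    """Provider side: produce a signature tied to the exact output text."""
    return hmac.new(PROVIDER_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_text(text: str, signature: str) -> bool:
    """Verifier side: accept only if the signature matches this exact text."""
    return hmac.compare_digest(sign_text(text), signature)

generated = "This paragraph was produced by a language model."
tag = sign_text(generated)

print(verify_text(generated, tag))               # True: genuine, untouched output
print(verify_text(generated + " Edited.", tag))  # False: text no longer matches
```

In practice, a publicly checkable scheme would more likely rely on asymmetric signatures, so third parties could verify texts without holding the provider's secret key.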

Despite the advantages, the company also faces challenges, such as the possibility of circumventing the watermark by rewriting the text with other language models or translating it, which can distort the original mark.

OpenAI had previously released a detection tool, but with a very low accuracy of only 26%, which led to the feature being dropped.


Now, with this new approach, the company hopes to achieve a higher level of accuracy, making the identification of texts generated by ChatGPT more reliable and effective.


The company does not want to release the tool to users

Although the tool is technically ready and showing promising results, OpenAI is hesitant to release it to the public at this time.

The main concern is the negative impact this move could have on the use of ChatGPT, especially among those who use the tool to overcome language barriers.

The company fears that applying the watermark would stigmatize the use of AI and discourage non-English-speaking users from using ChatGPT.

Additionally, OpenAI admits that about 30% of survey participants said they would use ChatGPT less if a text-tagging system were implemented.

This raises questions about the tool's usefulness and user acceptance. The company continues to debate internally the best practices and methods to ensure the authenticity of texts without compromising the user experience or discouraging use of the technology.

