This tool is OpenAI’s response to the backlash it received from educators, journalists, and others for releasing ChatGPT without any way to identify the text it generated. However, the detector is still very much a work in progress, and it is highly unreliable: by OpenAI’s own account, it correctly identifies only 26% of AI-written text as “likely AI-written.”

While OpenAI clearly still has a lot of work to do to improve its tool, there is a limit to how well it can do. It is highly unlikely that we will ever have a tool that can recognize AI-generated text with 100% certainty. It is very hard to detect AI-generated text because the whole point of AI language models is to create fluent, human-like output, and the model mimics text written by humans, says Muhammad Abdul-Maghid, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to create detection methods that can keep up with the latest, most powerful models, adds Abdul-Maghid. New AI language models are more powerful and generate even more fluent language, quickly rendering existing detection tools obsolete.

OpenAI built its detector by creating an entirely new AI language model, similar to ChatGPT, that is specifically trained to detect outputs from models like it. Although details are scarce, the company apparently trained the model on examples of AI-generated text and examples of human-written text, and then asked it to tell the two apart. We asked for more information, but OpenAI did not respond.
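The general idea of training a detector on labeled examples can be sketched with a toy binary text classifier. This is purely illustrative: OpenAI’s classifier is a fine-tuned language model whose training details are not public, so the bag-of-words naive Bayes approach below is a stand-in chosen for simplicity, and all names here are the author’s own.

```python
# A toy AI-text detector: train a binary classifier on labeled examples
# of human-written and AI-generated text, then classify new passages.
# This uses bag-of-words naive Bayes with Laplace smoothing; it is a
# sketch of the general technique, not OpenAI's actual method.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {"human", "ai"}."""
    counts = {"human": Counter(), "ai": Counter()}
    totals = {"human": 0, "ai": 0}
    docs = Counter(label for _, label in examples)
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts["human"]) | set(counts["ai"])
    return counts, totals, docs, vocab

def classify(model, text):
    counts, totals, docs, vocab = model
    n_docs = sum(docs.values())
    scores = {}
    for label in ("human", "ai"):
        # log prior + sum of Laplace-smoothed log likelihoods
        score = math.log(docs[label] / n_docs)
        for tok in tokenize(text):
            score += math.log((counts[label][tok] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

A real detector faces exactly the limitation the article describes: as generated text becomes statistically closer to human text, the two classes overlap and no decision boundary separates them reliably.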

Last month, I wrote about another method for detecting AI-generated text: watermarking. Watermarks act as a kind of secret signal embedded in AI-generated text that allows computer programs to detect it as such.

Researchers at the University of Maryland have developed a way to watermark text generated by AI language models and have made it freely available. These watermarks would let us tell with near certainty when AI-generated text has been used.
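The Maryland scheme works roughly like this: at each step of generation, the vocabulary is split into a “green” and a “red” list, with the split re-seeded from the previous token, and the model is softly nudged toward green tokens. A detector then counts how many tokens landed on their green lists and checks whether that fraction is implausibly high for unwatermarked text. The sketch below, with illustrative function names and a 50/50 split that are this article’s assumptions rather than the researchers’ exact code, shows only the detection side:

```python
# Toy sketch of statistical watermark detection: count "green" tokens
# (determined by hashing the previous token) and compute a z-score
# against the null hypothesis that green tokens occur at rate gamma.
import hashlib
import math

def is_green(prev_token, token):
    # Deterministically assign roughly half the vocabulary to the green
    # list, keyed on the previous token (a stand-in for the seeded RNG
    # split used in the actual scheme).
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def detect(tokens, gamma=0.5):
    """Return the z-score of the green-token count; a large value
    (say, above 4) suggests the text carries the watermark."""
    hits = sum(is_green(prev, tok)
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    # z-score of the observed green fraction under the null rate gamma
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
```

The key property is that detection needs no access to the model itself, only to the hashing rule, which is why the method must be built into the chatbot at generation time.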

The problem is that this method requires AI companies to embed watermarks into their chatbots from the start. OpenAI is developing these systems, but has not yet implemented them in any of its products. Why the delay? One reason may be that it is not always desirable to have watermarks on AI-generated text.

Some of the most promising ways to integrate ChatGPT into products, such as a tool that helps people write emails or an advanced spell checker in a word processor, are not exactly cheating. But watermarking all AI-generated text would automatically flag those outputs and could lead to wrongful accusations.
