Artificial intelligence startup OpenAI on Tuesday unveiled a tool designed to determine whether text is human-generated or computer-written.
The release comes two months after OpenAI drew public attention when it introduced ChatGPT, a chatbot that produces text that can appear to be written by a human in response to a human query. Following that wave of attention, Microsoft announced last week a multibillion-dollar investment in OpenAI and said it would incorporate the startup's AI models into its consumer and business products.
Schools have been quick to restrict the use of ChatGPT due to concerns that the software could harm learning. Sam Altman, CEO of OpenAI, said education has changed in the past with the advent of technology like calculators, but he also said the company may have ways to help teachers recognize AI-written text.
OpenAI's new tool can make mistakes and is a work in progress, the company's Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI would like feedback on the classifier from parents and teachers.
"In our evaluations of a 'problem set' of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as 'probably AI-written,' while incorrectly labeling human-written text as AI-written 9% of the time (false positives)," OpenAI staff wrote.
This isn't the first attempt to detect machine-generated text. Princeton University student Edward Tian announced a tool called GPTZero earlier this month, noting on the tool's website that it was made for educators. OpenAI itself released a detector in 2019, alongside a large language model, or LLM, less sophisticated than the one underlying ChatGPT. The staff wrote that the new version is better prepared to handle text from the latest AI systems.
The new tool cannot evaluate input shorter than 1,000 characters, and OpenAI does not recommend using it for languages other than English. In addition, AI-written text can be edited slightly so that the classifier no longer identifies it as likely machine-generated, the staff wrote.
Back in 2019, OpenAI made clear that identifying synthetic text is not an easy task, and the company intends to keep working on the problem.
"Our work on AI-generated text recognition will continue, and we hope to share improved techniques in the future," wrote Hendrik Kirchner, Ahmad, Aaronson, and Leike.
WATCH: China’s Baidu is developing an AI-powered chatbot to compete with OpenAI, report says