When WIRED asked me to cover this week’s newsletter, my first instinct was to ask ChatGPT, OpenAI’s viral chatbot, to see what it could come up with. That’s what I’ve been doing with emails, recipes, and LinkedIn posts all week. Productivity was down significantly, but sassy verses about Elon Musk were up 1,000 percent.

I asked the bot to write a Steven Levy-style column about me, but the results weren’t great. ChatGPT offered generic commentary on the promise and pitfalls of AI, but didn’t really capture Steven’s voice or say anything new. As I wrote last week, it was fluent but not entirely convincing. Still, it made me wonder: Would I have gotten away with it? And what systems, if any, can catch people using AI for things they really shouldn’t, whether that’s work emails or college essays?

To find out, I spoke with Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, who speaks eloquently about how to build transparency and accountability into algorithms. I asked her what that might look like for a system like ChatGPT.

Amit Katwala: ChatGPT can write everything from classical poetry to flashy marketing copy, but one of the big talking points this week has been whether it could help students cheat. Do you think you could tell if one of your students had used it to write an essay?

Sandra Wachter: It’s going to be a cat-and-mouse game. The technology may not yet be good enough to fool me, as someone who teaches law, but it may be good enough to convince somebody outside that field. I wonder whether the technology will improve over time to the point where it can fool me too. We may need technical tools to verify that what we’re looking at was made by a human, just as we have tools to detect deepfakes and edited photos.

Detecting that seems inherently harder with text than with faked images, because text has fewer artifacts and telltale signs. Perhaps any reliable solution would need to be built by the company that is generating the text in the first place.

You do need buy-in from whoever is creating the tool. But if I’m offering services to students, I might not be the type of company that would submit to that. And there may be a situation where, even if you do put watermarks in, they can be removed. Very tech-savvy groups will probably find a way. But there is an actual technical tool [built with OpenAI’s input] that lets you detect whether output is artificially created.

What would a harm-reduction version of ChatGPT look like?

A couple of things. First, I would really argue that whoever creates these tools should put watermarks in place. And perhaps the EU’s proposed AI Act could help, because it deals with transparency around bots, saying you should always be made aware when something isn’t real. But companies might not want to do this, and the watermarks might be removable. So it’s also about stimulating research into independent tools that examine AI output. And in education, we need to be more creative in how we assess students and how we write papers: what questions can we ask that are less easily faked? It should be a combination of technological and human oversight that helps us contain failures.
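For readers wondering what a watermark on text even means, the schemes discussed in the research literature work roughly like this: the generator quietly favors a pseudo-random “green list” of words, and a detector who knows the secret seed can test whether those words show up more often than chance would allow. The sketch below illustrates only the detection side of that general idea in Python; it is not OpenAI’s tool or any production scheme, and the seed, the green-list fraction, and the simple word splitting are all simplifying assumptions made for illustration.

```python
# Toy illustration of statistical watermark detection on text.
# NOT a real product: it shows the general "green-list" idea only.

import hashlib
import math

SECRET_SEED = "shared-secret"   # hypothetical key shared by generator and detector
GREEN_FRACTION = 0.5            # expected share of "green" words in unwatermarked text


def is_green(previous_word: str, word: str) -> bool:
    """Deterministically assign a word to the green list based on its context."""
    digest = hashlib.sha256(f"{SECRET_SEED}|{previous_word}|{word}".encode()).digest()
    return digest[0] / 255 < GREEN_FRACTION


def watermark_z_score(text: str) -> float:
    """Return a z-score: large positive values mean green words are
    over-represented, which suggests the text may carry the watermark."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    green = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    n = len(words) - 1
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (green - expected) / std


if __name__ == "__main__":
    sample = "this is an ordinary paragraph with no watermark in it at all"
    print(f"z-score: {watermark_z_score(sample):.2f}")  # near 0 for unwatermarked text
```

The point of the sketch is also Wachter’s point: the check only works if the generator cooperates by embedding the signal in the first place, and paraphrasing or rewording the output can wash it away.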
