
AI models spit out photos of real people and copyrighted images

Stable Diffusion is open source, meaning anyone can analyze and research it. Imagen is closed, but Google has given researchers access. Singh says this work is a great example of how important it is to give researchers access to these models for analysis, and he argues that companies should be just as transparent about other AI models, such as OpenAI’s ChatGPT.

However, while the results are impressive, they come with some caveats. The images the researchers were able to extract either appeared many times in the training data or were highly unusual compared with other images in the data set, says Florian Tramèr, assistant professor of computer science at ETH Zürich, who was part of the team.

People who look unusual or have unusual names are at greater risk of being memorized, Tramèr says.

The researchers were able to extract only a relatively small number of exact copies of individuals’ photos from the AI model: just one in a million of the images they generated was a copy, according to Webster.

But it’s still worrisome, Tramèr says: “I really hope no one looks at these results and says, ‘Oh, actually, those numbers aren’t that bad if it’s only one in a million.’”

“The fact that they are greater than zero is important,” he adds.

