Last week I published a story about new tools developed by researchers at the startup Hugging Face and the University of Leipzig that allow people to see what inherent biases AI models have about different genders and ethnicities.

Even though I’ve written a lot about how our biases find their way into AI models, it was still striking to see just how pale, male, and stale the humans these models generate are. That was especially true of DALL-E 2, which produced white men 97% of the time when given prompts such as “CEO” or “director.”

And the problem of bias runs even deeper than those images suggest, into the wider world being built with artificial intelligence. These models are made by American companies and trained on North American data, so when they are asked to generate even mundane everyday objects, from doors to houses, they create objects that look American, says Federico Bianchi, a researcher at Stanford University.

As the world becomes increasingly filled with AI-generated images, we will mostly see images that reflect America’s biases, culture, and values. Who knew AI could become a major tool of American soft power?

So how do we solve these problems? Much work has gone into correcting biases in the datasets on which AI models are trained. But two recent research papers offer interesting new approaches.

What if instead of making the training data less biased, you could just ask the model to give you less biased answers?

A team of researchers from the Technical University of Darmstadt, Germany, and the AI startup Hugging Face has developed a tool called Fair Diffusion that makes it easier to tune AI models to produce the kinds of images you want. For example, you can generate stock photos of CEOs in various settings and then use Fair Diffusion to swap out the white men in the images for women or people of other ethnicities.

As the Hugging Face tools show, AI models that generate images from the image-text pairs in their training data come with very strong default biases about occupation, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which lets users steer how the AI system generates images of people and edit the results.

The artificial intelligence system stays very close to the original image, says Kristian Kersting, professor of computer science at TU Darmstadt, who took part in the work.
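
To make the idea concrete, here is a minimal sketch of semantic guidance using the SemanticStableDiffusionPipeline that ships with Hugging Face’s diffusers library, which implements the Darmstadt group’s technique. The base model checkpoint, prompts, and guidance values below are illustrative assumptions, not the researchers’ exact settings.

```python
# A minimal sketch of semantic guidance with diffusers.
# Checkpoint, prompts, and guidance values are illustrative assumptions.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a photo of a CEO in an office",
    num_images_per_prompt=1,
    guidance_scale=7.0,
    # Steer the generation away from one concept and toward another,
    # without changing the original prompt itself.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],  # suppress the first, amplify the second
    edit_guidance_scale=[4.0, 4.0],  # strength of each semantic edit
    edit_warmup_steps=[10, 10],      # diffusion steps before guidance kicks in
    edit_threshold=[0.95, 0.95],     # restrict edits to the relevant image regions
    edit_momentum_scale=0.3,
    edit_mom_beta=0.6,
)
out.images[0].save("ceo.png")
```

Flipping the two reverse_editing_direction flags pushes the edit the other way, and because the guidance operates on top of the unchanged prompt, the rest of the scene stays largely intact, which matches Kersting’s point about the output remaining close to the original image.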
