In China, the authorities recently banned the creation of deepfakes without the subject’s consent. With the Artificial Intelligence Act, Europeans want to add warning signs to indicate that people are interacting with fake or AI-generated images, audio or videos.
All of these regulations could affect how tech companies build, use, and sell AI technologies. Still, regulators must strike a tricky balance between protecting consumers and not stifling innovation — as tech lobbyists aren’t afraid to remind them.
AI is a rapidly evolving field, and the challenge will be to make sure the rules are precise enough to be effective, but not so specific that they become quickly obsolete. As with the EU’s efforts to regulate data protection, if the new laws are implemented correctly, next year could usher in a long-overdue era of artificial intelligence with greater respect for privacy and fairness.
— by Melissa Heikkilä
Big tech may lose control of basic AI research
Artificial intelligence startups are flexing their muscles
Big tech companies aren’t the only players at the forefront of AI; the open source revolution has begun to match, and sometimes surpass, what the wealthiest labs are doing.
In 2022, Hugging Face released BLOOM, the first community-built multilingual large language model. We also saw an explosion of innovation around Stable Diffusion, the open source image-generation model that competes with OpenAI’s DALL-E 2.