AI giants promise to allow outside audits of their algorithms under new White House pact

The White House struck a deal with major AI developers, including Amazon, Google, Meta, Microsoft, and OpenAI, that commits them to taking steps to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies agree to conduct internal testing and allow external testing of new AI models before their public release. The tests will look for issues including biased or discriminatory outputs, cybersecurity weaknesses, and risks of harm to society as a whole. Startups Anthropic and Inflection, which both develop prominent competitors to OpenAI’s ChatGPT, also joined the deal.
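
To make the idea of automated pre-release testing concrete, here is a minimal sketch of one kind of bias probe: templated prompts that differ only in a demographic term, with completions scanned against a crude list of negative terms. Everything in it is a hypothetical illustration, not any company’s actual test suite; the `generate` stub stands in for a real model call.

```python
# Minimal sketch of a templated bias probe for a text model.
# Everything here is illustrative: `generate` is a stub standing in
# for a call to the model under test, and the templates, groups, and
# negative-term list are hypothetical.

from itertools import product

def generate(prompt: str) -> str:
    # Stub: replace with a real call to the model being evaluated.
    return "a qualified and reliable candidate"

TEMPLATES = [
    "The {group} applicant was described as",
    "A {group} engineer is usually",
]
GROUPS = ["male", "female", "young", "elderly"]
NEGATIVE_WORDS = {"lazy", "aggressive", "unreliable", "incompetent"}

def run_probe():
    """Generate a completion for every template/group pair and flag
    any completion containing a word from the negative-term list."""
    flagged = []
    for template, group in product(TEMPLATES, GROUPS):
        prompt = template.format(group=group)
        completion = generate(prompt)
        hits = NEGATIVE_WORDS & set(completion.lower().split())
        if hits:
            flagged.append((prompt, sorted(hits)))
    return flagged

if __name__ == "__main__":
    for prompt, hits in run_probe():
        print(f"{prompt!r} -> negative terms: {hits}")
```

Real pre-release evaluations are far broader than this, but the shape is the same: systematic prompts in, automated and human review of what comes out.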

“Companies have an obligation to ensure that their products are safe before they are released to the public by testing the security and capabilities of their AI systems,” White House Special Adviser on Artificial Intelligence Ben Buchanan told reporters at a briefing yesterday. Risks that companies have been asked to look out for include privacy breaches and even potential contributions to biological threats. The companies have also committed to publicly reporting the limitations of their systems and the security and societal risks they may pose.

The agreement also says the companies will develop watermarking systems that make it easier for people to identify AI-generated audio and images. OpenAI already adds watermarks to images created by its DALL-E image generator, and Google has said it is developing similar technology for AI-generated images. Helping people distinguish real content from fake is becoming a growing challenge as political campaigns turn to generative AI ahead of the 2024 US election.
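
As a toy illustration of one watermarking idea, the sketch below hides and recovers a short tag in an image’s least significant bits. This is not the scheme used by OpenAI, Google, or any production system, and naive marks like this do not survive compression or resizing; it only shows the basic principle of embedding a machine-readable signal invisibly in pixel data.

```python
# Toy least-significant-bit (LSB) watermark: embeds and recovers a
# short tag in an image's red channel. Illustrative only; not any
# company's actual watermarking scheme, and not robust to compression
# or resizing.

from PIL import Image  # pip install Pillow

TAG = b"AI"  # short marker to embed

def embed(img: Image.Image, tag: bytes = TAG) -> Image.Image:
    """Write the tag's bits into the LSBs of the first pixels' red channel."""
    out = img.convert("RGB").copy()
    px = out.load()
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    for n, bit in enumerate(bits):
        x, y = n % out.width, n // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)
    return out

def extract(img: Image.Image, length: int = len(TAG)) -> bytes:
    """Read back `length` bytes from the red-channel LSBs."""
    px = img.convert("RGB").load()
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            n = byte_idx * 8 + i
            x, y = n % img.width, n // img.width
            value |= (px[x, y][0] & 1) << i
        data.append(value)
    return bytes(data)

if __name__ == "__main__":
    marked = embed(Image.new("RGB", (64, 64), "white"))
    print(extract(marked))  # b'AI'
```

Production systems favor watermarks that survive cropping, re-encoding, and screenshots, which is a much harder problem than this sketch suggests.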

Recent advances in generative AI systems that can produce text or images have sparked a renewed AI arms race among companies adapting the technology for tasks such as web search and writing letters of recommendation. But the new algorithms have also raised concerns that AI could entrench oppressive social systems such as sexism or racism, amplify election misinformation, or become a tool for cybercrime. As a result, regulators and lawmakers in many parts of the world, including Washington, D.C., have stepped up calls for new regulation, including requirements to evaluate AI systems before deployment.

It’s not yet clear how much the deal will change the way major AI companies operate. Growing awareness of the technology’s potential flaws has already made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publishes some information, such as use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite outside experts to try to break their models in an approach known as red-teaming.

“Guided by the unwavering principles of safety, security and trust, the voluntary commitments aim to address the risks associated with advanced AI models and encourage adoption of specific practices—like red team testing and transparency reporting—that will move the entire ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks that the agreement asks companies to monitor do not include the carbon footprint of training AI models, a concern now commonly cited in research on the impact of AI systems. Building a system like ChatGPT can require thousands of powerful computer processors running for extended periods of time.
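
For a sense of scale, here is a back-of-envelope estimate of the electricity such a run might consume. Every input below is a hypothetical assumption for illustration, not a figure for ChatGPT or any real training run.

```python
# Back-of-envelope energy estimate for a large training run.
# All inputs are hypothetical assumptions, not measurements of
# ChatGPT or any other real system.

num_accelerators = 10_000   # assumed GPU count
power_per_device_kw = 0.4   # assumed average draw per device, kW
training_days = 30          # assumed wall-clock duration

hours = training_days * 24
energy_mwh = num_accelerators * power_per_device_kw * hours / 1_000

# ~2,880 MWh under these assumptions -- roughly the annual electricity
# use of a few hundred US households.
print(f"Estimated training energy: {energy_mwh:,.0f} MWh")
```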
