There’s never been a more important time for AI policy

Thanks to the excitement around generative AI, the technology has become a kitchen table topic, and everyone is now aware something needs to be done, says Alex Engler, a fellow at the Brookings Institution. But the devil will be in the details. 

To really tackle the harm AI has already caused in the US, Engler says, the federal agencies that oversee sectors such as health and education need the power and funding to investigate and sue tech companies. He proposes a new regulatory instrument called the Critical Algorithmic Systems Classification (CASC), which would grant federal agencies the right to investigate and audit AI companies and enforce existing laws. This is not a totally new idea: the White House outlined something similar last year in its AI Bill of Rights. 

Say you realize you have been discriminated against by an algorithm used in college admissions, hiring, or property valuation. You could bring your case to the relevant federal agency, which could use its investigative powers to demand that the tech company hand over the data and code behind the model and review how it works. If the regulator found that the system was causing harm, it could sue. 

In the years I’ve been writing about AI, one critical thing hasn’t changed: Big Tech’s attempts to water down rules that would limit its power. 

“There’s a little bit of a misdirection trick happening,” Engler says. Many of the problems around artificial intelligence—surveillance, privacy, discriminatory algorithms—are affecting us right now, but the conversation has been captured by tech companies pushing a narrative that large AI models pose massive risks only in the distant future. 

“In fact, all of these risks are far better demonstrated at a far greater scale on online platforms,” Engler says. And these platforms are the ones benefiting from reframing the risks as a futuristic problem.

Lawmakers on both sides of the Atlantic have a short window to make some extremely consequential decisions about the technology that will determine how it is regulated for years to come. Let’s hope they don’t waste it. 

Deeper Learning

You need to talk to your kid about AI. Here are 6 things you should say.
