I agree with each of these points, which could point us toward actual limits that might mitigate the dark side of AI. Things like disclosing what goes into training large language models like those behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from forming an AI cabal that aggregates (and monetizes) almost all the information we receive. And protection of the personal information these versatile AI products make use of.
But reading this list also highlights the difficulty of turning encouraging proposals into actual binding law. If you look closely at the points in the White House plan, it becomes clear that they apply not just to artificial intelligence but to almost everything in technology. Each seems to embody a user right that has been violated since forever. Big tech didn’t need generative AI to develop unfair algorithms, opaque systems, abusive data practices, and no opt-outs. That’s table stakes, and the fact that these issues come up only in a discussion of a new technology highlights the failure to protect citizens from the harmful effects of the technology we already have.
During the Senate hearing where Altman spoke, senator after senator sang the same refrain: We blew it when it came to regulating social media, so let’s not make the same mistake with AI. But there is no statute of limitations on passing laws to curb previous abuses. Last time I checked, billions of people, including almost everyone in the US with the ability to poke at a smartphone display, are still on social media, being bullied, privacy-violated, and terrorized. Nothing stops Congress from getting tougher on those companies and, above all, passing privacy legislation.
The fact that Congress hasn’t done so casts serious doubt on the prospects of an AI bill. No wonder some regulators, notably FTC chair Lina Khan, aren’t waiting for new laws. She argues that current law already gives her agency broad powers to address the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products present.
Meanwhile, when the White House released an update on the AI Bill of Rights this week, it underscored how hard it is to actually craft new laws — and the enormity of the work that still needs to be done. The update explains that the Biden administration is working furiously on developing a national AI strategy. But apparently the “national priorities” in that strategy have yet to be written down.
The White House now wants tech companies and other AI stakeholders, as well as the general public, to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow witnesses to suggest a path forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical legal, research, policy, or scientific materials, or other content.” (I was relieved to see that comments from large language models were not solicited, though I’m willing to bet that GPT-4 will contribute plenty despite that omission.)