The moral weight of AI consciousness

Seth—who thinks that conscious AI is relatively unlikely, at least for the foreseeable future—nevertheless worries about what the possibility of AI consciousness might mean for humans emotionally. “It’ll change how we distribute our limited resources of caring about things,” he says. That might seem like a problem for the future. But the perception of AI consciousness is with us now: Blake Lemoine took a personal risk for an AI he believed to be conscious, and he lost his job. How many others might sacrifice time, money, and personal relationships for lifeless computer systems?

Even bare-bones chatbots can exert an uncanny pull: a simple program called ELIZA, built in the 1960s to simulate talk therapy, convinced many users that it was capable of feeling and understanding. The perception of consciousness and the reality of consciousness are poorly aligned, and that discrepancy will only worsen as AI systems become capable of engaging in more realistic conversations. “We will be unable to avoid perceiving them as having conscious experiences, in the same way that certain visual illusions are cognitively impenetrable to us,” Seth says. Just as knowing that the two lines in the Müller-Lyer illusion are exactly the same length doesn’t prevent us from perceiving one as shorter than the other, knowing GPT isn’t conscious doesn’t change the illusion that you are speaking to a being with a perspective, opinions, and personality.

In 2015, years before these concerns became current, the philosophers Eric Schwitzgebel and Mara Garza formulated a set of recommendations meant to protect against such risks. One of their recommendations, which they termed the “Emotional Alignment Design Policy,” argued that any unconscious AI should be intentionally designed so that users will not believe it is conscious. Companies have taken some small steps in that direction—ChatGPT spits out a hard-coded denial if you ask it whether it is conscious. But such responses do little to disrupt the overall illusion. 

Schwitzgebel, who is a professor of philosophy at the University of California, Riverside, wants to steer well clear of any ambiguity. In their 2015 paper, he and Garza also proposed their “Excluded Middle Policy”—if it’s unclear whether an AI system will be conscious, that system should not be built. In practice, this means all the relevant experts must agree that a prospective AI is very likely not conscious (their verdict for current LLMs) or very likely conscious. “What we don’t want to do is confuse people,” Schwitzgebel says.

Avoiding the gray zone of disputed consciousness neatly skirts both the risks of harming a conscious AI and the downsides of treating a lifeless machine as conscious. The trouble is, doing so may not be realistic. Many researchers—like Rufin VanRullen, a research director at France’s Centre National de la Recherche Scientifique, who recently obtained funding to build an AI with a global workspace—are now actively working to endow AI with the potential underpinnings of consciousness.

Illustration: Stuart Bradford

The downside of a moratorium on building potentially conscious systems, VanRullen says, is that it would rule out systems like the one he’s trying to create, which might be more effective than current AI. “Whenever we are disappointed with current AI performance, it’s always because it’s lagging behind what the brain is capable of doing,” he says. “So it’s not necessarily that my objective would be to create a conscious AI—it’s more that the objective of many people in AI right now is to move toward these advanced reasoning capabilities.” Such advanced capabilities could confer real benefits: already, AI-designed drugs are being tested in clinical trials. It’s not inconceivable that AI in the gray zone could save lives.

VanRullen is sensitive to the risks of conscious AI—he worked with Long and Mudrik on the white paper about detecting consciousness in machines. But it is those very risks, he says, that make his research important. Odds are that conscious AI won’t first emerge from a visible, publicly funded project like his own; it may very well take the deep pockets of a company like Google or OpenAI. These companies, VanRullen says, aren’t likely to welcome the ethical quandaries that a conscious system would introduce. “Does that mean that when it happens in the lab, they just pretend it didn’t happen? Does that mean that we won’t know about it?” he says. “I find that quite worrisome.”

Academics like him can help mitigate that risk, he says, by getting a better understanding of how consciousness itself works, in both humans and machines. That knowledge could then enable regulators to more effectively police the companies that are most likely to start dabbling in the creation of artificial minds. The more we understand consciousness, the smaller that precarious gray zone gets—and the better the chance we have of knowing whether or not we are in it. 

For his part, Schwitzgebel would rather we steer clear of the gray zone entirely. But given the magnitude of the uncertainties involved, he admits that this hope is likely unrealistic—especially if conscious AI ends up being profitable. And once we’re in the gray zone—once we need to take seriously the interests of debatably conscious beings—we’ll be navigating even more difficult terrain, contending with moral problems of unprecedented complexity without a clear road map for how to solve them. It’s up to researchers, from philosophers to neuroscientists to computer scientists, to take on the formidable task of drawing that map.

Grace Huckins is a science writer based in San Francisco.
