Coalition Urges OpenAI to Scrap AI Ballot Measure Over Child Safety Concerns


In brief

  • A coalition of advocacy groups is asking OpenAI to withdraw a California AI safety ballot initiative.
  • Critics say the measure would limit legal accountability and weaken protections for children.
  • While OpenAI has paused the campaign, the coalition claims it retains control of the initiative ahead of key deadlines.

A coalition of advocacy groups is urging ChatGPT developer OpenAI to withdraw a California ballot initiative that critics say could weaken protections for children and limit legal accountability for AI companies.

In a letter sent to OpenAI on Wednesday, reviewed by Decrypt, the group argues that the measure would lock in narrow child-safety protections, limit families’ ability to sue, and restrict California’s ability to strengthen AI laws in the future.

The letter, signed by more than two dozen organizations including AI policy non-profit Encode AI, the Center for Humane Technology, and the Electronic Privacy Information Center, asks OpenAI to dissolve its ballot committee and step back from the measure while lawmakers work on legislation.

“The main request here is for OpenAI to withdraw from the ballot,” Adam Billen, co-executive director of Encode AI, told Decrypt.

The dispute centers on a proposed “Parents & Kids Safe AI Act,” a California ballot initiative backed by OpenAI and Common Sense Media that would establish rules for how AI chatbots interact with minors, including safety requirements and compliance standards.

In the letter, the groups argue that those rules fall short. They say the measure defines harm too narrowly, limits enforcement, and restricts families' ability to bring claims when children are harmed.

But OpenAI controls the actual ballot initiative, Billen said.

“OpenAI has the power to withdraw it or put the money in for signatures. All of the legal authority rests in their hands,” he said. “They have not actually withdrawn the initiative from the ballot. This is a common maneuver in California, where you put an initiative up and put money in the committee.”

The letter points to the initiative’s definition of “severe harm,” which focuses on physical injury tied to suicide or violence, excluding a range of mental health impacts that researchers and families have raised as concerns.

It also highlights provisions that would bar parents and children from bringing claims under the initiative and limit enforcement tools available to state and local officials.

Another concern centers on how the measure treats user data. The groups argue that its definition of encrypted user content could make it harder to access chatbot conversations that have served as key evidence in recent lawsuits.

“We read that as an attempt to block families from being able to disclose their dead children’s chat logs in court,” Billen said.

The letter also warns that the measure could be hard to revise if passed. It would require a two-thirds vote in the legislature to amend and tie future changes to standards such as supporting “economic progress,” which advocates say could limit lawmakers’ ability to respond to new risks.

Billen said the initiative remains a factor in ongoing negotiations in Sacramento, even as OpenAI has paused its efforts to qualify it for the ballot.

“They have $10 million in the committee, and then you say to the legislature, if you don't do what we want, we'll put the money in and get the signatures and put this on the ballot, and if it passes, it will override whatever the legislature does,” he said. “So essentially, what's happening here is they're trying to steer and control what state legislators do through the use of the initiative as a threat they're leaving on the table.”

OpenAI is not the only company facing scrutiny over chatbot-related harms. Earlier this month, the family of Jonathan Gavalas sued Google, claiming that Gemini pushed a delusion that escalated to violence and his eventual suicide. Billen, however, said OpenAI’s approach reflects a broader pattern in the tech industry.

“The lobbying playbook that’s getting used on AI from these big guys in particular—the Googles, the Metas, Amazons—is the same strategy that was used previously on other tech issues,” he said.

For now, the coalition is focused on getting OpenAI to withdraw the measure and allow lawmakers to move forward through the legislative process.

“It’s really important, particularly for the companies that are putting that technology out there, to not be the ones who are writing the rules that regulate them, because that’s not meaningful protection,” Billen said.

OpenAI did not immediately respond to Decrypt's request for comment.
