More than 100 organizations are raising alarms about a provision in the House’s sweeping tax and spending cuts package that would hamstring the regulation of artificial intelligence systems. Tucked into President Donald Trump’s “one big, beautiful” agenda bill is a rule that, if passed, would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.

With AI rapidly advancing and extending into more areas of life — such as personal communications, health care, hiring and policing — blocking states from enforcing even their own laws related to the technology could harm users and society, the organizations said. They laid out their concerns in a letter sent Monday to members of Congress, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries.

“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter, provided exclusively to CNN ahead of its release, states.

The bill cleared a key hurdle when the House Budget Committee voted to advance it on Sunday night, but it still must undergo a series of votes in the House before it can move to the Senate for consideration.

The 141 signatories on the letter include academic institutions such as Cornell University and Georgetown Law’s Center on Privacy and Technology, and advocacy groups such as the Southern Poverty Law Center and the Economic Policy Institute. Employee coalitions such as Amazon Employees for Climate Justice and the Alphabet Workers Union, the labor group representing workers at Google’s parent company, also signed the letter, underscoring how widely held concerns about the future of AI development are.
“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” said Emily Peterson-Cassin, corporate power director at non-profit Demand Progress, which drafted the letter.

“Speaker Johnson and Leader Jeffries must listen to the American people and not just Big Tech campaign donations,” Peterson-Cassin said in a statement.

The letter comes as Trump has rolled back some of the limited federal rules for AI that existed prior to his second term. Shortly after taking office this year, Trump revoked a sweeping Biden-era executive order designed to provide at least some safeguards around artificial intelligence. Earlier this month, he also said he would rescind Biden-era restrictions on the export of critical US AI chips.

Ensuring that the United States remains the global leader in AI, especially in the face of heightened competition from China, has been one of the president’s key priorities. “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” Vice President JD Vance told heads of state and CEOs at the Artificial Intelligence Action Summit in February.

US states, however, have increasingly moved to regulate some of the highest-risk applications of AI in the absence of significant federal guidelines. Colorado, for example, passed a comprehensive AI law last year requiring tech companies to protect consumers from the risk of algorithmic discrimination in employment and other crucial decisions, and to inform users when they’re interacting with an AI system. New Jersey Gov. Phil Murphy, a Democrat, signed a law earlier this year that creates civil and criminal penalties for people who distribute misleading AI-generated deepfake content.
And Ohio lawmakers are considering a bill that would require watermarks on AI-generated content and prohibit identity fraud using deepfakes. Multiple state legislatures have also passed laws regulating the use of AI-generated deepfakes in elections.

That some applications of AI should be regulated has been a rare point of bipartisan agreement on Capitol Hill. On Monday, Trump is set to sign into law the Take It Down Act, which passed both the House and Senate with support from both sides of the aisle and will make it illegal to share non-consensual, AI-generated explicit images.

The budget bill provision would also run counter to calls from some tech leaders for more regulation of AI. OpenAI CEO Sam Altman testified to a Senate subcommittee in 2023 that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” More recently on Capitol Hill, Altman said he agreed that a risk-based approach to regulating AI “makes a lot of sense,” although he urged federal lawmakers to create clear guidelines to help tech companies navigate a patchwork of state regulations.

“We need to make sure that companies like OpenAI and others have legal clarity on how we’re going to operate. Of course, there will be rules. Of course, there need to be some guardrails,” he said. But, he added, “we need to be able to understand how we’re going to offer services, and where the rules of the road are going to be.”
House Republicans want to stop states from regulating AI. More than 100 organizations are pushing back
TruthLens AI Suggested Headline:
"Over 100 Organizations Oppose House Provision Limiting State Regulation of AI"
TruthLens AI Summary
More than 100 organizations have expressed significant concerns regarding a provision in the House's tax and spending cuts package that aims to restrict state-level regulation of artificial intelligence (AI) for a decade. This provision, included in President Donald Trump's expansive legislative agenda, would prevent states from enforcing any laws related to AI systems, models, or automated decision-making processes. The organizations argue that such a moratorium could have detrimental effects on public safety and accountability, especially as AI technology continues to permeate various aspects of daily life, including healthcare, hiring practices, and law enforcement. In a letter addressed to members of Congress, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries, these groups highlighted the potential for companies to operate without accountability, even if their AI algorithms cause harm or misconduct. The letter indicates a growing unease among advocacy groups, academic institutions, and employee coalitions regarding the implications of unchecked AI development and the prioritization of corporate interests over public welfare.
The House Budget Committee recently advanced the bill, but it still requires further votes before proceeding to the Senate. The letter's signatories include notable organizations like Cornell University, Georgetown Law's Center on Privacy and Technology, and the Southern Poverty Law Center, reflecting a broad consensus on the need for regulatory frameworks to govern AI technologies. Critics of the provision argue that it represents a significant concession to large tech companies, allowing them to sidestep regulations that are increasingly seen as necessary to protect consumers from algorithmic discrimination and misinformation. As states like Colorado and New Jersey enact laws to mitigate risks associated with AI applications, the federal government's stance appears to diverge from these local efforts. This situation is further complicated by President Trump's recent actions to roll back federal AI regulations established during the Biden administration. While some tech leaders, including OpenAI's CEO Sam Altman, have called for a balanced regulatory approach to ensure safety and clarity, the proposed budget bill could undermine efforts to create necessary safeguards in the rapidly evolving AI landscape.
TruthLens AI Analysis
The article sheds light on a significant legislative proposal by House Republicans aimed at limiting state regulations on artificial intelligence (AI). This move has prompted a strong backlash from more than 100 organizations that are concerned about the implications of such restrictions on the accountability and safety of AI technologies.
Concerns Over Regulation Limitations
The proposal, embedded in a larger tax and spending cuts package, seeks to establish a 10-year moratorium on state laws regulating AI systems. Critics argue that this would hinder the ability of states to respond to harmful algorithms and decision-making systems, effectively leaving companies unaccountable for potentially dangerous actions. The letter from the organizations highlights fears that unchecked AI development could lead to severe societal consequences, particularly in sensitive areas such as healthcare, policing, and employment.
Broad Coalition Against the Proposal
The wide array of signatories, including academic institutions and labor groups, indicates a collective concern across various sectors. This coalition underscores the importance of retaining state-level oversight over AI technologies, suggesting that many stakeholders are worried about the potential ramifications of an unregulated AI landscape. The involvement of employee coalitions reflects a growing awareness and activism among workers regarding the ethical implications of AI in the workplace.
Political and Economic Implications
As the bill progresses through Congress, its passage could reshape the regulatory environment for AI. If states are unable to implement their own regulations, the landscape of AI development may shift towards less oversight, raising ethical and safety concerns. The economic implications could be significant, particularly for companies developing AI technologies. Investors may react to this news by reassessing risks associated with AI firms, leading to fluctuations in stock prices for companies involved in AI development.
Connection to Wider Issues
This legislative action can be seen in the broader context of the ongoing debates surrounding technology regulation, privacy, and corporate accountability. It resonates with contemporary discussions about the role of government in regulating emerging technologies, reflecting a tension between innovation and consumer protection.
Use of AI in News Creation
It's possible that AI tools may have been utilized in drafting or analyzing the content of this article, particularly in how the concerns are articulated and the breadth of organizations cited. However, the analysis provided seems to maintain a journalistic tone that emphasizes human perspectives and concerns, suggesting that while AI may assist in content generation, it does not overshadow the human element of advocacy and social responsibility.
Potential Manipulative Elements
While the article presents a clear argument against the proposed regulation, it does not appear overtly manipulative. The choice of language is straightforward, aiming to inform readers of the consequences of the legislative action rather than inciting fear or outrage. Nevertheless, the framing of the issue could lead to a polarized perception, influencing public opinion against the legislative proposal.
In conclusion, this article provides a critical perspective on a legislative effort that may significantly impact the future of AI regulation. The concerns raised by various organizations highlight the need for balanced oversight in an era of rapid technological advancement, with the potential for significant societal implications if such oversight is compromised.