US Senator Introduces RISE Act to Shield AI Developers from Civil Lawsuits

Coin World · Sunday, Jun 22, 2025 10:24 am ET
2 min read

Civil liability law plays a crucial role in shaping the development of emerging technologies such as artificial intelligence. Poorly drafted liability rules can hinder innovation by exposing entrepreneurs, particularly AI developers, to unnecessary legal risk. This concern is at the heart of the Responsible Innovation and Safe Expertise (RISE) Act of 2025, introduced by US Senator Cynthia Lummis. The bill would shield AI developers from civil lawsuits so long as they disclose their tools' technical specifications, enabling professionals to understand the capabilities and limitations of AI systems before relying on them.

Initial reactions to the RISE Act have been generally positive, though some critics have pointed to its limited scope and to deficiencies in its transparency standards. Many view the bill as a work in progress rather than a finished document. Hamid Ekbia, a professor at Syracuse University, described the Lummis bill as "timely and needed," but noted that it places too much of the burden on "learned professionals," since it demands from developers only transparency in the form of technical specifications. This, he argues, gives developers broad immunity while shifting the risk onto the professionals who use their tools.

Critics have also suggested that the RISE Act could be seen as a "giveaway" to AI companies, as it shields developers from strict liability for the unpredictable behavior of large language models. However, Felix Shipkevich, principal at Shipkevich Attorneys at Law, argued that the bill's immunity provision is a rational approach to protect developers from limitless exposure for outputs they cannot control. He emphasized that without some form of protection, developers could face endless legal challenges.

The scope of the RISE Act is relatively narrow, focusing on scenarios where professionals use AI tools while interacting with customers or patients. For example, a financial adviser might use an AI tool to develop an investment strategy, or a radiologist could use AI software to interpret medical images. However, the bill does not address cases where there is no professional intermediary between the AI developer and the end-user, such as when chatbots are used as digital companions for minors. This omission is significant, as recent incidents, like a teenager's suicide after engaging with an AI chatbot, highlight the need for clear guidelines in such situations.

Ryan Abbott, a professor of law and health sciences, emphasized the need for clear and unified standards so that all stakeholders understand their legal obligations. He noted that AI's complexity, opacity, and autonomy create new kinds of potential harm, particularly in healthcare. For instance, while physicians have historically outperformed AI at medical diagnosis, recent evidence suggests that in certain areas AI may achieve better outcomes than human-in-the-loop systems. This raises difficult liability questions, such as who pays compensation for a medical error when no physician is involved in the decision.

The AI Futures Project, a nonprofit research organization, has tentatively endorsed the bill but expressed concerns about its transparency requirements. Executive director Daniel Kokotajlo argued that the public deserves to know the goals, values, and biases that companies are attempting to instill in powerful AI systems. He also pointed out that companies could opt out of transparency requirements by accepting liability, which could undermine the bill's effectiveness.

Comparing the RISE Act with the EU's AI Act, the first comprehensive regulation of AI by a major regulator, reveals differences in approach. The EU's stance on AI liability has been in flux, with a proposed AI liability directive withdrawn in February 2025. The EU generally adopts a human rights-based framework that emphasizes the empowerment of individuals, while the Lummis bill takes a risk-based approach focused on processes, documentation, and assessment tools. Kokotajlo suggested that such a risk-based approach, centered on those who create and deploy the technology, is the more appropriate one for the US.

In conclusion, the RISE Act is seen as a constructive first step in the conversation over federal AI transparency requirements, though it will likely require modification before it is enacted into law. Shipkevich views the bill positively as a starting point, stressing the need for real transparency requirements and risk management obligations. Justin Bullock of Americans for Responsible Innovation likewise praised the bill's strong ideas but expressed concern about the effectiveness of its transparency evaluations. If the legislation is passed and signed into law, it would take effect on Dec. 1, 2025.

