House leadership announced the launch of the Task Force on Artificial Intelligence (AI), a bipartisan group that will develop a report outlining AI regulatory priorities and setting guiding principles to help shape the AI landscape.
The goal of the AI Task Force is to identify a series of legislative proposals that have the potential to be approved by both chambers of Congress this year.
AI has been an area of bipartisan consensus in the divided Congress, with several bipartisan legislative proposals emerging over the course of the Congressional session. Enacting legislation has nonetheless proved difficult, as members diverge on how Congress should address AI's risks and potential.

On February 20, Speaker Mike Johnson (R-LA-4) and Democratic Leader Hakeem Jeffries (D-NY-8) revealed the creation of a bipartisan Task Force on Artificial Intelligence, to be chaired by Congressman Jay Obernolte (R-CA-23) and Congressman Ted Lieu (D-CA-36).

The House Task Force on AI
Congress has taken a keen interest in developing legislation to encourage AI innovation while mitigating any risks AI poses to the public. Across both chambers, committees have held dozens of hearings addressing AI in their respective jurisdictions. With the rise in the public use of generative AI, members of Congress across the political spectrum have agreed that it is time to act—although how Congress should move forward has not been clearly decided.

In recent news from the Hill, House leadership announced they would establish an AI Task Force to expand Congress's understanding of AI legislative options. Chairs Obernolte and Lieu will lead a bipartisan group of 24 members in producing a report that recommends AI guardrails, identifies legislative priorities, and evaluates competing policy approaches to promote the safe innovation of AI. Every member of the Task Force either has AI experience or sits on a committee with oversight of AI development and technology.

Lieu, for example, has been a leading voice for sensible AI regulation in the House, introducing multiple pieces of legislation over the course of the Congressional session. These include the National AI Commission Act, which would establish a bipartisan commission of experts to review the current approach to regulating AI and recommend legislative action to address AI risks—goals very similar to those of the newly chartered AI Task Force. Obernolte has likewise worked in a bipartisan fashion to address AI and U.S. security, introducing the AI for National Security Act last year. The Act would allow the Department of Defense to procure AI-enabled security measures and technologies to bolster the national defense. Both Obernolte and Lieu have worked in close partnership across party lines to address AI, acknowledging that any legislation designed to deftly handle AI's impacts on society will need bipartisan support to become law.

Obernolte confirmed that the report produced by the Task Force would provide “the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI.” Task Force member Rep. Don Beyer (D-VA-8) specified that the report will also identify five to 10 legislative proposals that the Task Force believes Congress can and should pass this year.

The Senate AI Insight Forums
Similar to the new House Task Force on AI, Senator Chuck Schumer launched the AI Insight Forums when he rolled out his SAFE Innovation Framework last summer. The forums are designed to bring together industry experts, members of Congress, academics, and civil liberties advocates to understand the risks and potential of AI. The information shared during the Forums, some open to the public and others closed, is intended to inform Congress of its legislative options. The Insight Forums have convened nine times, covering: national security implications of AI; strategies to manage the risks of AI (and guarding against “doomsday” scenarios); transparency, explainability, and copyright; protecting democracy and elections; privacy and liability; workforce impacts; “high impact” AI uses, such as AI in the financial services or health care sectors; innovation; and understanding the technology landscape with AI developers. Schumer is working closely with Senators Heinrich (D-NM), Rounds (R-SD), and Young (R-IN) to carry forward the SAFE Innovation Framework.

The Senate AI Insight Forums and the AI Task Force will educate members of Congress on the AI technology landscape as well as the legislative tools available to Congress. The goal is to develop legislation that mitigates risks while maximizing potential for the technology. As seen in both chambers, AI has been an area of bipartisan cooperation in this divided Congress, although debates continue on how to best legislate the evolving technology and its far-ranging impacts on society without impeding innovation.

Bipartisan Momentum to Introduce Legislation, and Obstacles to Enacting Laws
Debates on how to regulate AI have been center stage in Congress. In both the House and Senate, members have been introducing and discussing AI legislation across a variety of AI use cases. For example, Sen. Gary Peters (D-MI) moved a suite of legislation addressing the federal government's use of AI through the Homeland Security and Governmental Affairs Committee. The AI Leadership Training Act, cosponsored by Sen. Braun (R-IN), would establish an AI training program for federal agency management. The AI Lead Act, cosponsored by Sen. Cornyn (R-TX), would require each agency to appoint a Chief AI Officer to monitor the agency's procurement and use of AI. The Transparent Automated Governance Act, joined by Senators Braun and Lankford (R-OK), would require disclaimers for public-facing federal uses of AI so that users are aware when they interact with AI-generated content or receive AI-generated decisions. While each of these bills has passed out of committee and enjoys bipartisan support, none has yet received a vote on the Senate floor.

Other members have focused on the harms of AI, rather than specific uses of the technology. Senators Blumenthal (D-CT) and Hawley (R-MO) put forward a framework in September establishing five pillars to guide further legislative development addressing risks to consumer data and privacy. The framework would establish a licensing regime administered by an independent oversight body that could audit companies developing high-risk AI models. It also cements transparency, consumer and child protection, and national security defenses as key priorities for any legislation addressing AI. The senators had also introduced the No Section 230 Immunity for AI Act in June of 2023, which failed to pass a floor vote in December of last year.

In addition to addressing risks, there are also measures that promote AI use in the federal government, like the AI for National Security Act, or that continue to encourage its innovation, like the CREATE AI Act. The CREATE AI Act, sponsored by Senators Heinrich, Booker (D-NJ), and Rounds, would further the development of the National AI Research Resource (NAIRR), a test bed for AI development and a resource center to equitably share data with the scientific community. Throughout hearings on AI innovation and the introduction of these bills, members of Congress have stressed the need for U.S. leadership in the technology space. Should the U.S. not invest in AI development, it may very well fall behind, leaving other nations, including U.S. adversaries, to set the parameters, ethics, and uses of AI technologies.

While many pieces of legislation have been introduced, far fewer have moved through committee or made it to the President's desk. In an election year with a busy calendar, passing major legislation will be difficult. Congressional activity indicates that AI legislation is a priority for both Democrats and Republicans, but what shape that legislation may take is still unknown.

The National AI Landscape
Congress is not alone in developing AI policy. The executive branch is carrying out the administrative actions and rulemakings mandated by the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Where possible, the executive branch has also leveraged existing authorities to address AI harms in regulated sectors. In addition to federal activity, the states are continuously developing and enacting AI policies; in particular, Pillsbury has seen increased attention to legislation addressing the potential impacts of AI on the democratic process. Congressional action will be necessary to solidify and harmonize these efforts to address AI.

Pillsbury’s multidisciplinary team of AI thought leaders and legal and strategic advisors is an industry leader in strategic promotion of responsible and beneficial AI. Pillsbury is closely monitoring AI-related legislative and regulatory efforts. Our AI team helps startups, global corporations and government agencies navigate the landscape impacted by emerging developments in AI. For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.