In a hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law on May 16, multiple U.S. senators—including Senators Richard Durbin (D-IL), Lindsey Graham (R-SC), Peter Welch (D-VT) and Cory Booker (D-NJ)—supported the idea of a federal artificial intelligence (AI) agency to regulate the transformative technology.
Witnesses at this hearing—titled “Oversight of A.I.: Rules for Artificial Intelligence”—included Sam Altman, CEO of OpenAI; Christina Montgomery, chief privacy and trust officer at IBM; and Gary Marcus, a professor emeritus at New York University.
With the release of the next wave of AI technology, specifically generative AI models like GPT-4, the model underlying ChatGPT, as well as announcements from the international community on AI regulation, the U.S. Congress is racing to address AI technology. Congress and the Executive Branch have been analyzing how AI could be regulated to safeguard American consumers and values while also spurring AI advancement in order to compete with foreign adversaries. On May 16, while the Senate Judiciary Subcommittee addressed the question of how to regulate the private sector, the Senate Homeland Security and Governmental Affairs Committee convened a hearing on how the U.S. government could leverage AI. More hearings from these and other committees in both the House and Senate are expected as Congress looks for ways to advance U.S. leadership in AI technology.
The Argument for AI Regulation
Subcommittee Chairman Richard Blumenthal (D-CT) began the hearing with a recording of his opening statement written and voiced by “deepfake” artificial intelligence to illustrate the transformative power, and dangers, of the latest advances in AI technology. Ranking Member Josh Hawley (R-MO) also raised concerns around AI, including job loss, invasion of privacy, manipulation of personal behavior and opinion, and the possible degradation of free elections.
On the other hand, Altman urged that AI also has the potential to provide immense benefits for users. He stressed repeatedly that AI is a tool that can be used to make tasks more efficient and improve our quality of life. To maximize the benefits of AI, the witnesses agreed that AI had to be regulated. Altman and Marcus specifically argued that the only way to address all the problems raised by the senators would be for Congress to create a nimble agency staffed by subject matter experts. These experts could quickly respond to changes in AI technology as it evolves and provide regulation that protects consumers while encouraging innovation in a way that legislation alone could not.
The hearing also addressed how a regulatory agency might help develop the principles and values governing how AI programs are used. Per Sen. Chris Coons (D-DE), AI programs can be instilled with pre-programmed values under a constitutional model, which allows the programmers to prohibit AI from generating “harmful” content. Sen. Mazie Hirono (D-HI) built on this discussion by asking Altman how a developer decides what is “harmful” or what values should be embedded in the program. Defining harm is difficult. While ChatGPT has outlined prohibited content (for example, the program cannot generate language promoting violence), Altman asked that the public and government offer input to further refine the concept.
An AI regulatory body would be able to review the range of AI programs and potential consequences of those AI programs and define what would be “harmful.” The agency would then certify that AI programs are developed with those values and restrictions in mind.
Throughout the hearing, the witnesses proposed that the agency certify that AI programs were safe before deployment by issuing a license to programs that meet certain safety requirements. The agency could also revoke an AI program’s license if the company or program is found to have violated the safety standards. Altman outlined two approaches for the proposed agency to issue licensing requirements for AI programs.
First, the agency could create tiered licensing regimes that increase requirements as the capacity of the program increases. Second, and what he argued would be more effective, the regulations would set capability thresholds. One example might be labeling models that could persuade, manipulate or influence as the highest tier, subject to the strictest regulations. Another way to measure capability might be to determine whether the programs could be used to create novel biological agents. Congress or the agency could decide how to classify the categories of capabilities for licensure. Altman advocated for a regime that would provide defense in depth; it should address the model’s potential risks as well as how the model is actually used. Sen. Graham was keen to employ a licensing model for AI programs. He argued for a system in which only programs certified by the agency can operate, and only the agency can issue or revoke these licenses.
Professor Marcus discussed the potential of modeling the AI regulatory body after the Food and Drug Administration (FDA). Under this approach, AI developers would need to make a safety case and prove that the benefits of the program outweigh any potential harms. Further, AI programs could receive the equivalent of nutrition labels, which could outline the underlying data used, potential biases of the system and limitations of the models. This approach would increase the transparency of AI systems, addressing one of the most significant criticisms leveled against them. Marcus added that the agency must be able to conduct pre- and post-deployment reviews of AI programs for active monitoring.
Creating an International AI Regulator
During the hearing, Marcus and Altman supported calls to create an international regulatory body for AI under the model of the International Atomic Energy Agency (IAEA). Involvement of the United States in an international body would be key to ensuring that democratic values are imbued in the international standards.
Since the hearing, Altman and OpenAI have further elaborated on the need for an international AI organization like the IAEA. On May 22, OpenAI argued for an “international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security” for AI systems above a certain capability threshold, like superintelligence systems. The international group would be focused on “existential risk,” allowing countries to retain the ability to govern their AI on other issues like speech.
Opportunities to Inform AI Regulation and Complementary Legislation
The overwhelming sentiment from the subcommittee was that Congress must act quickly. Several senators specifically discussed a desire to learn from Congress’s overprotection of social media companies when it enacted Section 230 of the Communications Decency Act, which created a liability shield for social media platforms.
As the presence of key industry witnesses makes clear, industry has significant opportunities to weigh in on AI regulation through pending legislative efforts on AI and related issues.
Pillsbury is closely monitoring AI-related legislative and regulatory efforts. Our AI team helps startups, global corporations and government agencies navigate the landscape impacted by emerging developments in AI. For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.