Takeaways

On July 21, the Biden Administration announced that prominent generative artificial intelligence (AI) technology companies, including Meta, OpenAI, Microsoft and Google, voluntarily committed to a new set of guidelines promising to develop their AI technologies responsibly and safely.
Senate Majority Leader Chuck Schumer (D-NY) announced the SAFE Innovation Framework to serve as a blueprint for potential AI legislation.
The White House announced it would be releasing an executive order on AI development and innovation to ensure all AI follows the principles of safety, security and trust.

On July 21, 2023, the White House announced the voluntary commitment of seven companies to high-level principles concerning safety, security and public trust with respect to their generative artificial intelligence (AI) technologies. These voluntary principles will serve as a guidepost for the industry until Congress develops and passes legislation for AI development.

Voluntary Commitments Ensuring Safe, Secure and Trustworthy AI
Prominent generative AI companies, including Meta, OpenAI, Microsoft, Google, Anthropic and Inflection, committed to a voluntary set of guidelines negotiated by the White House, which are expected to help improve the transparency and safety of AI technology. The measure builds on the Administration's ongoing efforts to address AI, from the Blueprint for an AI Bill of Rights to agency announcements from the Federal Trade Commission, the Equal Employment Opportunity Commission and others. These executive efforts have focused on mitigating the potential harms of AI while encouraging its safe innovation. The July announcement outlines eight principles, grouped under safety, security and trust, that the companies will strive to meet.

Ensure Products Are Safe Before Introducing Them to the Public:

1. Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas. Companies will advance red-teaming research and red-teaming regime designs that address bio, chemical and radiological risks, the cyber capabilities of the product, the effects of system interaction and tool use, whether the models can self-replicate, and larger societal risks, like bias and discrimination. The guidelines do not require companies to use any specific third-party red-teaming tests or safety tests.

2. Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. Companies will either join a forum or mechanism that shares best practices for frontier AI safety or create one of their own. Importantly, these forums will develop standards for AI developers and may begin by implementing the National Institute of Standards and Technology (NIST) AI Risk Management Framework. Relatedly, on July 26, several companies that joined the voluntary commitments announced the creation of the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of frontier AI models.

Build Systems That Put Security First: 

3. Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Model weights are the parameters adjusted during training and the mechanism by which models learn from a training set. The model weights govern the accuracy of the model and are trained at great expense on specialized computing infrastructure. If model weights and the model architecture are leaked or otherwise become public, threat actors can obtain capabilities otherwise attainable only with substantial technical and financial resources. This principle asks companies to protect one of their most valuable assets by treating model weights as core trade secrets, establishing an insider threat detection program and storing the weights in secure environments. An illustrative sketch of what model weights look like in practice appears after principle 4 below.

4. Incent third-party discovery and reporting of issues and vulnerabilities. In addition to red-teaming, companies are encouraged to establish, for AI systems within scope, bounty systems, contests or prizes that incent the responsible disclosure of weaknesses. Companies can also include AI systems in their existing bug bounty programs.
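
For readers less familiar with the terminology in principle 3, the following minimal Python sketch (purely illustrative, and not drawn from the commitments or from any company's systems) shows that a model's weights are simply numeric parameters adjusted step by step during training and then saved as a file. The toy data, model and file name are hypothetical; the point is that the finished weights file encodes everything the model has learned.

```python
import json

# Toy training data following y = 2x + 1, the relationship the model should learn.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# The model's "weights": two parameters (slope w and intercept b), initialized at zero.
w, b = 0.0, 0.0

# Training loop: the weights are adjusted repeatedly to reduce prediction error.
learning_rate = 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"Learned weights: w={w:.2f}, b={b:.2f}")  # approximately w=2.00, b=1.00

# The trained weights are the valuable artifact: a file that captures what the
# model "learned." Whoever holds this file effectively holds the model.
with open("model_weights.json", "w") as f:
    json.dump({"w": w, "b": b}, f)
```

Frontier models contain billions of such parameters rather than two and are trained at vastly greater expense, but the security logic the principle describes is the same.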

Earn the Public’s Trust:

5. Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both. Under this principle, companies will develop mechanisms, such as provenance or watermarking systems, that help the public understand when audio or visual content is AI-generated. Companies can also investigate tools that determine whether content was created using their systems. Further, the guidance offers that companies should work together to develop a technical framework that helps users distinguish between human-generated and AI-generated audio or visual content. An illustrative sketch of a simple provenance record appears after principle 8 below.

6. Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias. The companies will publish reports outlining the safety evaluations conducted on new generative AI models and any performance limitations users should be aware of, including effects on fairness and bias.

7. Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy. This is a general commitment to empower trust and safety teams, advance AI safety research, advance privacy protections, protect children, and work proactively to mitigate AI risks.

8. Develop and deploy frontier AI systems to help address society's greatest challenges. The guidelines recognize that AI can be leveraged to address major challenges, from climate change mitigation and cancer prevention to combating cyber threats. Companies agree to support the research and development of frontier AI systems, as well as initiatives that foster the education and training of students and workers so they can benefit from AI.
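
As a purely illustrative sketch of the provenance concept referenced in principle 5 (not any company's actual implementation, and with hypothetical key names and fields), the example below shows one simple way a generator could attach a signed metadata record to the content it produces so that the record can later be checked against the file. Production systems typically rely on public-key signatures and standards such as C2PA rather than a shared secret, but the idea is the same.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider; real systems would use
# public-key signatures rather than a shared secret like this one.
SIGNING_KEY = b"provider-secret-key"

def attach_provenance(content: bytes, model_name: str) -> dict:
    """Create a provenance record binding the content to the model that generated it."""
    record = {"model": model_name, "content_sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the record was signed by the provider."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

# Usage: tag the bytes of a generated image, then verify them later.
generated_image = b"...bytes of an AI-generated image..."
record = attach_provenance(generated_image, model_name="example-image-model")
print(verify_provenance(generated_image, record))  # True: content is unaltered and signed
print(verify_provenance(b"edited bytes", record))  # False: content no longer matches the record
```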

White House Chief of Staff Jeff Zients was quoted last month as stating that “the regulatory process can be relatively slow, and here we cannot afford to wait a year or two.” In that spirit, the White House also announced that it would release an executive order to further guide the development of AI. The White House will simultaneously continue to work with the Senate and the House of Representatives to develop bipartisan legislation.

Updates from Congress: The SAFE Innovation Framework
In June, Majority Leader Chuck Schumer (D-NY) announced a new framework for regulating AI: the SAFE Innovation Framework (security, accountability, protecting our foundations, explainability). The framework is expected to accelerate AI development in the United States while 1) guarding against the use of AI for extremism and securing the workforce as best as possible against market changes caused by AI; 2) promoting accountability to avoid exploitation and protect intellectual property; 3) protecting American foundations, such as our democratic processes; and 4) requiring explainability from AI systems. Members of the public and industry have called on Congress to address these topics; however, the explainability component is perhaps the most nebulous of the four guardrails. While explainability is desirable, technical solutions remain the subject of ongoing research; model interpretability and explainability are notoriously slippery concepts for which complete technical solutions may never be found, and they will consequently be challenging to capture in legislative text. More information will emerge in the fall as the legislation takes shape.

Schumer shared that AI regulation would be developed through a new legislative process, rather than relying solely on committee-led hearings. To develop the framework, Schumer will hold AI forums in the fall where AI experts of diverse backgrounds and views can shed light on the biggest challenges in AI development and provide input on potential legislation. The forums will be conducted in addition to committee hearings. Schumer has tasked the chairs of the committees of jurisdiction with working with their ranking members to identify areas of collaboration and to hold hearings on AI regulatory hurdles and solutions in the coming months.

Updates from Congress: The AI Leaders

The House of Representatives
While Schumer works on the overarching framework for legislation, other bills have been introduced addressing the development of AI. Representative Ted Lieu (D-CA), who holds a degree in computer science, introduced the bipartisan National AI Commission Act to establish a blue-ribbon commission that would study AI regulations, especially those being tested at the state level, and make recommendations to Congress on how best to foster AI growth while protecting U.S. consumers. On July 20, Lieu led a group of members in writing to the Office of Management and Budget, urging the Biden Administration to require agencies and vendors to follow the NIST AI Risk Management Framework.

Representative Cathy McMorris Rodgers (R-WA), chair of the Energy and Commerce Committee, has held hearings on issues of innovation and data privacy. Last Congress, McMorris Rodgers introduced the bipartisan American Data Privacy and Protection Act, which passed out of committee 53-2. The legislation, if reintroduced this year, is expected to play an even more important role in addressing and alleviating many of the privacy concerns surrounding AI training and development. The Energy and Commerce Committee will be critical to moving AI legislation forward in the House.

The U.S. Senate
Senator Gary Peters (D-MI) has been active in introducing AI legislation focused on the government’s deployment of AI, especially to enhance national security interests. Peters has introduced three AI-focused pieces of legislation. First, the AI LEAD Act, introduced in July 2023 and passed out of committee on July 26, would establish a federal chief artificial intelligence officer and a corresponding council of agency representatives to coordinate federal AI activities. Two other pieces of legislation that Peters passed out of committee are the Transparent Automated Governance Act, which would require agencies to notify users of the government’s use of AI and provide an appeal process when AI makes adverse determinations, and the AI Leadership Training Act, which would train federal employees so they can intelligently procure and harness AI for government use.

Other senators are collaborating with AI thought leaders, as well as AI leaders in Congress, to define the best path forward for AI innovation and equity. Senator Mark Warner (D-VA), who chairs the Senate Select Committee on Intelligence, will play an important role in the development of both Schumer’s framework and other legislation that falls within his committee’s purview. Warner has been active in speaking with members of the industry and, much like the Biden Administration’s voluntary principles described above, has urged them to prioritize safety measures earlier in the AI development process. On July 24, Warner wrote to the Biden Administration, applauding the set of voluntary guidelines announced on July 21. In the letter, Warner also highlighted the Intelligence Authorization Act, an annual piece of legislation developed in the Intelligence Committee that, as passed out of the Committee on June 22, 2023, would direct the President to establish a “strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.” Warner is well positioned to work with AI stakeholders and his colleagues in Congress to develop bipartisan AI legislation.

Opportunity for Stakeholder Engagement
As Schumer seeks feedback through the AI forums this fall, now is an important time for AI stakeholders to share their recommendations with leaders in Congress.

Pillsbury’s multidisciplinary team of AI thought leaders and legal and strategic advisors is an industry leader in strategic promotion of responsible and beneficial AI. Pillsbury is closely monitoring AI-related legislative and regulatory efforts. Our AI team helps startups, global corporations and government agencies navigate the landscape impacted by emerging developments in AI. For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.