Alert
11.01.23
On October 30, President Biden issued the long-awaited Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI), the first order to navigate AI’s impact across sectors and to help agencies and consumers harness the benefits of AI while mitigating risks.
Executive Action on AI
President Biden first addressed AI in the Blueprint for an AI Bill of Rights in October 2022. Since then, executive agencies have worked to incorporate the AI Bill of Rights principles into their enforcement activity, prioritizing the protection of consumers from potential AI harms that fall within their agencies’ jurisdictions. The Administration also secured voluntary agreements from several leading generative AI companies in August to promote safety, security and public trust in generative AI.
The Federal Communications Commission (FCC) has already demonstrated interest in the issues the Executive Order addresses, having held a hearing with the National Science Foundation on July 13, 2023, to discuss many cross-cutting AI issues. The FCC is also considering the adoption of a Notice of Inquiry at its November 2023 meeting to study whether it should adopt rules to protect consumers from unwanted and illegal telephone calls and text messages generated through the use of AI technologies.
These actions have built up to the much-anticipated Executive Order. In addition, Vice President Harris and Secretary of Commerce Raimondo are traveling to the AI Safety Summit 2023 at Bletchley Park, UK, where the Vice President will give a speech outlining the administration’s Executive Order and vision for the future of AI.
The Executive Order on Artificial Intelligence
Overview of the Order
The Executive Order will leverage the regulatory powers of multiple federal agencies to (a) monitor risks stemming from AI use and programs, (b) develop new and innovative uses for the technology, and (c) implement these new technologies safely. The Order sets out to promote the safe, responsible, and ethical use of AI by federal agencies and to protect consumers through existing regulatory authorities.
Companies using or developing AI that contract with the federal government or are otherwise regulated will want to monitor the variety of standards created under the Executive Order. For example, the Department of Commerce is tasked with creating watermarking standards that may be further incorporated into the Federal Acquisition Regulation for government procurements. Companies may also want to be mindful of the Department of Energy’s efforts to test and address chemical, biological, nuclear and other potential AI risks. In addition, the National Institute of Standards and Technology (NIST) will develop two sets of guidelines. The first, to support the goal of promoting industry standards, will include a companion resource to the AI Risk Management Framework, a companion resource to the Secure Software Development Framework and benchmarks for auditing AI capabilities. The second set of guidance will outline the processes and procedures for red-team testing AI systems.
The Executive Order also addresses Congress, asking it to develop and pass data privacy legislation. While it has not yet been reintroduced this year, the American Data Privacy and Protection Act (ADPPA) garnered attention as a promising vehicle for a federal framework in 2022. Introduced originally by Rep. Pallone (D-NJ-6) and Rep. McMorris Rodgers (R-WA-5), the ADPPA would establish a national framework to protect consumer data privacy and security and bolster individuals’ privacy rights. As Chair of the Energy and Commerce Committee, Congresswoman Rodgers will serve an important role in developing privacy regulation moving forward.
The Defense Production Act
Importantly, the Executive Order also leverages the Defense Production Act (DPA) to require companies developing or intending to develop “dual-use foundation models” to report the results of any red-team safety tests to the government, as well as to notify the government when they are training their models. These companies are compelled to report to the Department of Commerce their physical and cybersecurity plans to protect the integrity of the training process and model weights from outside threats.
A dual-use foundation model is defined in the Executive Order as a model that is “trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”
The Order also exercises authority pursuant to the International Emergency Economic Powers Act to require Infrastructure as a Service (IaaS) providers, or cloud services, to report to the Secretary of Commerce when a foreign person rents server space to train large AI models. Under this provision, U.S. IaaS providers must prohibit their foreign resellers from providing the U.S. IaaS product unless the foreign reseller reports each instance in which a foreign person transacts for the U.S. IaaS product. The Secretary of Commerce has 90 days to propose regulations on the reporting requirements. Within 180 days, Commerce will propose regulations for U.S. IaaS providers to ensure that the foreign resellers verify the identity of any foreign person that obtains an IaaS account.
Executive Order Details
The Executive Order establishes a White House Artificial Intelligence Council to coordinate all executive branch activities on AI. The deputy chief of staff for policy will chair the council and direct the efforts of the agencies to carry out the mission of the Executive Order, the elements of which are detailed below.
Cybersecurity
The Executive Order promotes the use of AI technologies to protect against cyber threats.
Transparency
The Executive Order aims to understand the risks posed by synthetic content and reduce risks by fostering capabilities to identify synthetic content.
Privacy
The Executive Order directs activity that will protect Americans’ privacy and civil liberties as AI continues to advance.
Immigration
The Executive Order seeks to attract foreign AI talent and lower barriers to entry.
Competition
One objective of the Executive Order is to create an open and competitive AI market that prioritizes U.S. innovation and supports small companies coming to market.
Copyright
The Executive Order addresses novel intellectual property questions and actions to protect inventors and creators.
Labor
The Executive Order cites a commitment to supporting American workers in the AI transition.
Equity and Civil Rights
The Executive Order works to protect and prioritize equity throughout all government initiatives.
Housing
The Executive Order endeavors to combat unlawful discrimination in decisions about access to housing and other real-estate transactions.
Health
The Executive Order promotes the responsible deployment of AI that accounts for the wellbeing of citizens and potential beneficial uses of AI in the health care sector.
Transportation
The Executive Order supports the safe integration of AI into the transportation sector.
Education
The Order requires the Department of Education to address the safe, responsible and nondiscriminatory uses of AI in education through appropriate documents and resources.
Telecommunications
The Federal Communications Commission is encouraged under the Order to consider how AI will affect communication networks and consumers.
International Collaboration
The Executive Order promotes strategies to strengthen American leadership abroad.
Ongoing Congressional Activity
The Executive is not alone in addressing AI, as members of Congress in both the House and Senate have turned their attention to AI legislation. While the Executive Order is limited to existing spending amounts and authorities, Congress can pass legislation to create new authorities and appropriate additional funding that can affect AI development. Members of both the House and Senate have been active in introducing legislation and holding hearings to lay the groundwork for AI legislation.
Notably, Senate Majority Leader Schumer (D-NY) announced his SAFE Innovation Framework in June, accompanied by the AI Forums, a series of meetings with senators and experts from industry and academia designed to educate the senators on the contours of AI technology. The first AI Forum was attended by 60 senators and hosted prominent AI company leaders. The second AI Forum, on Tuesday, October 24, focused on AI innovation and featured venture capitalists and company leaders working on next-generation AI systems as well as civil society groups. The next forum will be held on Wednesday, November 1, and will focus on AI and the workforce.
Critical to AI legislation in the Senate has been the work of the “Gang of Four”: Senators Rounds (R-SD), Heinrich (D-NM), Schumer (D-NY) and Young (R-IN). Following the second AI Forum, the Gang of Four introduced the Artificial Intelligence Advancement Act of 2023 (S. 3050), which would establish an artificial intelligence bug bounty program and require separate reports on the use of AI platforms in financial services; vulnerabilities of AI-enabled military applications; and data sharing and coordination. Also following the AI Forum, Senators Schatz (D-HI) and Kennedy (R-LA) introduced the Schatz-Kennedy AI Labeling Act (S. 2691) to provide transparency around AI-generated content.
Another important milestone was the introduction of the bipartisan framework for AI legislation by Senators Blumenthal (D-CT) and Hawley (R-MO), who serve as the chair and ranking member, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology and the Law. The bipartisan framework proposes a licensing regime targeting “sophisticated general-purpose AI models” to be administered by an independent oversight body. The framework also provides that Section 230 liability protections would not apply to AI and provides measures to promote transparency and protect children. Finally, the framework urges Congress to use export controls, sanctions and other restrictions to limit transfers of advanced AI models that can be used by foreign adversaries or used in human rights violations. The work of the senators and the Subcommittee has created critical momentum in this space, and the senators expect to produce draft text by the end of the year.
Legislative activity has spurred conversations around the benefits and risks of AI; however, Schumer has warned that Congress will likely not pass holistic legislation addressing AI until next year.
Pillsbury’s multidisciplinary team of AI thought leaders and legal and strategic advisors is an industry leader in strategic promotion of responsible and beneficial AI. Pillsbury is closely monitoring AI-related legislative and regulatory efforts. Our AI team helps startups, global corporations and government agencies navigate the landscape impacted by emerging developments in AI. For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.