Takeaways

United States: The Administration and Congress are taking initial steps toward legislation to regulate AI while relying on interim safeguards, such as the White House's recently announced voluntary agreement with seven prominent generative AI companies to provide minimum guardrails for safety, security and public trust.
EU and UK: The EU is expected to finalize the EU AI Act, which will classify AI usage based on risk levels, by late 2023. In the UK, a white paper issued by the government in March 2023 empowers sectoral regulators to regulate AI within their jurisdictions and signals the government's plan to establish central functions to support them.
China: In July, China issued its first regulations on generative AI technology, introducing significant obligations for service providers, including content monitoring, marking and data sourcing, while emphasizing the protection of users' personal information through service agreements outlining each party's responsibilities.

This article discusses the latest legislative and regulatory developments on generative AI in the United States (U.S.), the European Union (EU), the United Kingdom (UK) and the People’s Republic of China (China or the PRC).

United States
Congressional leaders are intensifying efforts to develop legislation directing agency regulation of AI technology. In June, Senate Majority Leader Chuck Schumer (D-NY) publicly announced the SAFE Innovation Framework, which sets priorities for AI legislation focused on security, accountability, protecting our foundations and explainability. The framework's goal is to deliver security without compromising innovation. Although passage in 2023 is uncertain, Congress can be expected to continue introducing legislation, holding hearings and convening AI forums throughout the rest of the session, giving industry several opportunities to engage with representatives and senators before legislation is enacted.

Also, at a May 16 hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, several senators indicated support for the creation of a new federal agency dedicated to regulating AI, whose remit could include licensing AI technology. There were also calls for creating an international AI regulatory body modeled after the International Atomic Energy Agency.

On May 23, the White House announced three new steps to advance the research, development and deployment of AI technology nationwide. In addition, the Office of Science and Technology Policy (OSTP) completed a public comment period soliciting input for a comprehensive National AI Strategy focused on promoting fairness and transparency in AI while maximizing its benefits. The feedback will be made public and will inform the next stage of OSTP's development of the National AI Strategy.

U.S. federal agencies are also engaging with AI as it intersects with their respective jurisdictional and legislative authority, often issuing guidance explaining how the agency will apply existing law to violations involving AI. For example, the Federal Trade Commission (FTC) has been active in policing deceptive and unfair practices attributed to AI, particularly by enforcing statutes such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act and the FTC Act.

European Union
The EU has also made steady progress in shaping its proposed AI law, known as the "AI Act," which has entered the final stage of the legislative process. The aim is to agree on a final version of the law by the end of 2023, after which there will likely be a 24-month transition period before it applies.

The proposed AI Act classifies AI usage based on risk levels, prohibiting certain uses (for example, real-time biometric identification surveillance systems used in public places and subliminal techniques that may cause harm) and imposing more stringent monitoring and disclosure requirements on high-risk applications than on lower-risk ones.

The EU's objective is to ensure that AI developed and used within Europe aligns with the region’s values and rights, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being. The proposed penalties for breach could be as high as 7% of global annual revenue or €40 million, whichever is higher.

For a more detailed analysis of the AI Act, see here.

Further, on September 28, 2022, the European Commission proposed a new law, known as the "AI Liability Directive," aimed at adapting non-contractual civil liability rules to AI. The proposed law, which is closely tied to the AI Act, aims to establish uniform rules for damage caused by AI systems, providing broader protection for victims and fostering the AI sector by increasing legal certainty. It would address the specific difficulties of proof associated with AI by requiring EU Member States to empower national courts to order the disclosure of relevant evidence about specific high-risk AI systems. The proposed law will affect both users and developers of AI systems, giving developers clarity about accountability in the event of an AI system failure and making it easier for victims of harm caused by AI systems to recover compensation. Negotiations on the new law are ongoing, and it is not yet clear when it will be adopted.

United Kingdom
On March 29, 2023, the UK government released a white paper outlining its pro-innovation approach to AI regulation. Rather than creating new laws or a separate AI regulator, the government proposes, as things stand, to empower existing sectoral regulators to regulate AI in their respective sectors. The focus is on enhancing existing regimes to cover AI and avoiding heavy-handed legislation that could hinder innovation.

The proposed regulatory framework outlined in the white paper defines AI by reference to two key characteristics: adaptivity and autonomy. The white paper holds that by defining AI with reference to these characteristics and designing the regulatory framework to address the challenges they create, UK lawmakers can future-proof the framework against unanticipated new technologies.

The white paper also sets out five "values-focused cross-sectoral" principles that regulators should adhere to when addressing the risks associated with AI. The principles are: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

The principles build on, and reflect the UK government's commitment to, the Organisation for Economic Co-operation and Development’s (OECD) values-based AI principles, which promote the ethical use of AI. The aim of the principles-based approach is to allow the framework to be agile and proportionate. While not legally binding at the outset, the UK government anticipates that the principles may become enforceable in the future, depending on the evolving landscape of AI technology and its societal impact.

In addition to these principles, the UK government plans to establish central functions to support regulators in their AI oversight roles and to ensure the regulatory framework operates proportionately and supports innovation. The white paper is silent on which specific entity or entities will undertake these central functions.

Alongside the white paper, the UK government announced £2 million in funding for a new sandbox that will enable AI innovators to test new AI products prior to market launch and to explore how AI regulations could apply to those products.

Following publication of the white paper, the UK government will continue to work with businesses and regulators as it begins to establish the identified central functions. The UK government will publish an AI regulatory roadmap alongside its response to the consultation on the white paper. In the longer term, 12 months or more after publication of the white paper, the UK government plans to implement all central functions, support regulators in applying the cross-sectoral principles, publish a draft AI risk register, develop the regulatory sandbox, and release a monitoring and evaluation report assessing the framework’s performance.

China
On July 13, 2023, the Cyberspace Administration of China (CAC) issued the final version of the Interim Administrative Measures for Generative Artificial Intelligence Service (PRC AI Regulations). The regulations apply to the use of generative AI technology to provide content-generation services to the public within the PRC (Generative AI Services). The PRC AI Regulations explicitly exclude from their scope industry organizations, enterprises, academic and research institutions, and public cultural institutions that engage in the research, development and application of generative AI technology without providing services to the public.

The PRC AI Regulations impose significant obligations on providers of Generative AI Services, including monitoring and controlling content generated by their services. Providers must promptly remove illegal content, take action against users engaged in illegal activities and report to the authorities. Providers must also mark generated content with appropriate labels, use legitimate sources for training data, respect intellectual property rights and obtain consent for the processing of personal information. Reiterating China's existing cybersecurity and personal privacy rules, the PRC AI Regulations mandate the protection of users' personal information and prohibit the illegal collection and sharing of identifiable data.

China is also likely to adopt an industry-oriented regulatory model, with different governmental departments regulating Generative AI Services within their specific fields. Industry-specific AI regulations and classification guidelines are expected to be introduced.

The PRC AI Regulations are the latest addition to China's AI regulatory framework, following the Administrative Provisions on Algorithm Recommendation for Internet Information Services (Algorithm Provisions, effective as of March 1, 2022) and the Administrative Provisions on Deep Synthesis of Internet Information Services (Deep Synthesis Provisions, effective as of January 10, 2023).

The Algorithm Provisions apply to any entity that uses algorithm recommendation technologies (including, without limitation, technologies for generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling decision-making) to provide internet information services within mainland China. Among other things, the Algorithm Provisions require an algorithm recommendation service provider (which could include a Generative AI service provider) with a public opinion attribute or social mobilization capability to carry out a safety assessment in accordance with the applicable regulations and to complete online record-filing formalities within 10 working days from the date it begins to provide services.

The Deep Synthesis Provisions regulate the provision of internet information services in mainland China using "deep synthesis technologies," defined as "technologies that use generative sequencing algorithms, such as deep learning and virtual reality, to create text, images, audio, video, virtual scenes, or other information." The Deep Synthesis Provisions set out a comprehensive set of responsibilities for deep synthesis service providers and technical supporters concerning data security and personal information protection, transparency, content management and labeling, and technical security. Under the Deep Synthesis Provisions, a Generative AI service provider is required to mark content generated by Generative AI Services, such as pictures and videos.

Overall, together with China's existing cybersecurity and data privacy rules, the PRC AI Regulations aim to establish a framework for the responsible and transparent use of Generative AI Services, imposing significant responsibilities while offering service providers some flexibility. The Chinese authorities have placed more emphasis on industrial policies that encourage AI innovation and large-scale industrial application than on restricting the development of AI technologies, and this emphasis is reflected in the PRC AI Regulations.

For our detailed discussion analyzing China's Interim Administrative Measures for Generative Artificial Intelligence Service, see China Finalizes Its First Administrative Measures Governing Generative AI.

Conclusion
The global landscape of AI governance features diverse strategies. The EU is moving toward comprehensive, risk-based legislation; the UK favors sector-specific oversight by existing regulators; the United States is taking a decentralized approach of legislative frameworks, agency guidance and voluntary commitments; and China has adopted binding rules for generative AI services while continuing to encourage AI innovation. Companies will need to develop global positions on AI ethics and compliance for their products in order to comply with these new regulations.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.