Takeaways

Four U.S. agencies (the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission) have issued a joint statement establishing a renewed, coordinated effort to enforce existing laws against potential legal violations involving AI programs.
The agencies will work collaboratively across their jurisdictions to monitor whether AI is responsibly developed and deployed.
The agencies’ enforcement efforts focus on protecting the public from bias and discrimination, though the statement also identifies concerns about fraud and consumer confusion.

On April 25, 2023, the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ) Civil Rights Division, the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC) issued a joint statement affirming that they will work collaboratively to enforce existing laws and regulations as applied to potential discrimination and bias in artificial intelligence (AI) systems. Companies that use AI and other automated systems should prepare for greater scrutiny from these agencies.

The Joint Statement
The joint statement highlights that the four agencies have resolved to enforce their collective authorities and “to monitor the development of automated systems.” The joint statement makes clear that “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” 

The joint statement addresses “automated systems,” a broad category that includes AI as well as other software and algorithmic processes used to automate workflows. Both the public and private sectors often use these programs to make processes more efficient. While the statement briefly acknowledges the “promise of advancement” these programs hold, it focuses almost exclusively on the potential risks of AI innovation. In particular, the agencies expressed concern that AI has the potential to perpetuate bias, automate discrimination and produce other harmful outcomes.

The statement identifies three potential sources of discrimination that could result in disparate outcomes and spur enforcement action:

  • Data and datasets: The data on which automated systems rely could themselves be biased or skewed. Disparate results may also occur if the automated systems correlate data with protected classes.
  • Model opacity and access: A lack of transparency into how automated systems operate can make it difficult for developers, businesses and individuals to know whether a system is fair.
  • Design and use: Developers may design systems based on flawed assumptions about their users or the relevant context, and products may be used in contexts the developers did not foresee, either of which can result in AI bias.

The joint statement and the agencies’ press releases highlight actions each of the agencies has previously taken involving AI and automated systems.

The Consumer Financial Protection Bureau
CFPB Director Rohit Chopra released prepared remarks alongside the joint statement noting the CFPB’s focus on what the agency calls “digital redlining” in home valuation and lending models, algorithmic advertising and “black box” credit models. Black box models, which the CFPB has addressed previously, lack transparency into how they produce outcomes, making it more difficult to determine whether those outcomes are biased or discriminatory. The CFPB’s renewed commitment builds on a 2022 compliance circular, which clarified that, under the Equal Credit Opportunity Act, creditors that use complex algorithms in any aspect of their credit decisions must still provide applicants with the reasons for those decisions. The 2022 circular also stated that companies may not use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions that fair lending laws require.

The prepared remarks by Director Chopra also cover concerns not addressed in the joint statement, such as generative AI. Looking forward, the CFPB announced that it will soon release a white paper on chatbots. The CFPB’s press release stated it “is already seeing chatbots interfere with consumers’ ability to interact with financial institutions” and expects to take additional action on generative AI this spring.

The Department of Justice
Assistant Attorney General Kristen Clarke of the Civil Rights Division committed the DOJ to addressing and combating discrimination arising from automated systems. The DOJ previously issued a statement of interest explaining that the Fair Housing Act applies to algorithm-based tenant screening. While that statement of interest was filed in a Massachusetts litigation, the joint statement and press release make clear that the DOJ will continue to address housing discrimination and other potential discrimination cases as they arise.

The Equal Employment Opportunity Commission
EEOC Chair Charlotte Burrows committed the EEOC to raising awareness, helping educate employers, vendors and workers, and utilizing the EEOC’s enforcement authorities to uphold America’s workplace civil rights laws. Chair Burrows underscored that these laws reflect “our most cherished values of justice, fairness, and opportunity” and that the agencies would work jointly to ensure that “AI does not become a high-tech pathway to discrimination.” The EEOC previously issued guidance on the application of the Americans with Disabilities Act to AI programs, clarifying employers’ responsibilities when AI is used to make employment-related decisions.

The Federal Trade Commission
FTC Chair Lina Khan highlighted the FTC’s focus on the use of AI in fraud, as well as in discrimination, as part of its mission to protect and educate consumers. The chair warned that AI can “turbocharge fraud” and stated that the “FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.” These statements build on the FTC’s prior warnings about the potential harms of AI. In June 2022, the FTC published a report that highlighted inaccuracy, bias, discrimination and “commercial surveillance creep” as potential areas of concern for AI. In early 2023, the FTC warned market participants that the agency could take action if developers falsely represented the abilities of AI or failed to mitigate risks before deployment. Finally, the FTC has required developers to delete their algorithms and work product when the underlying data was improperly collected. The FTC will likely continue to take such actions, now in collaboration with the other agencies, against automated systems that may violate the Federal Trade Commission Act.

Preparing for the Agencies’ Increased Enforcement Focus
Companies are understandably interested in the insights and efficiencies that may be gained by deploying AI. However, federal law enforcement agencies are closely monitoring how companies deploy AI and other automated systems, and companies that implement AI should build and maintain a robust compliance framework to proactively address potential regulatory scrutiny. In particular, business, legal and risk teams must understand how AI and other automated systems are developed and how they evolve over time. Companies should enhance their compliance programs now so that they are positioned to respond to regulatory inquiries and deter potential enforcement.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.