Takeaways

Associations will need safeguards in place to fact-check AI-generated content and to avoid copyright infringement.
Confidentiality and data privacy are key considerations in the use of generative AI.
Associations should prepare for a shifting legal landscape around generative AI.

There has been a rush of news and debate around Artificial Intelligence (AI) since the launch of ChatGPT in late 2022. AI is nothing new; you likely interact with it every day via spellcheck, virtual assistants and email spam filters. Generative AI, however, and its mass adoption for both personal and professional use, is a more recent phenomenon, and you may not have considered the legal implications and potential impact on your association.

The Upside
AI can create new opportunities for associations. For example, AI could improve member engagement by tracking and analyzing the type of content that each member clicks on to deliver personalized communications or webpages. Your association could also automate data analysis and repetitive tasks and processes to allow staff to focus on more creative work. It might give you a head start on preparing written reports, educational materials or presentations.

The same is true of generative AI. While generative AI can be leveraged by your association and its members in many exciting ways (including but not limited to content creation), it is important to understand and plan for potential risks. Below, we discuss some of the most common concerns among association executives regarding generative AI, all of which underscore the need for associations to develop and regularly update a comprehensive AI policy.

Accuracy and Reliability
From conference presentations and webinars, to white papers and other educational materials, to industry standards and credentialing examinations, content is key for educating the industry and the public, and for advancing the association’s mission. It is therefore vital that this content be accurate and reliable. While generative AI may be useful in producing draft content, care should be taken to ensure that the content is based on reliable sources and is factually correct.

In a now highly publicized case, a judge sanctioned an attorney who cited fake cases in a court filing prepared with the help of ChatGPT. This type of fabricated content is called an “AI hallucination,” and it is a concern for anyone who plans to leverage generative AI to produce content.

Associations should put in place safeguards to fact-check AI-generated content (AIGC). Otherwise, you risk relying on false, misleading or even fraudulent content, exposing your association to legal and reputational risk. In many cases, a lack of intent will not insulate the association from legal liability or from the reputational harm that may result from reliance on inaccurate AIGC.

Copyright Ownership and IP Issues
Two key issues warrant associations’ attention in the fast-evolving legal landscape surrounding the IP implications of AIGC: generative AI authorship and infringement liability for the use of AIGC.

While the law continues to evolve on the generative AI authorship issue, the U.S. Copyright Office and at least one federal district court have taken the position that human authorship is a prerequisite to copyright protection (known as the “Human Authorship Requirement”) and, accordingly, that AIGC is not protected by copyright law absent the requisite degree of human involvement. Associations should therefore take the time to understand where the content they produce and use comes from, and how it is generated, to ensure that they own or have the appropriate licenses to use the content.

The law is also still evolving on infringement liability for use of AIGC, with many test cases percolating in the courts on AIGC platform liability for unauthorized use of third-party content in training sets. While at least one court has cast doubt on some training set infringement theories, it remains true that the creation and/or use of AIGC may infringe a third party’s IP rights and expose an association to infringement liability, whether or not the association knows the origin of that content (and the fact that it is infringing). This risk arises not only through potentially infringing training sets, but also through user interaction (e.g., users can upload infringing content to generate AIGC or use keywords that invoke infringing AIGC).

Critically, associations may engage with generative AI and AIGC through members, third-party contractors or even employees without knowing it. Your members could already be using generative AI when contributing content to your association, including in educational presentations, association publications or blogs, and in collectively written materials like standards, credentialing criteria and best practice guidelines. Your employees or contractors may also choose to complete assignments using generative AI. Though associations are well-versed in securing IP rights to content they produce, the Human Authorship Requirement means that the transferring party may not own any rights in the AIGC being “transferred.”

That is why it is now imperative not only to have applicable contract provisions in place with content contributors addressing IP rights, but also to advise contributors specifically of the status of AIGC and whether and in what way it may be used, and to confirm how content was created. This could mean providing them with a list of tools they are or are not permitted to use and, where tools are permitted for some purposes, spelling out those restrictions and guardrails. Generative AI platforms vary starkly with regard to the strength of their IP policies and moderation practices, and associations should choose wisely.

Ambitious associations might consider developing in-house generative AI technology, which would likely require training set data. As noted above, copyright infringement and fair use questions around training set use are still being litigated, and a “one size fits all” answer is unlikely given the variability in generative AI technology and the ways platforms leverage it. As such, licensing training content is, and will likely remain, a best practice for avoiding costly litigation.

Privacy and Security Considerations
AIGC may be only as good as the information a generative AI platform is trained on, and user inputs may become part of that training data. Associations must therefore take care when entering information into any platform that is not secure, and should not enter confidential or proprietary information at all: doing so likely constitutes a public disclosure that may allow third parties to access the information and/or deprive the owner of the legal right to protect it as confidential. For associations, this issue can arise in various contexts. For example, a credentialing organization considering using generative AI to draft test questions should consider whether this can be done in compliance with the U.S. Copyright Office’s secure test requirements.

Further, associations should be sensitive to data privacy issues, including by instructing employees not to input sensitive data or personally identifiable information into AI platforms. The European Union is already taking action to regulate the use of generative AI platforms, and this issue is likely to make its way into state and federal laws and regulations in the United States.

Putting It All Together
Associations should consider adopting AI policies that address the use of generative AI by employees, volunteers and others who create content or have access to association proprietary information. These policies should address when generative AI may be used in creating content and set any rules around such use: for example, which platforms may or may not be used and for what purposes, along with any fact-checking requirements. Associations may also want to make clear what organizational information may or may not be input into generative AI platforms.

In addition, associations should keep up to date on the changing laws regulating AI. For instance, Congress is currently considering legislation requiring a conspicuous disclaimer wherever output has been generated by AI. That said, even absent new legislation, existing federal and state false advertising laws can reach current uses of AIGC, so associations would be wise to adopt an AI policy that requires such a disclaimer in appropriate circumstances.

While these are new and still-evolving legal issues, your association should consider the legal implications of AIGC now, as mass adoption of these tools makes it increasingly difficult, if not impossible, for associations to sit on the sidelines.

**********

A version of this article first appeared in ASAE’s Associations Now publication.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.