Alert
02.25.26
In January and February 2026, the National Institute of Standards and Technology (NIST), through its Center for AI Standards and Innovation (CAISI), launched a new AI Agent Standards Initiative to support the development of interoperable and secure AI agent systems, issued a Request for Information (RFI) on securing AI agent systems, and announced a series of virtual listening sessions to identify barriers to artificial intelligence adoption, including in the financial sector.
The era of merely experimenting with autonomous AI agents is quickly giving way to enterprise deployment. As organizations integrate AI agents into production environments, new opportunities for efficiency and innovation are emerging, along with heightened compliance, governance and cybersecurity risks. NIST’s recent actions signal increased federal focus on interoperability, identity management and security controls for AI agent systems.
Stakeholders that wish to help shape the emerging framework for AI agent governance should consider engaging with NIST and CAISI through the available public comment processes and listening sessions, with participation deadlines occurring in March and April 2026.
The AI Agent Standards Initiative
NIST’s AI Agent Standards Initiative is intended to support the development of industry-led technical standards and open protocols for autonomous AI agent systems. Since enterprise-level agents interact with APIs, databases and other digital infrastructure, standardization efforts are likely to focus on promoting secure, reliable and interoperable deployment at scale.
Although the Initiative is in its early stages, anticipated areas of focus include interoperability, identity management and security controls for AI agent systems deployed at enterprise scale.
Given NIST’s historical role in shaping widely adopted cybersecurity and risk management standards, this Initiative may serve as a precursor to future guidance or frameworks that influence procurement requirements, third-party risk management, vendor diligence and, potentially, supervisory expectations in regulated sectors. Many organizations already regard NIST’s AI Risk Management Framework as the lodestar for AI governance programs, and this Initiative would extend such industry standards into the arena of agentic AI.
CAISI Request for Information on Securing AI Agent Systems
CAISI issued an RFI seeking stakeholder input on security considerations unique to AI agent systems. The RFI is intended to inform future guidance, research priorities and potential standards development relating to the secure design and deployment of autonomous agents.
The request may be particularly relevant for financial institutions and other regulated entities deploying or evaluating AI agents in customer-facing applications, trading and market functions, compliance and risk automation, and internal workflow orchestration. As AI agents begin operating across production systems and interacting with sensitive data and critical infrastructure, CAISI’s inquiry signals growing federal attention to the risk management frameworks surrounding these tools.
The RFI seeks input on, among other topics, security considerations unique to AI agent systems and the secure design and deployment of autonomous agents.
Comments are due on March 9, 2026.
CAISI Listening Sessions on Barriers to AI Adoption
In addition to the RFI, CAISI has announced a series of virtual listening sessions scheduled for April 2026 to gather stakeholder perspectives on barriers to AI adoption. The sessions are intended to collect sector-specific feedback from the financial services, health care and education sectors.
CAISI seeks input regarding technical, operational and regulatory challenges associated with deploying AI systems and AI agents in production environments. Feedback from these sessions will inform future standards initiatives, research priorities and potential guidance relating to AI agent systems.
Organizations seeking to participate must submit a request to attend, along with relevant examples or areas of experience, by March 20, 2026.
NCCoE Project on Software and AI Agent Identity and Authorization
Separately, NIST’s National Cybersecurity Center of Excellence (NCCoE) has released a concept paper titled “Accelerating the Adoption of Software and AI Agent Identity and Authorization.” The project is intended to explore practical, standards-based approaches for authenticating software and AI agents, defining permissions and implementing authorization controls in enterprise environments.
According to NCCoE, the effort may lead to a demonstration project designed to show how existing identity and access management standards can be applied to AI agents operating across APIs, databases and other digital infrastructure. The initiative underscores NIST’s growing focus on identity governance as a foundational control for autonomous systems.
Stakeholders have until April 2, 2026, to submit public comment.
Recommended Actions
While the NIST and CAISI initiatives described above are not binding regulations, their frameworks frequently influence supervisory expectations, procurement standards and industry best practices, especially in regulated sectors such as financial services.
Given the potential impact of these AI policy developments, stakeholders should take proactive steps to engage with NIST before the deadlines set forth above, including engaging legal counsel to assist in preparing and submitting responses and in assessing the potential impact on their organizations.
Pillsbury’s Artificial Intelligence team is available to advise clients on the implications of NIST’s AI agent initiatives and to assist in preparing comments or evaluating governance frameworks for AI agent deployment.