Takeaways

  • The guidance outlines four principles to help critical infrastructure operators govern AI in OT environments.
  • Vendors of AI-enabled OT products may face heightened transparency expectations, including model disclosures and safety reporting.
  • Organizations may need to update governance, data practices and incident-response plans as regulators increase scrutiny of AI.

On December 3, 2025, CISA, the NSA, the FBI and several international cyber authorities released Principles for the Secure Integration of Artificial Intelligence in Operational Technology, a joint framework aimed at helping critical infrastructure operators deploy AI safely and responsibly.

The publication arrives at a pivotal moment. AI capabilities are moving quickly from experimentation into core industrial processes, and many operators are now confronting what “secure” integration really means for systems that cannot tolerate failure.

Yet AI’s growing presence in operational technology (OT) environments brings with it technologies, risks and operational dependencies that remain unfamiliar territory for many engineering, cybersecurity and legal teams. The new principles offer a useful high-level roadmap, but they stop short of answering the practical questions organizations must navigate: how to operationalize governance, evaluate vendors, allocate liability and incorporate AI considerations into long-standing safety and compliance frameworks.

Key Takeaways from the Guidance: Summary of the Four Principles
The agencies distill their recommendations into four guiding principles, each aimed at reducing risk while enabling the responsible adoption of AI in OT environments.

1. Understand AI
Stakeholders must understand how AI systems function and how their introduction can reshape traditional OT risk models. Effective integration requires technical insight as well as awareness of how AI may influence safety, reliability and day-to-day operations.

  • Unique Risks of AI in OT. The guidance identifies several AI-specific risks not typically seen in OT, including model manipulation, data poisoning, prompt injection, data-quality issues that lead to incorrect outputs, model drift that reduces accuracy over time, and limited explainability that complicates audits and incident analysis. False alarms or AI-generated errors may also increase operator burden and distract from critical decision-making. (A simplified drift-monitoring sketch follows this list.)
  • Secure AI System Development Lifecycle. The agencies encourage a structured approach for AI systems that covers secure design, procurement, deployment and long-term operations. This model mirrors shared-responsibility practices used in cloud environments and requires clear allocation of security and safety obligations among owners, vendors and integrators.
  • Personnel Training and Preparedness. AI changes how operators and engineers interact with OT systems. Personnel must be trained to interpret outputs, validate recommendations and recognize anomalous behavior, while also maintaining manual competencies so operations remain safe if AI functions degrade or fail.
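
To make the model-drift risk noted in the first bullet above more concrete, the following is a minimal, illustrative sketch (in Python, and not drawn from the guidance itself) of how an operator might compare a model's recent prediction error against the error baseline established at commissioning and flag drift for engineering review. The class name, window size and thresholds are hypothetical placeholders.

```python
from collections import deque

class DriftMonitor:
    """Flags when a model's recent error drifts well past its commissioning baseline.

    baseline_mae: mean absolute error measured during acceptance testing.
    window: number of recent samples to average over.
    ratio_limit: how many multiples of the baseline error are tolerated before alarming.
    (All values here are illustrative placeholders, not figures from the guidance.)
    """

    def __init__(self, baseline_mae: float, window: int = 500, ratio_limit: float = 2.0):
        self.baseline_mae = baseline_mae
        self.errors = deque(maxlen=window)
        self.ratio_limit = ratio_limit

    def observe(self, predicted: float, measured: float) -> bool:
        """Record one prediction/measurement pair; return True if drift is suspected."""
        self.errors.append(abs(predicted - measured))
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough recent data to judge drift yet
        recent_mae = sum(self.errors) / len(self.errors)
        return recent_mae > self.ratio_limit * self.baseline_mae


# Example: feed the monitor as the model runs alongside real sensor readings.
monitor = DriftMonitor(baseline_mae=0.8)
if monitor.observe(predicted=71.2, measured=74.9):
    print("Model drift suspected -- route output for engineering review.")
```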

2. Assess AI Use in the OT Domain
Organizations should critically assess whether AI is appropriate for a given OT application and understand the broader operational implications of introducing AI into highly sensitive environments.

  • OT-Specific Business Case Assessment. AI should be deployed only where it offers a clear advantage over traditional automation. Organizations must evaluate performance requirements, system complexity, cost and potential safety impacts, as well as their capacity to maintain AI systems given the expanded attack surface and ongoing resource needs. A predictive-maintenance example in the guidance illustrates this approach.
  • Managing OT Data Security Risks. AI’s reliance on large volumes of operational data raises concerns about data assurance, sovereignty and privacy. Sensitive engineering data aggregated for AI may become more attractive to adversaries, and legacy OT architectures and data silos complicate secure integration. Effective model development often requires domain expertise to ensure data quality and capture safety-critical edge cases.
  • Vendor Roles in AI Integration. As more OT devices incorporate embedded AI, the guidance emphasizes increased transparency and contractual control. This includes software supply-chain disclosures, software bills of materials (SBOMs) for AI components, information on hosting locations and external connections, and identification of unsafe model behaviors, along with the ability to disable AI features and impose data-usage restrictions.
  • Integration Challenges. AI can introduce new complexities and vulnerabilities, including latency constraints, cloud-based SCADA risks and compatibility issues with older systems. Recommended mitigations include testing in non-production environments before deployment, strict network segmentation, push-based data architectures and preserving the ability to revert to manual or deterministic control. (A simplified push-based export sketch follows this list.)
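
To illustrate the push-based data pattern referenced in the mitigation list above, the sketch below has the OT side initiate every transfer, posting read-only telemetry snapshots outbound so that nothing on the OT network needs to accept inbound connections from the analytics layer. The endpoint URL, payload fields and schedule are assumptions for illustration only.

```python
import json
import time
import urllib.request

# Hypothetical ingest endpoint on the analytics/AI side of a DMZ or data diode;
# the OT network initiates every connection, so no inbound listener is required.
ANALYTICS_URL = "https://historian-dmz.example.internal/ingest"

def read_process_snapshot() -> dict:
    """Placeholder for a read-only pull from the historian or PLC gateway."""
    return {"asset": "pump-07", "flow_m3h": 412.6, "vibration_mm_s": 2.1,
            "timestamp": time.time()}

def push_snapshot(snapshot: dict) -> None:
    """Push one snapshot outbound; the OT side never accepts a connection."""
    body = json.dumps(snapshot).encode("utf-8")
    request = urllib.request.Request(
        ANALYTICS_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # discard the reply; nothing flows back into control logic

if __name__ == "__main__":
    while True:
        push_snapshot(read_process_snapshot())
        time.sleep(60)  # push on a fixed schedule rather than on demand
```

In practice this pattern is typically enforced with data diodes or strict firewall policy; the sketch only illustrates the direction in which connections are initiated.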

3. Establish AI Governance and Assurance Frameworks
The guidance encourages organizations to implement governance structures and technical processes that support secure, predictable and accountable use of AI in OT environments.

  • Governance Mechanisms. Governance should involve leadership, OT and information technology (IT) subject-matter experts, cybersecurity teams, and relevant vendors. Clear roles across the AI lifecycle, strengthened data governance through access controls, encryption and behavioral analytics, and regular audits help ensure models operate as intended.
  • Integration Into Existing Security Frameworks. AI should be incorporated into established OT risk-management processes. This includes adding AI-specific risk assessments, implementing enhanced monitoring such as egress logging and data-loss protections, and using threat-modeling resources such as MITRE ATLAS. (An illustrative egress-monitoring sketch follows this list.)
  • Thorough Testing and Evaluation. The agencies stress rigorous testing before deployment, recommending staged environments from low-fidelity simulations to hardware-in-the-loop evaluations. Organizations should avoid exposing production data in testing and require vendors to provide transparency about dependencies and operational assumptions.
  • Regulatory and Compliance Considerations. The absence of AI standards tailored to OT may create compliance uncertainty. Limited explainability complicates auditability and safety assessments. Organizations should track emerging ETSI SAI standards and define clear criteria for reverting to non-AI modes if safety or performance thresholds are not met.
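
As a rough illustration of the egress-logging point above, the sketch below tallies outbound bytes per destination from a simple flow log and flags destinations that exceed an approved baseline or have no baseline at all. The log layout, file name and thresholds are assumptions for illustration, not values prescribed by the guidance.

```python
import csv
from collections import defaultdict

# Assumed log layout: timestamp, source_ip, dest_host, bytes_out (one row per flow).
LOG_PATH = "ot_egress_flows.csv"
BASELINE_BYTES = {"historian-dmz.example.internal": 50_000_000}  # approved per-day norms
SPIKE_FACTOR = 3.0  # flag anything 3x above its baseline (illustrative only)

totals = defaultdict(int)
with open(LOG_PATH, newline="") as handle:
    for row in csv.DictReader(handle):
        totals[row["dest_host"]] += int(row["bytes_out"])

for dest, sent in sorted(totals.items(), key=lambda item: -item[1]):
    baseline = BASELINE_BYTES.get(dest)
    if baseline is None:
        print(f"REVIEW: {dest} has no approved baseline ({sent} bytes sent)")
    elif sent > SPIKE_FACTOR * baseline:
        print(f"ALERT: {dest} sent {sent} bytes, more than {SPIKE_FACTOR}x its baseline")
```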

4. Embed Security and Safety Into AI and AI-Enabled OT Systems
Regardless of how sophisticated AI capabilities become, operators remain responsible for safety and must ensure robust oversight and dependable fallback mechanisms.

  • Monitoring and Oversight Mechanisms. Organizations should maintain an inventory of AI components and dependent systems, log AI inputs and outputs, and establish known-good states to support troubleshooting. Human-in-the-loop oversight remains critical for safety functions. Tools such as anomaly detection, behavioral analytics and red-team testing help validate resilience.
  • Safety and Failsafe Mechanisms. AI-enabled systems should include documented failure states and the ability to bypass or disable AI quickly. Operators must retain the ability to revert to manual or deterministic control. The guidance also recommends updating incident-response plans to account for AI compromise or manipulation and using push-based or unidirectional architectures to preserve OT segmentation and minimize attack paths. (A simplified failsafe-wrapper sketch follows this list.)
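
To show the kind of failsafe behavior the guidance describes, the sketch below wraps an AI-generated setpoint in deterministic checks and reverts to a conventional default whenever the output is missing, stale, out of physical range or changing too quickly. All limits, names and values are hypothetical; real figures would come from the plant's own safety analysis rather than from the AI vendor.

```python
import time

# Illustrative plant limits; in practice these come from the safety case, not the AI system.
VALVE_MIN, VALVE_MAX = 0.0, 100.0        # percent open
MAX_STEP = 5.0                           # largest allowed change per control cycle
MAX_AGE_SECONDS = 10.0                   # reject stale AI outputs
FALLBACK_SETPOINT = 40.0                 # deterministic default from existing control logic

def apply_ai_recommendation(ai_value, ai_timestamp, last_setpoint):
    """Return a safe setpoint: accept the AI value only if it passes every check."""
    now = time.time()
    if ai_value is None or now - ai_timestamp > MAX_AGE_SECONDS:
        return FALLBACK_SETPOINT                      # AI silent or stale: revert
    if not (VALVE_MIN <= ai_value <= VALVE_MAX):
        return FALLBACK_SETPOINT                      # physically impossible output
    if abs(ai_value - last_setpoint) > MAX_STEP:
        return last_setpoint + MAX_STEP * (1 if ai_value > last_setpoint else -1)
    return ai_value                                   # within bounds: accept

# Example cycle: an implausible AI output is rejected rather than passed to the process.
print(apply_ai_recommendation(ai_value=250.0, ai_timestamp=time.time(), last_setpoint=42.0))
```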

Regulatory and Legal Implications
The guidance carries significant regulatory and contractual implications for critical infrastructure operators. As AI systems become more deeply embedded in OT environments, the regulatory landscape will increasingly turn on how operational data is governed, how vendor responsibilities are structured and how liability is allocated across complex technical ecosystems.

Data Governance and Sovereignty
AI systems expand the volume, sensitivity and retention of OT data, which in turn elevates regulatory exposure and cybersecurity obligations. Cross-border data access by AI vendors may introduce jurisdictional complications, particularly in regions where foreign laws could mandate disclosure of training or operational datasets. These developments point to a growing emphasis on data-use limitations, residency expectations and enhanced auditability in commercial and regulatory settings.

Vendor Contracting and SBOM Expectations
As OT products incorporate embedded AI, regulatory trends and industry expectations increasingly favor transparency regarding AI features, hosting arrangements and model supply chains. SBOMs for AI components are emerging as a common requirement across U.S. and international frameworks. These dynamics suggest that future contracting may place greater focus on feature control, data-use boundaries, model safety disclosures and timely notification of changes that could affect risk.

Liability Exposure from AI-Enabled OT Systems
The integration of AI into safety-critical systems introduces uncertainty regarding responsibility when failures occur. Liability risks may arise when AI-generated outputs contribute to physical harm or environmental damage, or when governance gaps lead to inadequate oversight throughout the AI lifecycle. Explainability challenges further complicate post-incident analysis, raising questions about how causation and fault will be established when AI decision paths cannot be reconstructed or justified.

Cross-Border Considerations
Many AI vendors rely on offshore development teams, cloud infrastructure or remote-access models that raise regulatory concerns. Foreign jurisdictions may assert rights over data processed or stored abroad, and operators must assess whether vendor practices align with U.S., EU or sector-specific rules. These issues may require enhanced due diligence and more sophisticated contractual protections.

Incident Response and Regulatory Reporting
AI introduces new failure modes, including manipulation and unanticipated degradation, that may challenge traditional incident-response frameworks. Limited transparency into model behavior can complicate investigations and regulatory reporting, particularly in industries with stringent oversight. Regulators may also expect operators to retain the ability to revert to non-AI operating modes to maintain safety and operational continuity during an incident.

Practical Steps for Organizations Integrating AI
While the guidance outlines high-level principles, organizations ultimately need concrete steps to operationalize AI safely within OT environments. Several measures can help translate those principles into practice.

  1. Governance and Organizational Readiness. Establish a cross-functional governance structure that includes OT, IT, legal, procurement and compliance stakeholders. Roles and responsibilities should be clearly defined across the AI lifecycle, and systems should incorporate explainability features to support operator understanding and effective oversight.
  2. Data Strategy and Protection. Classify OT datasets and apply heightened protections to engineering configuration files and sensitive process telemetry. Organizations may also need policies governing how vendors handle operational data, as well as monitoring tools that detect unexpected egress or anomalous access patterns.
  3. Vendor and Supply Chain Management. Update procurement processes to reflect AI-specific expectations, including requirements for SBOMs, model transparency and rights to disable or constrain AI functionality. Contracts should address timely notification of model changes or identified safety issues and ensure alignment with secure-by-design principles. (A simplified SBOM review sketch follows this list.)
  4. Technical Integration Actions. Use testbeds, simulations and hardware-in-the-loop environments before introducing AI into production systems. Technical safeguards, such as push-based data architectures, limits on inbound vendor access, anomaly detection and defined thresholds for fallback to manual control, can reduce attack surface and enhance operational resilience.
  5. Workforce Training and SOP Modernization. Update training programs and standard operating procedures (SOPs) to ensure operators can interpret AI outputs, validate recommendations and intervene effectively when AI components behave unpredictably. Organizations should regularly test operator readiness to maintain safe operations in degraded or non-AI modes.
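
As a small illustration of the SBOM expectation in item 3 above, the sketch below reads a vendor-supplied, CycloneDX-style SBOM (JSON) and lists any components declared as machine-learning models so they can be carried into the organization's AI inventory. The file name, and the assumption that AI components are tagged with a machine-learning-model component type, are illustrative.

```python
import json

SBOM_PATH = "vendor_controller_sbom.json"  # hypothetical CycloneDX-style SBOM from the vendor

with open(SBOM_PATH) as handle:
    sbom = json.load(handle)

# CycloneDX lists everything under "components"; newer versions can tag
# machine-learning models with a dedicated component type.
ml_components = [c for c in sbom.get("components", [])
                 if c.get("type") == "machine-learning-model"]

if not ml_components:
    print("No AI/ML components declared -- confirm with the vendor before accepting.")
for component in ml_components:
    print(f"{component.get('name', 'unnamed')} {component.get('version', '?')} "
          f"supplier={component.get('supplier', {}).get('name', 'unknown')}")
```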

Pillsbury’s Cybersecurity and Artificial Intelligence teams continue to track ongoing developments closely and regularly assist clients with the operational and legal considerations surrounding AI-enabled systems. We are available to support organizations as they evaluate how best to incorporate these principles into their OT and enterprise risk-management programs.

These and any accompanying materials are not legal advice, are not a complete summary of the subject matter, and are subject to the terms of use found at: https://www.pillsburylaw.com/en/terms-of-use.html. We recommend that you obtain separate legal advice.