NSF — NAISC Standards Framework

AI Security Standards Framework

A comprehensive, versioned, freely available framework for AI security and integrity — covering 10 critical domains.

CC BY 4.0 Licensed · Versioned & Public

Coverage Domains

Each domain includes detailed controls, implementation guidance, and assessment criteria.

01. AI System Security Architecture

Foundational security design principles for AI systems, including secure development lifecycles, threat modeling, and defense-in-depth strategies specific to AI workloads.

- Secure AI SDLC
- Threat modeling for ML pipelines
- Network segmentation for AI infrastructure
- Authentication & authorization for AI services
02. Model Integrity & Tampering Prevention

Protecting AI models from adversarial manipulation, unauthorized modification, and integrity violations throughout their lifecycle.

- Model signing & verification
- Adversarial robustness testing
- Model watermarking
- Anti-tampering controls
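To illustrate the model signing & verification control, here is a minimal sketch: it pins a model artifact to a published SHA-256 digest and rejects any modified bytes. The function names and digest-distribution mechanism are hypothetical, not part of the framework; a production scheme would use asymmetric signatures (e.g., Sigstore or GPG keys) rather than bare hashes so that the trust anchor is a key, not a value shipped alongside the artifact.

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Compute the SHA-256 hex digest of a model artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_model(data: bytes, expected_digest: str) -> bool:
    """Compare against the published digest in constant time,
    so an attacker cannot recover the digest via timing."""
    return hmac.compare_digest(sha256_digest(data), expected_digest)

# Hypothetical artifact: any single-byte change flips the result.
model_bytes = b"weights-v1"
published = sha256_digest(model_bytes)
assert verify_model(model_bytes, published)
assert not verify_model(b"weights-v1-tampered", published)
```

The same check belongs at every lifecycle boundary the domain covers: after download, before loading into the serving runtime, and on each deployment rollout.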
03. Training Data Security & Provenance

Ensuring the security, quality, and traceability of data used to train and fine-tune AI models.

- Data provenance tracking
- Data poisoning prevention
- Secure data pipelines
- Dataset integrity validation
04. Inference & Deployment Security

Securing AI systems in production, from model serving infrastructure to runtime monitoring and anomaly detection.

- Secure model serving
- Input validation & sanitization
- Output filtering
- Runtime anomaly detection
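To make the input validation & sanitization control concrete, a minimal sketch of pre-inference request checking, assuming a text-prompt service. The limits and the `validate_prompt` name are illustrative, not prescribed by the framework:

```python
MAX_PROMPT_CHARS = 4096  # illustrative limit; tune per deployment

def validate_prompt(raw: object) -> tuple[bool, str]:
    """Reject malformed inference requests before they reach the model."""
    if not isinstance(raw, str):
        return False, "prompt must be a string"
    if not raw.strip():
        return False, "prompt must not be empty"
    if len(raw) > MAX_PROMPT_CHARS:
        return False, f"prompt exceeds {MAX_PROMPT_CHARS} characters"
    # Disallow control characters other than newline and tab.
    if any(ord(c) < 32 and c not in "\n\t" for c in raw):
        return False, "control characters are not allowed"
    return True, "ok"

ok, reason = validate_prompt("Summarize this document.")
assert ok
assert not validate_prompt("bad\x00byte")[0]
```

Validation of this kind sits in front of the serving layer; output filtering and runtime anomaly detection then cover what validation cannot catch statically.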
05. AI Supply Chain Security

Managing security risks across the AI supply chain, including third-party models, libraries, frameworks, and cloud services.

- Model supply chain auditing
- Dependency vulnerability scanning
- Vendor risk assessment
- Open-source AI component security
06. AI Incident Detection & Response

Establishing capabilities for detecting, responding to, and recovering from AI-specific security incidents.

- AI-specific SIEM integration
- Incident response playbooks for AI
- Model drift detection
- Automated rollback procedures
07. Governance, Accountability & Auditability

Frameworks for organizational accountability, decision logging, audit trails, and compliance documentation for AI systems.

- AI decision audit trails
- Model governance frameworks
- Compliance documentation
- Board-level AI oversight
08. Privacy & Data Protection in AI Systems

Privacy-preserving techniques and data protection controls specific to AI system development and deployment.

- Differential privacy
- Federated learning security
- PII detection & redaction
- Data minimization for AI
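A minimal sketch of PII detection & redaction using regular expressions. The two patterns below (email address and US SSN) are illustrative only; real deployments layer many detectors, including ML-based ones, and the pattern names are hypothetical:

```python
import re

# Illustrative patterns only; production detectors are far richer.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789"))
# Contact [EMAIL], SSN [SSN]
```

Redaction of this kind typically runs on both training data ingestion and inference logs, supporting the data minimization control in the same domain.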
09. Third-Party AI Risk Management

Assessing and managing risks introduced by third-party AI products, APIs, and services integrated into organizational workflows.

- Third-party AI assessment
- API security for AI services
- SLA & security requirements
- Continuous monitoring
10. AI in Critical Infrastructure

Sector-specific security requirements for AI deployed in healthcare, finance, energy, defense, and other critical infrastructure.

- Healthcare AI security (HIPAA)
- Financial AI security (SOC 2)
- Energy sector AI controls
- Defense & national security AI

Framework Mappings

The NSF is cross-referenced with major compliance and regulatory frameworks.

NIST AI RMF

Full mapping to all four functions: Govern, Map, Measure, Manage

ISO/IEC 42001

Alignment with AI management system requirements

SOC 2

AI-specific controls mapped to Trust Service Criteria

HIPAA

Healthcare AI security controls and safeguards

EU AI Act

Cross-reference for high-risk AI system requirements

Development Roadmap

1. NSF v0.1 (In Progress, 6 months post-launch)

Internal draft — foundational controls and domain structure

2. NSF v1.0 (Planned, 18 months post-launch)

Public release — full controls, implementation guidance, assessment criteria

3. NSF v1.1+ (Future, ongoing)

Annual update cycle incorporating community feedback and emerging threats

Contribute to the NSF

The NAISC Standards Framework is developed with input from practitioners, academics, and industry leaders. Join us in building the definitive AI security standard.