AI Security Standards Framework
A comprehensive, versioned, freely available framework for AI security and integrity — covering 10 critical domains.
Coverage Domains
Each domain includes detailed controls, implementation guidance, and assessment criteria.
AI System Security Architecture
Foundational security design principles for AI systems, including secure development lifecycles, threat modeling, and defense-in-depth strategies specific to AI workloads.
Model Integrity & Tampering Prevention
Protecting AI models from adversarial manipulation, unauthorized modification, and integrity violations throughout their lifecycle.
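Integrity protection of this kind typically starts with digest verification of model artifacts before they are loaded. A minimal sketch, assuming SHA-256 digests are recorded at build time (the function names here are illustrative, not part of the framework):

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model artifacts are not read into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the on-disk artifact matches the digest recorded at build time."""
    # hmac.compare_digest performs a constant-time comparison.
    return hmac.compare_digest(sha256_of(path), expected_sha256)
```

In practice the recorded digest would itself be protected, e.g. by a signed manifest, so an attacker cannot swap both the artifact and its expected hash.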
Training Data Security & Provenance
Ensuring the security, quality, and traceability of data used to train and fine-tune AI models.
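One common way to make training-data provenance verifiable is a manifest of dataset records whose canonical digest changes if any entry is added, removed, or edited. A minimal sketch, with illustrative field names (the framework does not prescribe this schema):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DatasetRecord:
    """One provenance manifest entry; fields shown are illustrative."""
    name: str
    source: str   # origin URL or internal system of record
    sha256: str   # content digest of the dataset file
    license: str

def manifest_digest(records: list[DatasetRecord]) -> str:
    """Digest over sorted canonical JSON, so record order does not affect the result."""
    canonical = json.dumps(
        [asdict(r) for r in sorted(records, key=lambda r: r.name)],
        sort_keys=True,
    ).encode()
    return hashlib.sha256(canonical).hexdigest()
```

Storing this digest alongside each trained model ties the model to the exact data it was trained on.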
Inference & Deployment Security
Securing AI systems in production, from model serving infrastructure to runtime monitoring and anomaly detection.
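Runtime anomaly detection can be as simple as flagging metrics (latency, confidence scores, output lengths) that deviate sharply from a rolling baseline. A minimal z-score sketch, purely illustrative of the idea rather than a production monitor:

```python
import math
from collections import deque

class DriftMonitor:
    """Flag observations that deviate sharply from a rolling baseline window."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if the value is anomalous relative to the window so far."""
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        # Note: appending anomalous values folds them into the baseline;
        # a real monitor would quarantine them instead.
        self.values.append(value)
        return anomalous
```

A real deployment would track multiple signals per model endpoint and route alerts into the incident-response process described below.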
AI Supply Chain Security
Managing security risks across the AI supply chain, including third-party models, libraries, frameworks, and cloud services.
AI Incident Detection & Response
Establishing capabilities for detecting, responding to, and recovering from AI-specific security incidents.
Governance, Accountability & Auditability
Frameworks for organizational accountability, decision logging, audit trails, and compliance documentation for AI systems.
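Decision logging and audit trails are often made tamper-evident by hash-chaining entries, so that editing any past record breaks verification of everything after it. A minimal sketch, assuming a simple in-memory log (field names are illustrative):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only decision log where each entry chains the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, actor: str, decision: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),
            "actor": actor,
            "decision": decision,
            "detail": detail,
            "prev": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampered or reordered entry fails verification."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Production systems would persist the chain to write-once storage and anchor periodic checkpoints externally, but the chaining principle is the same.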
Privacy & Data Protection in AI Systems
Privacy-preserving techniques and data protection controls specific to AI system development and deployment.
Third-Party AI Risk Management
Assessing and managing risks introduced by third-party AI products, APIs, and services integrated into organizational workflows.
AI in Critical Infrastructure
Sector-specific security requirements for AI deployed in healthcare, finance, energy, defense, and other critical infrastructure.
Framework Mappings
The NSF (NAISC Standards Framework) is cross-referenced with major compliance and regulatory frameworks.
NIST AI RMF
Full mapping to all four functions: Govern, Map, Measure, Manage
ISO/IEC 42001
Alignment with AI management system requirements
SOC 2
AI-specific controls mapped to Trust Service Criteria
HIPAA
Healthcare AI security controls and safeguards
EU AI Act
Cross-reference for high-risk AI system requirements
Development Roadmap
NSF v0.1
In Progress: Internal draft — foundational controls and domain structure
6 months post-launch
NSF v1.0
Planned: Public release — full controls, implementation guidance, assessment criteria
18 months post-launch
NSF v1.1+
Future: Annual update cycle incorporating community feedback and emerging threats
Ongoing
Contribute to the NSF
The NAISC Standards Framework is developed with input from practitioners, academics, and industry leaders. Join us in building the definitive AI security standard.