Policy & Advocacy
NAISC monitors, analyzes, and actively shapes the regulatory landscape for AI security through public comments, position papers, and congressional engagement.
Policy Activities
Public Comments
File public comments on proposed AI regulations from federal agencies.
Position Papers
Publish 2-4 policy position papers annually on critical AI security issues.
Congressional Briefings
Brief congressional personal-office and committee staff on AI security matters.
Agency Engagement
Respond to requests for information (RFIs) from federal agencies and participate in public workshops.
NIST Collaboration
Participate in workshops and comment periods for the NIST AI Risk Management Framework (AI RMF).
State Legislation
Monitor and engage with state-level AI legislation across key states.
Regulatory Monitoring
We track AI-related regulatory activity across all major federal agencies and key states.
FTC
AI rulemaking & consumer protection
CFPB
AI in financial services
SEC
AI disclosure requirements
FDA
AI/ML in medical devices
HHS
AI in healthcare
DHS / CISA
AI critical infrastructure guidance
NIST
AI Risk Management Framework
State Legislatures
NY, CA, TX, FL AI legislation
Policy Positions
NAISC develops and advocates positions on critical AI security policy issues.
National AI Security Incident Reporting
Establishing mandatory reporting requirements for AI security incidents affecting critical infrastructure or public safety.
Minimum AI Security Standards for Federal Contractors
Requiring baseline AI security controls for any organization providing AI systems or services to the federal government.
AI Security Labeling & Trust Marks
Consumer-facing labels indicating AI products have met independent security evaluation criteria.
AI Red-Teaming Requirements for High-Risk Sectors
Mandating adversarial testing for AI systems deployed in healthcare, finance, energy, and defense.
Engage with NAISC Policy
Join the Policy & Advocacy Committee or submit your perspectives on AI security policy.