Interested in attending? Fill out the Interest Form! If inviting others, you may find our invite kit useful.
A 150-person event bringing together people working to secure AI systems through security research, engineering, and policy.
10:00 AM – 7:00 PM
Networking & Drinks to Follow (until midnight)
Core themes: AI model weight security, cyber capability evaluations, cyber capability policy, attacks on AI systems, monitoring for and preventing self-exfiltration, and threat models.
Why are we running this event?
Who is this event for?
Who attended last time?
Distinguished Researcher @ Palo Alto Networks
Director @ RAND Meselson Center
Health Security Policy Advisor @ Johns Hopkins
CEO @ Dreadnode (previously NVIDIA AI Red Team Lead)
EU CoP Vice Chair (previously Cofounder, UK AI Safety Institute)
Professor and Director, Machine Learning Department @ Carnegie Mellon University
CEO @ Pattern Labs
AI Security Tech Lead @ Meta
Gen AI Security @ Meta
Director @ American Security Fund (previously Director of Strategy and Policy @ DoD Joint Artificial Intelligence Center)
Full Professor @ Université de Montréal, Founder and Scientific Director @ Mila
Executive Director @ Confidential Computing Consortium
Executive Director, AI and Advanced Computing @ Schmidt Sciences
Workstream Lead @ UK AI Safety Institute
Policy Director @ METR
Postdoctoral Researcher @ Carnegie Mellon University
Technology Expert @ European Commission (AI Office)
Chief Technology Officer @ Redwood Research
Research Fellow @ Mercatus Center at George Mason University
More speakers to be announced shortly…
Organized by Nitarshan Rajkumar, Jacob Lagerros, and Caleb Parikh, with operational support from Vael Gates and Lindsay Murachver. Special thanks to our speakers, Everett Smith, Lisa Thiergart, and others. This event is co-run by the AI Risk Mitigation Fund and FAR.AI.