Our strongest attendees often come through referrals. We'd love your help in gathering a great group for the Paris AI Security Forum 2025—feel free to share or adapt the invites below! For more information you can reach us at ai-security-forum@far.ai. Thank you for your help!
Email invite
Subject: Paris AI Security Forum - Feb 9, 2025
Hello [First name], I wanted to let you know about the upcoming Paris AI Security Forum, a one-day event bringing together ~150 experts in AI safety, policy, and cybersecurity to discuss securing powerful AI systems. The event will take place in Paris on Sunday, February 9th, scheduled alongside the AI Action Summit. If you're interested in attending, please fill out the Paris AI Security Forum Interest Form (ideally in the next few days), or share it with other potential invitees!
This year’s speakers include:
- Sella Nevo (RAND)
- Mark Beall (American Security Fund)
- Will Pearce (Dreadnode)
- Dan Lahav (Pattern Labs)
- Nicole Nichols (Palo Alto Networks)
Nitarshan Rajkumar, Caleb Parikh, and Jacob Lagerros, with support from Lindsay Murachver and Vael Gates from FAR.AI, are organizing this continuation of the 2024 Forum. We expect a similar roster to past events, which brought together leaders from AI labs (Anthropic, OpenAI, DeepMind), academia (Harvard, MIT, Stanford), government (CISA, RAND), and the cybersecurity industry to accelerate AI security progress. If interested, please fill out the Paris AI Security Forum Interest Form. Best, [Your name]
Slack/Discord message
TL;DR: Interested in AI security? Apply for the Paris AI Security Forum, happening Sunday, Feb 9, in Paris.
Hey everyone! 👋
The Paris AI Security Forum is happening Sunday, Feb 9: a 150-person event in Paris focused on AI security research, engineering, and policy (taking place around the AI Action Summit).
This year’s featured speakers will include Sella Nevo (RAND), Mark Beall (American Security Fund), Will Pearce (Dreadnode), Dan Lahav (Pattern Labs), Nicole Nichols (Palo Alto Networks), and Nitarshan Rajkumar (Independent). We expect a similar roster to past events, which brought together leaders from AI labs (Anthropic, OpenAI, DeepMind), academia (Harvard, MIT, Stanford), government (CISA, RAND), and the cybersecurity industry to accelerate AI security progress.
The forum is especially relevant for security engineers, researchers, policymakers, and those looking to transition into AI security. Core themes include AI IP security, cyber capability evaluations, hardware security, and threat models.
- Apply here
- Recommend others within the form, by sharing the form, or via the invite kit
(If you’re unsure about applying, please err on the side of applying, with applications especially encouraged from underrepresented groups in AI security!)
🛡️🔐 Paris AI Security Forum 2025 on Feb 9! 🇫🇷
Join 150+ engineers, researchers & policymakers from leading labs, academia & government to discuss challenges in AI system security & safety. Hear from speakers at RAND, Palo Alto Networks, Dreadnode & more. 🔗👇
Reply:
📝Apply or invite others: https://airtable.com/appZOFcnymTfcv9ml/shrmq4G7WggSSdXWV
🛡️ Paris AI Security Forum 2025 — Applications are open! 🇫🇷
Join us on Sunday, Feb 9, 2025 for a focused gathering of AI security engineers, researchers, and policymakers to discuss practical challenges in AI system security and safety.
Featured speakers include:
🔹 Sella Nevo, RAND
🔹 Nicole Nichols, Palo Alto Networks
🔹 Will Pearce, Dreadnode
🔹 Dan Lahav, Pattern Labs
🔹 Mark Beall, American Security Fund
🤝This event brings together AI security practitioners from AI labs, academia, and government agencies.
🔐Core themes include AI model weight security, cyber capability evaluations, self-exfiltration, and threat models.
📝Apply with this link. Know someone who would benefit? Please share!
https://airtable.com/appZOFcnymTfcv9ml/shrmq4G7WggSSdXWV
Blurb
The Paris AI Security Forum (Sun Feb 9, 2025) is a focused gathering of AI security practitioners from labs, academia, and government discussing practical challenges in AI system security and safety. Speakers from RAND, American Security Fund, Dreadnode, Pattern Labs, and Palo Alto Networks will be featured, with a particular focus on attendees who are security engineers, researchers, and policymakers from AI labs, academia, and government agencies.