A ~200-person event during CyberWeek, bringing together people excited about securing AI systems.
9:00 AM – 6:00 PM
Cocktail dinner & drinks, cosponsored by Halcyon Ventures, to follow (until 10:00 PM)
Core themes: AI model weight security, cyber capability evaluations, hardware-enabled AI governance, AI supply chain security, attacks on AI systems, AI control methods, adversarial ML, threat models.
Applications are now closed.
Speakers:
Co-Founder and CTO @ Irregular
AI Security Lead @ Meta
Director @ RAND Center on AI, Security, and Technology
AI Safety and Security Architecture Manager @ NVIDIA
Founder @ Augur
Senior Lecturer and Head of SAFE Lab @ Technion University
Engineering Manager @ Meta
Co-Founder and CEO @ Attestable
Partner @ TLV Partners
Senior Lecturer @ Tel Aviv University
Co-Founder and CTO @ HelmGuard AI
Senior Researcher @ Institute for AI Policy & Strategy
Research Lead @ SaferAI
Investor @ Halcyon Ventures
Associate Program Officer @ Coefficient Giving
Assistant Professor @ Ben-Gurion University
Mentaleap
Founder @ AI Security Forum
Director @ Heron AI Security Initiative
Head of Cyber @ Heron AI Security
FAQs
Why are we running this event?
Who is this event for?
Who attended last time?
Is it safe to travel to the conference?
Organized by Ezra Hausdorff, Shanni Gurkevitch, Eli Parkes, and Caleb Parikh. Co-run by the AI Risk Mitigation Fund and the Heron AI Security Initiative, in partnership with CyberWeek. Cocktail dinner cosponsored by Halcyon Ventures.
Questions? Contact tlvaisecurityforum@heronsec.ai