Overview
This one-day workshop introduces experienced cybersecurity practitioners to key arguments around global catastrophic risks (GCRs) from AI systems. Participants will explore high-leverage ways to contribute to GCR-relevant work and connect with organizations working to reduce the risks and realize the benefits of powerful AI systems.
This workshop will:
- Introduce participants to AI-driven GCR threat models.
- Present opportunities to apply participants’ expertise to preventing catastrophes from powerful AI systems.
- Facilitate connections between participants and AI safety organizations.
Am I a good fit for this workshop?
We are most excited about people with:
- Strong information security backgrounds, especially in hardware engineering, or experience in high-stakes settings (e.g. defense, health, finance, critical infrastructure …).
- Interest in AI and its potential impacts, especially those of powerful AI systems.
- Interest in pursuing new opportunities in AI security.
You do not need any prior knowledge of or experience with AI or GCRs, though we may provide pre-reading to help participants get as much out of the workshop as possible.
What are AI-driven Global Catastrophic Risks?
Global Catastrophic Risks (GCRs) are events that could cause severe harm to a significant portion of the global population, endangering billions of lives or fundamentally altering the course of human civilization. AI-driven GCRs are catastrophic risks that are enabled, exacerbated, or directly caused by advanced artificial intelligence systems.
Proactively addressing AI-driven GCRs is crucial due to:
- The need for extensive preparation time.
- The severity of potential consequences.
- The rapid pace of AI advancement.
Early action helps society prepare to maximize AI benefits while minimizing catastrophic risks.
Draft Schedule
Interested in attending our workshop?
Please fill in this form to register your interest in upcoming workshops. Note that we limit each workshop to 20-40 participants to ensure high-quality interactions.
For inquiries, contact caleb@airiskfund.com. This workshop is run in collaboration with FAR.Labs.