Food, accommodation, and teaching are all provided at no cost to participants. Travel costs can be reimbursed if they're a barrier to attending.
Everyone in a bootcamp comes from the same region, so you'll be learning alongside people you can stay in touch with.
Work alongside talented and motivated people who are committed to making an impact
All sessions, materials, and discussions are conducted in English
We look for people from all walks of life who are committed to contributing to AI safety
Accommodation is on-site, and participants stay at the venue for the duration of the bootcamp
Political economy, corporate governance, compute governance, and alternative approaches
Scenario planning, analysing governance developments, and understanding policy levers
Articulating AI risks clearly and adapting messages for varying audiences
1-on-1 mentorship, pathways to contribute to AI safety, and post-camp action planning
Afternoon literature review and a 2.5-day capstone project with mentorship and peer feedback
Governance frameworks, technical grounding, policy analysis
Case studies, readings, group debates
Scenario planning, communication training, applied exercises
Group projects, 1-on-1 career mentorship
Guest speakers, Q&A sessions, social events
AI is going to impact all parts of society and will require expertise from all fields, so there's no single profile that we're looking for. We don't expect any prior technical knowledge.
The ideal candidate is somebody committed to contributing to AI safety in a substantial way, whether full-time or as a side project.
We're looking for participants from a variety of backgrounds, from technical people looking to move into governance to those with a background in communication, law, policy, or entrepreneurship.
This also extends to career stage; we are most excited about people who are ready to actively contribute to AI safety, be that someone who has just finished their master's or PhD, someone with decades of work experience, or someone early in their career.
We expect participants to have some familiarity with the major risks from AI (e.g. misuse by bad actors, extinction risks) and a rough overview of some proposed solutions. We provide a prerequisite reading list to give everyone enough shared understanding to make the most of the camp.
Auriane Técourt
Curriculum Developer, Teacher
Auriane is a multidisciplinary engineer working on AI policy in the private sector; she previously researched AI governance at a think tank. Her background in teaching enables her to communicate complex technical topics clearly to non-technical audiences.
Joel Christoph
Curriculum Developer, Teacher
Joel is a PhD Researcher in Economics at the European University Institute (EUI), focusing on the economics of growth, AI, and global governance. He brings experience in AI research, policy analysis, and educational program leadership from roles including Area Chair for AI Economics at Apart Research and Director at Effective Thesis. Joel founded the global public goods initiative 10Billion.org.
Elsa Donnat
Teaching Assistant
Elsa is an AI Policy Fellow at the Ada Lovelace Institute. She studied law before moving into AI governance and has completed several programmes in the field, including ML4Good, MARS, Orion, and Talos. Last summer, she was a fellow at GovAI, where she explored legal issues surrounding future autonomous or AI-run businesses, specifically legal personhood and corporate law.
Charbel-Raphael Segerie
Co-founder, Curriculum Developer
Charbel is the Executive Director of CeSIA. He organised the Turing Seminar, the AI safety course of the MVA master's programme, initiated the ML4Good bootcamps, and served as a TA for ARENA and MLAB. He previously worked as CTO of Omnisciences and as a researcher at Inria Parietal and Neurospin.
No bootcamp in your area yet? Tell us where you are. We prioritise new locations based on interest from potential participants.