Food, accommodation, and teaching are all provided at no cost to participants. Travel costs can be reimbursed if they would otherwise be a barrier to attending.
Everyone in a bootcamp comes from the same region, so you'll be learning alongside people you can stay in touch with afterwards.
Work alongside talented and motivated people who are committed to making an impact
All sessions, materials, and discussions are conducted in English
We look for people from all walks of life who are committed to contributing to AI safety
Accommodation is on-site, and participants stay at the venue for the duration of the bootcamp
Build transformers from scratch and explore LLM agents, interpretability techniques, RLHF, evaluations
AI capabilities and trends, risk modelling, alignment and control, forecasting, tradeoffs between mitigations
Technical work for governance, compute governance, recent developments
Formulation of your Theory of Change, literature review afternoon, 2-day capstone project with mentorship and peer feedback
Breaking down technical concepts, technical frameworks, case studies, group debates
Notebooks with multiple difficulty levels
Self-reflection, group projects, and 1-on-1 career mentorship
Guest speakers, Q&A sessions, and social events
AI is going to impact all parts of society, and handling it wisely will require people of every background. Our bootcamps are geared towards those who are in or adjacent to the tech world and are committed to contributing to AI safety in a substantial way, either full-time as a job or as a side project.
Some coding experience helps you make the most of the bootcamp. Our Python workshops offer multiple difficulty levels, accommodating both those with a decade of software development experience and those with very little.
We're most excited about people who are ready to contribute to AI safety, whether that's someone with decades of work experience, someone who has just finished their master's or PhD, or someone early in their career.
We expect participants to have basic familiarity with the major risks from AI (e.g. misuse, loss of control) and a rough overview of some proposed solutions. We provide a prerequisite reading list and notebooks to give everyone enough shared understanding to make the most out of the camp.
Diego Dorn
Teacher
Diego is a Senior Software Developer at the PEReN, working with the EU AI Office to build the technical infrastructure required for the large-scale evaluation of models. Since participating in the very first ML4Good, Diego has taught at 8+ bootcamps and is now training the next generation of teaching staff. He holds a Master's from EPFL and completed his thesis on LLM agent monitoring at CeSIA.
Julian Schulz
Teacher
Julian is a Visiting Researcher at Meridian in Cambridge, leading a project on encoded reasoning. He participated in ARENA and MATS, doing research on steering vectors, and has worked as an independent researcher on automated feature labeling and the robustness of sleeper agent detection.
Monika Jotautaite
Teacher
Monika is an independent researcher in AI safety. Previously, she participated in the Pivotal and Athena Fellowships and ML4Good, and worked for two years as a Data Scientist. She holds a Master's in AI from Imperial College London.
Elsa Donnat
Teaching Assistant
Elsa is an AI Policy Fellow at the Ada Lovelace Institute. She studied law before moving into AI governance, completing many programmes in the field, including ML4Good, MARS, Orion, and Talos. Last summer, she was a fellow at GovAI, where she explored legal issues surrounding future autonomous, AI-run businesses, specifically legal personhood and corporate law.
Charbel-Raphael Segerie
Co-founder, Curriculum Developer
Charbel is the Executive Director of CeSIA. He organized the Turing Seminar (MVA Master's AI safety course), initiated the ML4Good bootcamps, served as a TA for ARENA and MLAB, and previously worked as CTO of Omnisciences and as a researcher at Inria Parietal and Neurospin.
Rich Barton-Cooper
Teacher
Rich is a Research Manager at MATS Research. He worked as a software engineer before transitioning into AI safety, completing AI Safety Fundamentals in 2024 and ML4Good in 2025, and spent 6 months on black-box scheming monitoring at MATS before joining full-time as a Research Manager.
No bootcamp in your area yet? Tell us where you are. We prioritise new locations based on interest from potential participants.