Work in AI Governance

Where our alumni are now

EU AI Office
UK AISI
CeSIA

Recommended by 98% of participants

Free for you

Food, accommodation, and teaching are all provided at no cost to participants. Travel costs can be reimbursed if they're a barrier to attending.

Close to home

Everyone in a bootcamp comes from the same region, so you'll be learning alongside people you can stay in touch with

Great people

Work alongside talented and motivated people who are committed to making an impact

In English

All sessions, materials, and discussions are conducted in English

All backgrounds welcome

We look for people from all walks of life who are committed to contributing to AI safety

A focused environment

Accommodation is on-site and participants stay at the venue for the duration of the bootcamp

Curriculum

Themes

AI Governance

Political economy, corporate governance, compute governance, and alternative approaches

Strategy

Scenario planning, analysing governance developments, and understanding policy levers

Communication

Articulating AI risks clearly and adapting messages for different audiences

Career Planning

1-on-1 mentorship, pathways to contribute to AI safety, and post-camp action planning

Applied Projects

Afternoon literature review and a 2.5-day capstone project with mentorship and peer feedback

Typical schedule

08:00  Breakfast
09:00  Lectures & Practicals: governance frameworks, technical grounding, policy analysis
12:00  Discussions: case studies, readings, group debates
13:00  Lunch
14:30  Workshops: scenario planning, communication training, applied exercises
16:30  Break
17:00  Projects & Careers: group projects, 1-on-1 career mentorship
19:30  Dinner & Evening: guest speakers, Q&A sessions, social events

Guest speakers
Mentorship
Real projects
Scenario planning

Eligibility

AI is going to impact all parts of society and will require expertise from all fields, so there's no single profile that we're looking for. We don't expect any prior technical knowledge.

The ideal candidate is somebody committed to contributing to AI safety in a substantial way, whether full-time or as a side project.

We're looking for participants from a variety of backgrounds, from technical people looking to go into governance, to those with a background in communication, law, policy or entrepreneurship.

This also extends to career stage: we are most excited about people who are ready to actively contribute to AI safety, whether they have just finished their master's or PhD, have decades of work experience, or are early in their career.

We expect participants to have some familiarity with the major risks from AI (e.g. misuse by bad actors, extinction risks) and a rough overview of some proposed solutions. We provide a prerequisite reading list to give everyone enough shared understanding to make the most of the camp.

Upcoming bootcamps

South Africa

In partnership with AI Safety South Africa

Apr 17 - 25
Apply

Western Europe

Near Lyon, France

May 4 - 12
Apply

Meet our teachers

Auriane Técourt

Curriculum Developer, Teacher

Auriane is a multidisciplinary engineer working on AI policy in the private sector, having previously researched AI governance at a think tank. Her background in teaching helps her communicate complex technical topics clearly to non-technical audiences.

Joel Christoph

Curriculum Developer, Teacher

Joel is a PhD Researcher in Economics at the European University Institute (EUI), focusing on the economics of growth, AI, and global governance. He brings experience in AI research, policy analysis, and educational program leadership from roles including Area Chair for AI Economics at Apart Research and Director at Effective Thesis. Joel founded the global public goods initiative 10Billion.org.

Elsa Donnat

Teaching Assistant

Elsa is an AI Policy Fellow at the Ada Lovelace Institute. She studied law before moving into AI governance, completing many programmes in the field, including ML4Good, MARS, Orion, and Talos. Last summer she was a fellow at GovAI, where she explored legal issues surrounding future autonomous, AI-run businesses, specifically legal personhood and corporate law.

Charbel-Raphael Segerie

Co-founder, Curriculum Developer

Charbel is the Executive Director of CeSIA. He organized the Turing Seminar (MVA Master's AI safety course), initiated the ML4Good bootcamps, served as TA for ARENA and MLAB, and previously worked as CTO of Omnisciences and researcher at Inria Parietal and Neurospin.

Past bootcamps

France - October '25

France - July '25

Interested in bootcamps in your region?

No bootcamp in your area yet? Tell us where you are. We prioritise new locations based on interest from potential participants.