DATES
September 23 - October 3, 2024
DURATION
10 days
LOCATION
Germany
APPLICATION DEADLINE
July 21, 2024

Course Description

ML4Good is a bootcamp for people who want to work towards making AI safe and beneficial to humanity. It provides an opportunity to upskill in deep learning, explore the existing research landscape, and delve into conceptual AI safety topics.

This camp will fast-track your deep learning skills, inform you about AI safety research, allow you to explore conceptual challenges, and connect you with like-minded individuals for potential friendship and collaboration.

Activities

How will the days be spent? 

  • Peer-coding sessions with mentors, following a technical curriculum.
  • Presentations by experts in the field.
  • Review and discussion of AI Safety literature.
  • Personal career advice and mentorship.
  • Discussion groups.

Logistics

The bootcamp is free. There is no fee for room, board, or tuition.

This bootcamp is aimed at people currently based in Europe. There will be more camps running in 2024 - please sign up on our website to be notified when these are confirmed and when applications open.

We ask participants to pay for their own travel costs; however, if this would prevent you from attending, there will be an option to apply for travel support. The location is easy to access by public transport from Berlin.

Curriculum

We update our program between camps to keep pace with the rapid development of the field of AI.

The program of the last camp included technical content such as:

  • Implement SGD and other local optimisation algorithms, and run remote hyper-parameter searches on a simple architecture (a brief sketch of this kind of exercise follows this list)
  • Implement GPT-2 from scratch
  • Implement and run RLHF
  • Apply various interpretability techniques to GPT models and ResNets
  • Implement DQN and A2C, two important reinforcement learning algorithms
  • Implement adversarial attacks and defences
  • Implement an LLM agent
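
To give a flavour of the peer-coding exercises, here is a minimal sketch of the kind of SGD loop participants might implement, shown on a toy least-squares problem in plain NumPy (the data, learning rate, and variable names are illustrative assumptions, not the camp's actual exercise):

    import numpy as np

    # Toy data: fit y = w*x + b with mini-batch stochastic gradient descent.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)

    w, b = 0.0, 0.0   # parameters to learn
    lr = 0.1          # learning rate
    batch_size = 16

    for epoch in range(50):
        perm = rng.permutation(len(x))
        for i in range(0, len(x), batch_size):
            idx = perm[i:i + batch_size]
            err = w * x[idx] + b - y[idx]
            # Gradients of the mean squared error with respect to w and b.
            grad_w = 2 * np.mean(err * x[idx])
            grad_b = 2 * np.mean(err)
            w -= lr * grad_w
            b -= lr * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}")  # should approach 3.0 and 0.5

The curriculum exercise then extends these ideas to other local optimisation algorithms and remote hyper-parameter searches on a simple architecture.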

Alongside talks, workshops and group discussions on:

  • model evaluations
  • AI trends
  • forecasting and timelines
  • risk models, risk scenarios and classifications
  • landscape of solutions
  • corporate governance
  • international governance

There is also the opportunity to dive deeper into the topic of your choice during the literature review afternoon and the 2.5-day project at the end of the camp.

Eligibility

These are simply guidelines - anyone is welcome to apply.

This program is aimed at people in Europe who are comfortable with programming and who ideally have about one year's worth of university-level applied mathematics.

If, for example, you work in AI Governance and would benefit from a stronger technical foundation on which to act, this bootcamp may prove useful.

We welcome applications from those who fit a majority of the following criteria. We are looking for people with diverse skillsets; there are many roles in AI safety that could benefit from people with the knowledge provided by this camp, such as roles in communications, governance and policy. If you have some combination of these skills (for example, advanced mathematics skills but less programming experience, or a lot of communication experience but less programming and mathematics), we encourage you to apply.

  • You are motivated to work on addressing the societal risks posed by advanced AI systems - ideally, motivated enough to consider making significant career decisions such as transitioning to technical alignment work, setting up a university AI safety group, or founding a project
  • You have a programming background and want to learn how to contribute your skills to the field of AI Safety
  • You have relatively strong maths skills (e.g. equivalent to at least one year of university-level mathematics).
  • You are skilled at communicating technical concepts to both technical and non-technical audiences and collaborating with people with varying levels of expertise
  • You have a high level of proficiency in English
  • You can commit to completing our prerequisite material before the bootcamp (we will send this to you upon acceptance). We expect this material to take 10-20 hours. It will include AI safety conceptual readings and may include programming or mathematics preparation depending on your strengths.

We might be willing to relax some of the math and programming requirements for promising candidates working on AI Governance.   

FAQs

Will there be any spare time? There will be periods of leisure and rest during the camp. However, the course is intensive and full-time: don't plan to do anything else during the camp, and we recommend giving yourself a day off afterwards before returning to full-time work!

What language will the camp be in? All courses, instruction, resources, and conversations will be in English.

What do you mean by AI Safety? By “AI Safety” we mean ensuring that AI doesn’t lead to negative outcomes for sentient beings or the premature disempowerment of humanity. In a recent open letter signed by many deep learning pioneers, it is stated that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Avoiding these bad outcomes is a challenge that has to be tackled on a societal level. In this camp, we will focus on technical approaches to building safer AI systems, for example by making their internal processes more interpretable.

I am not sure my level of technical knowledge is sufficient. Please see the Eligibility section above for the level of technical knowledge we are looking for. If you are unsure, please err on the side of applying and feel free to contact us with any questions. Additionally, before the camp begins we will provide some preparation work.

How much do I need to know about AI Safety to apply? We expect participants to have a solid grasp of why and how an AI could become existentially unsafe for sentient beings and a general overview of proposed solutions. When it comes to theoretical AI Safety topics, we don’t expect an advanced level of knowledge. You will get more value out of the event if you have more familiarity with AI Safety beforehand. We will provide some reading before the camp for those less familiar.

What might an ideal candidate look like? We are particularly interested in people we can support in planning concrete actions towards reducing AI risks. Examples of promising candidates include:

  • You are an undergraduate in a technical subject with an active GitHub account, and you would consider setting up an AI Safety Reading Group at your university.
  • You are early in your career or a master's student in a technical field, and you are interested in exploring a future career in alignment to reduce risk from advanced AI.
  • You are a professional in the field of software engineering or data science and are looking for a way to alter the trajectory of your career towards work on AI Safety. You would be happy contributing engineering talent to open source tooling or helping found a new project.
  • You have prior machine learning experience and are keen to apply your skills to reduce risk from AI, and you plan to act on this by e.g. changing jobs, planning your career accordingly, or joining early-stage projects.
  • You have experience communicating technical topics clearly, for example by writing posts for technical and non-technical audiences that distil problems and their solutions. You would be interested in working with technical people to communicate their ideas or in developing government policy.


Team

Lovkush Agarwal
Teacher
Lovkush is a mathematician turned lecturer turned data scientist turned aspiring AI safety researcher. He attended ML4Good UK in March, which gave him the boost he needed to pursue AI safety. He is now upskilling full-time, which includes participating in SPAR over the summer.
Jonathan Claybrough
Governance and strategy teacher
Jonathan shifted to full-time involvement in AI safety around late 2022 from a background in networks and software engineering. Since then, he has contributed to technical standards for the EU AI Act, delivered introductory talks on AI safety, co-founded the European Network for AI Safety (ENAIS), and worked on AI threat models and risk reduction.
Evander Hammer
Organiser
Evander has a Bachelor's in Behavioral Disorders and experience in organizing community events. He is motivated to contribute his skills to AI Safety and has started his first projects in field building. He is also interested in compute governance and wants to strengthen his understanding of technical safety approaches.
Yannick Muehlhaeuser
Organiser and Teaching Assistant
Yannick is currently studying physics at the University of Tübingen and has multiple years of experience organizing groups and events. He spent last summer working on Space Governance as a CHERI Fellow, where he also co-authored the research agenda of the Space Futures Initiative.
Jonathan Mannhart
Organiser
Jonathan is a cognitive science student at the University of Tübingen, where he is the co-organiser of the EA Tübingen and AI Safety Tübingen groups.
Lorenzo Venieri
Teaching Assistant
Lorenzo has a Master's degree in Artificial Intelligence and a Bachelor's in Mathematics. Last year, he participated in the ML4G Germany camp and will now join as a teaching assistant. He works as a data scientist, focusing on projects involving machine learning models for microscopy and healthcare.