AI Safety Specialist

Shape the future of AI safety research by designing the world’s most popular courses on AI safety.

Apply by 4 June

Who we are

We help people create a better future for humanity by designing and running courses on the world’s most pressing problems. We find people who could have enormous impact, we motivate and equip them via our courses, and we connect them with impactful opportunities.

BlueDot Impact was founded in August 2022, and grew out of a non-profit supporting students at the University of Cambridge to pursue high-impact careers. To learn more, check out this podcast interview and our blog.

Over the past 2 years, we’ve supported 2,500 people to learn about AI safety and pandemic preparedness, and our alumni work in critical roles across government and industry. In 2024, we will train ~3x as many people as in 2023 by running each of our three existing courses (AI Alignment, AI Governance, Pandemics) every four months. In 2025, we will design more courses on important topics, build a platform that enables local communities to run amazing versions of our courses, and provide intensive support for our top alumni to start new projects.

What you'll do

We’re looking for an AI safety specialist to own and redesign the world’s most popular course on AI safety. You’ll work with a small, ambitious team focused on building the world’s best learning experiences and supporting our students to have a significant positive impact with their careers.

In the first 6 months, you will:

  • Work with top AI researchers and conduct expert interviews to determine the goals and narrative of the AI Alignment course;
  • Read cutting-edge research in AI safety, and prioritise which papers and arguments students should engage with;
  • Design a world-class learning experience for students on the AI Alignment course, applying insights from research on effective learning; and
  • Collaborate with our community of facilitators to continuously improve the course based on user experience and feedback.

After that, you will: 

  • Research and determine the highest priorities for our graduate community’s further learning, such as deep dives into specific alignment agendas;
  • Build partnerships with top experts, government departments and AI companies to design more advanced courses;
  • Support the next generation of researchers on our courses to land impactful roles after they graduate; and
  • Help to scale our courses by identifying niche target audiences and contributing to marketing campaigns.

Richard Ngo initially designed the course in 2021, and we now have a graduate community of over 2,000 people working across the major AI companies, top universities and governments. The course is widely regarded as the go-to place to learn about AI Alignment, and the AI Safety Fundamentals website receives over 10,000 unique visitors each month. Over the next year, we’ll grow this audience by scaling up our digital marketing campaigns, giving local groups access to our course platform, and launching a new programme to scale up facilitation capacity. You’ll be responsible for ensuring that this growing audience has an excellent, informative experience learning about AI Alignment, one that gives them the foundational knowledge and motivation required to contribute to the field.

About you

We are looking for someone who is actively engaged with the AI safety field and is motivated to create excellent learning experiences that support the next generation of researchers. 

You might be a particularly good fit for this role if you have:

  • Written about or conducted research on topics related to AI safety, and enjoy communicating with large audiences.
  • A strong understanding of machine learning, or the technical ability to learn it quickly.
  • Created educational courses or learning experiences on any topic, especially using “active learning” techniques.
  • Participated in or facilitated discussions in one of the AI Safety Fundamentals courses, and formed opinions on how the course could be improved.
  • Established relationships with individuals across the AI Safety and Alignment ecosystem, and feel excited to deepen those relationships.
  • Been a teacher or teaching assistant at university, and felt motivated to improve your students’ learning.

We encourage speculative applications; we expect many strong candidates will not meet all of the criteria listed here.

We believe that a more diverse team leads to a healthier and happier workplace culture, better decision-making, and greater long-term impact. In this spirit, we especially encourage people from underrepresented groups to apply to this role, and to join us in our mission to help solve the world’s biggest problems.

Location and compensation

We’re based in London, and we accept applications from all countries; we can sponsor UK visas. We have a strong preference for candidates who can move to London, though we will consider a remote-first arrangement for exceptional candidates.

Compensation is based on your experience and relevant industry benchmarks, and will likely fall within the £60–90k range.

Apply for this role

The application process consists of:

  • Stage 1: Initial application (<20 minutes).
  • Stage 2: Work test (2 hours).
  • Stage 3: Interview (45 minutes).
  • Stage 4: 1-day virtual work trial.

Deadline: 4 June
