AI Governance Specialist

Work with us to design the world’s most popular courses on AI governance and policy.

Who we are

We’re focused on helping people create a better future for humanity. We do this by designing and running courses on some of the world’s most pressing problems, and providing engaging and action-guiding experiences for individuals and organisations that want to make a positive difference.

BlueDot Impact was founded in August 2022 in Cambridge, UK, and grew out of a non-profit supporting students at the University of Cambridge to pursue high-impact careers. Our courses quickly gained traction, as many of the challenges facing students at the university were also faced by students and professionals worldwide. To learn more about our company’s story, check out this podcast interview with Dewi, one of our founding team members.

Thus far in 2023, we’ve supported over 1,000 people to learn about and contribute to AI safety and pandemic preparedness. During the first 6 months of 2024, we will:

  • Ship a new iteration of our courses every month to generate faster organisational learning and support more students (up from every 3-4 months in 2023);
  • Pilot new initiatives to increase the proportion of students taking impactful actions after graduating from our courses; and
  • Build on our existing relationships with teams in the UK Government to support their AI Safety work, including the UK Office for AI and the UK’s AI Safety Institute.

Note: we recently updated this job title and description. If you’ve already applied for the “Course Designer (Policy)” role, you do not need to apply again to this role.

What you'll do

We’re looking for an AI Governance Specialist to accelerate our efforts to support policymakers and governance researchers to learn about the risks and opportunities from AI, and positively shape this technology’s development over the coming decades. You’ll work with a small and ambitious team who are focused on building the world’s best learning experiences and supporting our students to have a significant positive impact with their careers.

In the first 6 months, you will:

  • Work with top AI Governance researchers and policymakers to determine the goals and narrative of the AI Governance course;
  • Read cutting-edge research in AI safety and governance, and prioritise which papers and arguments students should engage with;
  • Design a world-class learning experience for students on the AI Governance course, applying insights from the research into effective learning;
  • Build and run training programmes for the UK Government on AI Governance and policymaking, including in-person workshops in Whitehall; and
  • Collaborate with our community of facilitators to continuously improve the course based on user experience and feedback.

After that, you will:

  • Research the highest priorities for our graduate community’s further learning, such as deep dives into specific governance proposals;
  • Build partnerships with top experts, government departments and AI companies to design more advanced courses;
  • Support the next generation of researchers on our courses to land impactful roles after they graduate; and
  • Help to scale our courses by identifying niche target audiences and contributing to marketing campaigns.

The AI Governance course was initially an optional week in the AI Alignment course, but due to significant demand, we prioritised developing a standalone course on this topic in 2022. We worked with prominent AI Governance researchers to develop the course, and we have over 600 course graduates working at major AI companies, top universities and governments.

The course is widely regarded as the go-to resource for learning about AI governance, and the AI Safety Fundamentals website has over 10,000 unique visitors each month. Over the next year, we’ll grow this audience by scaling up our digital marketing campaigns, giving local groups access to our course platform, and launching a new programme to scale up facilitation capacity. You’ll be responsible for ensuring that this growing audience has an excellent and informative experience learning about the policy efforts to reduce risks from frontier AI models, enabling them to contribute to the field.

About you

We are looking for someone who is actively engaged with the AI Governance field and is motivated to create excellent learning experiences that support the next generation of policymakers and governance researchers.

You might be a particularly good fit for this role if you have:

  • Experience working in government or within the governance team of a large AI company.
  • Written about or conducted research on topics related to AI safety or policy-making, and enjoy communicating with large audiences.
  • Built relationships with individuals across the AI Safety and governance ecosystem, and feel excited to deepen those relationships.
  • A basic understanding of machine learning, or are motivated to develop this understanding.
  • Participated in or facilitated discussions in one of the AI Safety Fundamentals courses and had opinions on how the course could be improved.
  • Created educational courses or learning experiences on any topic, especially using “active learning” techniques.
  • Been a teacher or teaching assistant at university, and felt motivated to improve the quality of learning of your students.

We encourage speculative applications; we expect many strong candidates will not meet all of the criteria listed here.

Location and compensation

We’re based in London, and we accept applications from all countries. We can sponsor UK visas. We have a strong preference for individuals who can move to London, though we will consider remote working for exceptional candidates.

Compensation is based on your experience and relevant industry benchmarks, and will likely fall within £60-90k.

Apply for this role

The application process consists of five stages:

  • Stage 1: Initial application (<20 minutes).
  • Stage 2: Work test (2 hours).
  • Stage 3: Interview (45 minutes).
  • Stage 4: Work trial (~1 day).
  • Stage 5: Reference checks.

We’re evaluating candidates on a rolling basis and we encourage you to apply as soon as possible.
