TECHNICAL AI SAFETY

Start building safer AI

Understand current safety techniques. Map the gaps. Identify where you can contribute. Get funded to start shipping. All in 30 hours.

Our 4000+ alumni work at

  • OpenAI
  • AI Security Institute
  • United Nations
  • Anthropic
  • Amnesty International
  • Time
  • Google DeepMind
  • NATO
  • OECD
  • Stanford University: Human-Centered Artificial Intelligence Institute
  • Apple
  • Harvard Kennedy School

Who this course is for

For ML researchers who want to take big bets on the most impactful research ideas.

For policy professionals who need deep technical understanding to build governance solutions.

For leaders who want to drive high-impact safety work.

How this course will benefit you

Take action in less than 30 hours

Skip months of scattered reading. This Technical AI Safety course gives you a structured overview of key safety techniques. Understand what works, what fails, and where the gaps are. You'll finish with a fundable plan.

Join a network of builders

This course isn't for everyone. We're building a community of people who are energised to take ambitious actions to make AI go well, including starting new companies, policy entrepreneurship, and high-impact research bets. Completing this course will give you access to this community.

Get funded to accelerate your impact

If your final course proposal is strong, you'll receive $10-50k to kickstart your transition into impactful work, and you'll be invited to co-work with us in London for 1-2 weeks. We'll do whatever it takes to accelerate your journey.

Course information

Options

Intensive: 6-day course (5h/day)
Part-time: 6-week course (5h/week)

Commitment

Each day or week, you will complete 2-3 hours of reading and writing, then join ~8 peers in a 2-hour Zoom meeting to discuss the content.

Facilitator

All discussions will be facilitated by an AI safety expert.

Price

This course is freely available and operates on a "pay-what-you-want" model.

Schedule

Intensive round starts 3 Nov, application deadline 30 Oct
Part-time round starts 17 Nov, application deadline 9 Nov
"We should not underestimate the real threats coming from AI [while] we have a narrowing window of opportunity to guide this technology responsibly."
Ursula von der Leyen
President, European Commission

Meet our alumni shaping AI's future

Our students and graduates work at some of the most respected AI organizations in the world. Here are a few of the people who graduated from Blue Dot.

Frequently Asked Questions

Who is eligible for funding?

Funding is only available for graduates of the course.

What background do I need?

You should understand the basics of how LLMs are trained and fine-tuned, that AI development is driven by data, algorithms, and compute, and that neural networks are optimised through gradient descent on a loss function.
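
If you want a quick self-check on that last point, here is a minimal sketch, in PyTorch (our choice for illustration; the toy linear-regression task and every name in it are made up), of a loss function being minimised through gradient descent:

    import torch

    # Toy data for illustration: noisy samples of y = 2x + 1.
    x = torch.linspace(-1, 1, 64).unsqueeze(1)
    y = 2 * x + 1 + 0.1 * torch.randn_like(x)

    model = torch.nn.Linear(1, 1)                     # a tiny one-layer "network"
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()                      # the loss being minimised

    for step in range(200):
        optimiser.zero_grad()                         # clear old gradients
        loss = loss_fn(model(x), y)                   # measure error on the data
        loss.backward()                               # gradients of loss w.r.t. weights
        optimiser.step()                              # one gradient-descent update

If you can roughly predict what this code does, you have the gradient-descent background the course assumes.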

Do I need to take the AI Foundations course first?

It is not a prerequisite, but we recommend it! Our 2-hour, self-paced AI Foundations course will give you enough background.

How does this course relate to the AGI Strategy course?

The AGI Strategy course shows how technical safety fits into the broader strategy for making AI go well. Technical safety is one component among many.

Who is behind this course?

We're a London-based startup. Since 2022, we've trained 5,000 people, with ~1,000 now working on making AI go well. Our courses are the main entry point into the AI safety field.

We're an intense 4-person team. We've raised $35M in total, including $25M in 2025.