COHORT-BASED COURSE

Technical AI Safety

AI capabilities are advancing faster than our ability to make them safe. The field needs more technical people working on safety, and it needs them now. You could be one of them.

Our 7,000+ alumni work at

  • OpenAI
  • Anthropic
  • Google DeepMind
  • AI Security Institute
  • United Nations
  • Amnesty International
  • Time
  • NATO
  • OECD
  • Stanford HAI
  • Apple
  • Harvard Kennedy School

Who this course is for

You know how to do research. Now you want to point it at impactful safety problems.

You have a background in ML, CS, mathematics, or an adjacent field. Maybe you're doing a PhD, maybe you're at a lab, maybe you're publishing already. You understand how models work, but you haven't mapped the safety landscape in enough detail to know where your skills would have the most impact. This course gives you that map. You'll survey the major open problems in alignment, interpretability, evaluations, and control, and figure out which ones you're best positioned to work on.

You build things. Now you want to scale AI safety.

You're a software engineer or technical person. You've shipped products and built large systems. You can see that safety matters, but the field can feel academic and hard to break into from industry. This course is the on-ramp. You'll build a working understanding of the technical safety landscape and connect with people who are already doing this work. Many of our alumni have made the move from industry into safety roles at frontier labs and research organisations.

You have options. You're not the type to drift into a default path.

You're at a top university or recently graduated. You've engaged seriously with AI, through our AGI Strategy course, a university group, or your own deep reading. You're considering fellowships, graduate school, or roles you haven't fully mapped yet. You know AI safety matters and want to do something. You just don't know what the jobs are yet.

Don't fit these perfectly? Apply anyway.

Where this leads — and how we help

This course doesn't end at Unit 6. Here's where our alumni go - and how we help them get there.

We don't just teach

BlueDot runs a talent pipeline, not just a course. We actively scout for high-potential participants during the course, facilitate introductions to hiring managers and fellowship leads, and run a Rapid Grants program to fund participants who come out ready to build something. Our community Slack is where job leads, collaboration opportunities, and technical debate happen daily.

Frontier AI labs

Anthropic, OpenAI, Google DeepMind, and others all have dedicated technical safety teams working on interpretability, evaluations, and alignment. These roles require both deep technical fluency and strategic clarity about which safety problems matter most - exactly what this course builds. Alumni work on these teams today.

Fellowships

MATS, Astra, Anthropic Fellows, LASR, ERA, Pivotal, ARENA, SPAR. These are competitive, and this course is designed to make you a strong candidate. Alumni from our courses have gone on to all of them. If you want a fellowship, we can help you decide which one, and we'll help you get there.

Technical policy

AISI, NIST, and lab policy teams are all hiring people who actually understand the systems being regulated. The people shaping AI policy need technical advisors - and there aren't enough of them. This course builds the technical foundation; you bring it to the policy table.

Start something new

Some participants realize the highest-leverage move is to build: a research project, policy initiative, community, tool, or company. That's why BlueDot runs Rapid Grants and incubator programming for people who come out ready to launch. Several projects and organizations have roots in our courses.

Technical AI Safety Project Sprint

After completing this course, you can apply for our Project Sprint: a focused program where you work with an AI safety expert to produce a real contribution to the field. It's how many of our alumni build their first portfolio piece in safety research or engineering.

What you'll actually do

Learn from an AI safety expert

Every discussion is led by an AI safety expert. They'll answer your technical questions, challenge your assumptions, and connect what you're reading to real work happening at labs and research organisations.

Cover key safety techniques

You'll build a working understanding of the major open problems and approaches in technical AI safety, including alignment and RLHF, mechanistic interpretability, evaluations and red-teaming, AI control, and scalable oversight.
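To give a concrete flavour of one item on that list, here is a toy, self-contained Python sketch (illustrative only, not taken from the course syllabus; the function name is ours) of the preference loss commonly used to train a reward model in RLHF: the model is pushed to score a human-preferred response above a rejected one.

```python
import numpy as np

def reward_model_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss used in RLHF reward modelling.

    The loss is -log(sigmoid(r_chosen - r_rejected)): it is small when the
    reward model scores the preferred response above the rejected one,
    and grows as that ordering is violated.
    """
    margin = reward_chosen - reward_rejected
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))

# The preferred response scores higher, so the loss is small:
print(reward_model_preference_loss(2.0, 0.5))   # ~0.20
# The ordering is violated, so the loss is larger:
print(reward_model_preference_loss(0.5, 2.0))   # ~1.70
```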

Join a community building towards safety

You'll join a curated cohort of people who are serious about making AI go well. Many stay connected long after the course ends, through our community Slack, events, and collaborative projects. This is a network of people starting companies, leading research, and shaping policy in AI safety.

How it works

Commitment

Each day or week, you will:
Complete 2-3 hours of reading and writing, and join ~8 peers in a 2-hour Zoom meeting to discuss the content.

Facilitator

All discussions will be facilitated by an AI safety expert.

Time

~30 hours total

Price

Free (pay-what-you-want)

Frequently Asked Questions

What do I need to know before starting?

You should understand the basics of how LLMs are trained and fine-tuned, that AI development is driven by data, algorithms, and compute, and that neural networks are trained by optimising an objective (loss or reward) function through gradient descent.

Our 2-hour, self-paced AI Foundations course will give you enough background.
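As a rough self-check (our own illustration, not part of the course materials): if you can read this minimal Python sketch of gradient descent on a one-parameter model and predict roughly what it prints, you likely have enough background.

```python
# Fit w so that w * x approximates y, by repeatedly stepping against the
# gradient of the squared-error loss. Purely illustrative; not course material.
x, y = 2.0, 6.0          # a single training example (the target w is 3)
w, lr = 0.0, 0.05        # initial parameter and learning rate

for step in range(100):
    pred = w * x
    loss = (pred - y) ** 2
    grad = 2 * (pred - y) * x   # d(loss)/d(w)
    w -= lr * grad              # gradient descent update

print(round(w, 3))  # converges to ~3.0
```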

Do I need to take the AGI Strategy course first?

It's not required, but strongly recommended. The AGI Strategy course provides essential context that this course builds on. While you can start here directly, you'll get more value if you understand how technical safety fits into the broader landscape of making AI go well.

Who runs this course?

We're a London-based startup. Since 2022, we've trained 7,000+ people, with hundreds now working on making AI go well.

Our courses are the main entry point into the AI safety field.

We're an intense 4-person team. We've raised $35M in total, including $25M in 2025.