COHORT-BASED COURSE

AGI Strategy

Optimistic and concerned about AI's trajectory? Want to do something about it? Start here. 25 hours to understand the strategic landscape, find your entry point, and get moving.

Our 8,000+ alumni work at

  • OpenAI
  • Anthropic
  • Google DeepMind
  • AI Security Institute
  • United Nations
  • Amnesty International
  • Time
  • NATO
  • OECD
  • Stanford HAI
  • Apple
  • Harvard Kennedy School

Who this course is for

You've read some essays, watched the talks, and you don't think the people building AGI have a serious plan for making it go well. You want to change that.

The course is an in-depth introduction to what's going on with AI development, what the good and bad outcomes could be, and what could be done to steer AI towards better futures.

It's built for three groups: 1) domain experts in policy, security, operations, or engineering looking to redirect their skills; 2) people heading into technical safety or governance roles who want the strategic picture first; and 3) newcomers who are serious about making a significant impact.

Not sure you fit? Apply anyway. Recent cohorts have also included teachers, lawyers, engineers, and community organisers.

How this course will benefit you

A launchpad for your AI safety career

You'll leave this course with an opinion on which threats matter, early takes on how we could solve these problems, and concrete next steps you can take.

A clear way to think about the future of AI

You'll analyse the incentives facing AI companies. You'll map "kill chains" to understand how threats unfold. And you'll apply defence in depth to evaluate and prioritise interventions. You'll know enough to hold your own in rooms with experts.

A community of builders

BlueDot has 8,000+ alumni, with many now working at Anthropic, DeepMind, UK AISI, and dozens of organisations working on a safe transition to advanced AI. You'll meet people in the field who can open doors for you and pressure-test your thinking.

How the course works

Options

Intensive (~5 days at ~5h/day) or part-time (~5 weeks at ~5h/week). Same content, different pace.

Commitment

Each day (intensive) or week (part-time), you'll complete 3 hours of reading and writing, and join ~8 peers in a 2-hour Zoom meeting to discuss the content.

Facilitator

All discussions will be facilitated by an AI safety expert.

Price

This course is freely available and operates on a "pay-what-you-want" model.

Schedule

"We should not underestimate the real threats coming from AI [while] we have a narrowing window of opportunity to guide this technology responsibly."
Ursula von der Leyen
President, European Commission

Frequently Asked Questions

What kind of background do I need?

We don't care about your CV. We care about what you'll do next. Recent cohorts have included people from policy, engineering, law, medicine, operations, and academia. What they shared was drive and a bias toward action.

How much prior AI safety knowledge do I need?

It's for people new to working on AI safety, not new to thinking hard. If you've been reading and thinking, and are ready to act, this is where you start.

What's the difference between the intensive and part-time options?

Same content, different pace. Intensive is 5 days at ~5h/day, for people who can clear a week and want to move fast. Part-time is 5 weeks at ~5h/week, for people fitting this around other commitments. Both end in the same place.

Which timezones do you support?

We run discussions across a wide range of timezones. You'll tell us your availability and we'll put you in a group that works for your current schedule.

What if I don't know what I want to work on yet?

That's what the course is for. You'll leave with a view on which problems matter most and which path fits your skills: technical research, governance, biosecurity, or building something new. Figuring that out is the work.

Is financial support available?

Yes. See bluedot.org/programs for current grants and how to apply.

Will I get a certificate?

Yes. Participants who complete the course receive a digital certificate they can share on LinkedIn or with employers.

Yes.

BlueDot is the leading talent accelerator for beneficial AI and societal resilience. We run courses, help people land jobs, organise events around the world, and back people starting new organisations. We've trained thousands of people since 2022. Our alumni now work at Anthropic, DeepMind, UK AISI, and have founded new organisations working on a safe transition to advanced AI.