
COHORT-BASED COURSE
Frontier AI Governance
Governments are making decisions about AI, and those decisions are only getting harder as capabilities advance. They don't have enough people who get it. You could be one of them.

Our 7,000+ alumni work at
Who this course is for
You get the tech. Now you want to know if policy is where you should point it.
You understand the technology - how the systems work, what scaling means, where the risks come from. Maybe you've built systems, shipped products, worked at or started companies in AI. You're now considering whether to point those skills at policy. You can translate between technical and policy worlds, but you don't know how governance actually works - or what the jobs look like.
What this looks like
People like you have made this move. Engineers from frontier labs, PMs, founders - now at AISI, NIST, GovAI, and lab policy teams, shaping how we navigate AGI. Your cohort will include others making the same bet. Many become collaborators or co-workers for years. You'll develop the political judgment to match your technical judgment - and know which roles actually have leverage.
You have options. You're not the type to drift into a default path.
You're at a top university or recently graduated. You've engaged seriously with AI - through our AGI Strategy course, a university group, or your own deep reading. You're considering fellowships, graduate school, or roles you haven't fully mapped yet. You know AI governance matters and want to do something - you just don't know what the jobs are yet.
What this looks like
You'll join a cohort of others at the same stage - high-potential, high-optionality, figuring out how to make AI go well. Alumni from this track have gone on to Horizon, GovAI, AISI, and lab policy teams - many deciding their path during the course. The cohort becomes a professional network that lasts well beyond it. You won't be figuring this out alone.
You know how institutions work. AI is reshaping everything and you need to lead on it.
You already have a career - in policy, national security, economics, law, diplomacy, intelligence, journalism, finance, or something else entirely - and you can see that AI is going to reshape the world. Maybe you already work in government and want to become the person your agency turns to on frontier AI. Maybe you're an economist who sees AI is about to become the most important variable in your models. Maybe you're in national security and need to understand what these systems can actually do.
What this looks like
The common thread: you have judgment and institutional knowledge, and you want to apply it to humanity's most important technology. You'll leave fluent in the debates, credible when technical claims come up, and with a concrete sense of where your existing expertise has the most leverage.
Where this leads — and how we help
This course doesn't end at Unit 6. Here's where our alumni go - and how we help them get there.
We don't just teach
BlueDot runs a talent pipeline, not just a course. We actively scout for high-potential participants during the course, facilitate introductions to hiring managers and fellowship leads, and run a Rapid Grants program to fund participants who come out ready to build something. Our community Slack is where job leads, collaboration opportunities, and policy debates happen daily.
Government
AISI, NIST/CAISI, OSTP, congressional offices, state-level policy, international bodies. These are the rooms where AI decisions get made - and most of them are understaffed on frontier AI. We'll help you understand what roles exist, which ones have real leverage, and connect you to people already inside. The career landscape is mapped in detail on Horizon's Emerging Tech Policy Careers site. We recommend it.
Fellowships
Horizon, GovAI Summer Fellowship, IAPS, TechCongress. These are competitive - acceptance rates range from 5-25% - and this course is designed to make you a strong candidate. Several are downstream of us: alumni from earlier cohorts of our governance courses have gone on to all four. If you want a fellowship, we can help you decide which one, and we'll help you get there.
Frontier lab policy teams
Anthropic, OpenAI, Google DeepMind, and others all have policy, trust & safety, and governance teams. These roles require both technical fluency and political judgment - exactly what this course builds. Alumni work on these teams today.
Think tanks and advocacy organizations
RAND, CSET, Institute for Progress, IAPS, and others are producing the research and building the arguments that shape legislation and norms. If your strength is analysis and writing, this is where many people find their calling and highest impact.
Leading on AI where you already are
Not everyone needs to change organizations. If you're already in government, national security, law, economics, or journalism, the goal might be becoming the person your agency or firm turns to on AI. You bring the institutional knowledge; we give you the frontier AI fluency and the network to back it up.
Start something new
Some participants realize the highest-leverage move is to build: a research project, policy initiative, community, tool, or company. That's why BlueDot runs Rapid Grants and incubator programming for people who come out ready to launch. Several projects and organizations have roots in our courses.
What you'll actually do
Unit 1: From models to briefings
You'll read a real system card in full - currently Anthropic's Claude Opus 4.6 - alongside the latest evaluation reports from METR and Epoch. You'll figure out what's actually being claimed versus what's missing, and produce a policy-relevant briefing tailored to a specific decision-maker. You won't summarize - you'll learn to translate.
Unit 2: The governance landscape
Who has power over frontier AI development? Where are the dependencies between labs, governments, and international bodies? Where are the gaps? You'll map the institutional landscape - including how China approaches AI risk - and develop a working picture of who can actually do what.
Unit 3: Proposals under pressure
You'll survey the governance frameworks actually on the table - compute governance, safety standards, liability, international coordination - and stress-test them. The format is adversarial: you argue for and against proposals you didn't choose, surface foundational assumptions, and build the habit of evaluating governance ideas on their merits rather than by who proposed them.
Unit 4: Governance in the limit
This is the unit most governance courses don't have. Competitive dynamics between labs and between states. The concentration of power in AI systems. What governance looks like as capabilities approach and exceed human-level. You'll stress-test your preferred governance approach against the intelligence explosion - break it, redesign it, and name what the redesign costs.
Unit 5: Take a stand
You'll go deep on one of AI governance's live, unresolved debates - currently open-weight models as capabilities increase, and a new track on when, if ever, frontier development should be slowed. You'll read the strongest arguments across the spectrum, take a position, and defend it in writing.
Unit 6: Your leverage point
Given everything you've learned, what specifically needs to happen - and what are you positioned to do about it? You'll audit your skills, network, and comparative advantage, and produce a concrete 6-month roadmap. Not a vague plan. A specific one, with the expectation you'll act on it.
How it works
Prerequisites
AGI Strategy course (or equivalent background in AI risks)
High-level understanding of AI
High agency - bias toward action, not just learning
We're selective. The course has an acceptance rate of roughly 20-25%.
We're looking for people who are analytical, motivated, and genuinely considering making this their life's work. If you're here to add a credential, this isn't for you.
And to be clear: this is not a corporate AI governance or AI ethics course.
Time
~30 hours total
Format
2-3 hours of readings, exercises, and reflections per unit
2 hours of live discussion per unit with your cohort of 7-9
Led by a Teaching Fellow working in AI governance
Price
Free (pay-what-you-want)
Schedule
Help build the field
We're also looking for people to help teach this course - and more broadly, to help build the AI governance talent pipeline.
Adjunct Expert
You work in AI governance - at a think tank, in government, at a frontier lab - and want to teach one cohort while keeping your primary role. ~5 hours per week per cohort. Compensation starts at $50/hour.
Fellow-Researcher
You're doing independent research, in a fellowship, or building something in the AI governance space, and want to combine that with teaching. A part-time teaching fellowship at 20-30 hours per week.
Facilitator
A lighter commitment focused on leading weekly small-group discussions with a cohort of 7-9. Good fit if you work in the field and want to contribute without a larger time commitment.
FAQ
Not corporate AI ethics committees or responsible AI checklists. We mean the governance of frontier AI and AGI - the policy, coordination, and institutional decisions that will shape whether advanced AI goes well. In this course's view, AI governance is the practice of shaping how AI is built and deployed through policy, institutions, norms, and relationships. It requires both analytical judgment (what interventions would actually work?) and political judgment (what's achievable, and how do you help make it happen?). This still-young field has deep disagreements over goals and methods - which makes it all the more important to evaluate proposals rigorously and build the influence to move the ones you believe in. That's where our course starts.
You need to understand AI well enough to engage - but that's about understanding, not credentials. Some of the best people in governance came from policy, law, or other fields but did the work to understand the technology.
Yes - if you want to specialize in AI policy with a focus on making AI go well.
If you've done AGI Strategy (or equivalent) and can engage with AI capabilities and risks, you're ready. Many participants are early-career.
Partly - we believe the US is where most of the leverage is right now. That said, we also cover other jurisdictions and higher-level proposals.
This course is a much lower commitment (~30 hours versus months of full-time work). We're often upstream of those programs - the course helps you decide, and if you want to pursue a fellowship, we can help you get there.
~5 hours per unit. The intensive track is a 6-day sprint - take some time off and lock in. The part-time track spreads it over six weeks; most people do it alongside work or school.