Case for Support: 2024-2026
Introduction
AI and synthetic biology will define the 21st century. They could contribute to an abundant future in which humanity has access to everything it desires, or to widespread devastation. The speed of development leaves influential decision-makers unprepared and unable to steer humanity towards a positive outcome.
By default, no single stakeholder has both the capability and the incentive to steer the development of powerful emerging technologies. Government bureaucracies struggle to attract top talent due to low salaries and poor working conditions, and the training they offer staff is limited or non-existent. Tech companies, on the other hand, attract some of the smartest people in the world, and they’re racing to build some of the most powerful technologies since nuclear weapons. They’re incentivised to move fast, cut corners, and compete for market share.
We’re focused on solving this by equipping decision-makers in industry and government with the understanding and motivation needed to ensure powerful new technologies benefit all of humanity.
We’re a UK-based nonprofit, and we design and run the world’s most popular courses on AI safety and biosecurity. To date, we’ve trained 2,000 people from 120 countries. We aspire to support the most motivated, thoughtful, and action-driven people in the world. We help policymakers understand how they can shape the development of AI and synthetic biology, and we support technologists in building safe, world-changing products. We’ve trained technical staff at OpenAI and Anthropic, government policymakers, startup CEOs, and journalists at leading publications. Our alumni have gone on to develop Anthropic’s Responsible Scaling Policy, conduct AI evaluations at the UK’s AI Safety Institute, work on mechanistic interpretability at Google DeepMind, and much more.
We’re seeking funding to train thousands of the world’s most influential decision-makers over the next two years and turn the tide of technological change towards a safer, more prosperous world.
How our courses work
The courses begin with a Learning Phase where students develop a comprehensive understanding of the field, and close with a Project Phase where they apply their new knowledge and skills to dig deep into one research question or hypothesis.
During the Learning Phase, students spend 8 weeks reading, completing exercises, and attending weekly sessions. The resources are developed in collaboration with subject matter experts, the exercises help students think through the content further, and the sessions provide an opportunity to learn from others. Each student is assigned to a cohort with 5-8 other students, meeting once a week to be guided through the session activities by an expert facilitator. The activities include role-plays, case studies, debates, and open-ended discussion.
During the Project Phase, students propose and work on their own project for 4 weeks, and we connect them to other students working on similar projects. The best projects are offered cash prizes. In summer 2024, we’ll experiment with turning the Project Phase into an intensive weekend hackathon, to evaluate whether that format leads to more engagement and better conversion to long-term impact.
Once a course concludes, we connect the best students with experts in relevant fields. We also share data with partner organisations like Anthropic, Constellation, and GovAI, so they can identify promising candidates for their open roles.
Impact
In May 2024, we analysed the career trajectories of alumni of the January 2022 AI Alignment course. Only 5% of students (18 in total) were working on AI safety when they started the course; two years later, 37% (123) work in impactful AI safety roles. The course helped them learn about the field, build relationships with future collaborators, become motivated to contribute, and identify impactful opportunities. This was only the second time we’d run a course, and we believe our courses today are far more effective.
To learn more about our impact to date, read our 2023 Impact Report summary.
Our plans
2024: Scaling existing courses
Steering the development of the world’s most powerful technologies will require contributions from thousands of skilled individuals, so scaling the reach of our training among influential decision-makers is key to our success.
From 2023 to 2024, we increased both the quality and the frequency of our courses, and we expect our cost per student to fall from £1,200 to £550.
With your financial support, we could accelerate our growth and train more influential decision-makers who can’t otherwise afford this training. We trained 800 individuals in 2023, and expect to train ~3,000 in 2024 and ~5,000 in 2025. To facilitate this, we’ll need to hire more Teaching Fellows as full-time facilitators.
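As a rough illustration of what this scaling implies, using only the figures above (the exchange rate of roughly $1.27/£ is our assumption, and per-student delivery is only part of our budget):

\[
(3{,}000 + 5{,}000)\ \text{students} \times \pounds550/\text{student} \approx \pounds4.4\mathrm{M} \approx \$5.6\mathrm{M}
\]

This is comparable in scale to the $5M budget described under “How you can help” below.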
2024: Developing our own content
We bootstrapped the world’s most popular courses on AI safety and biosecurity by relying on existing resources for the curricula. This served us well, but academia doesn’t incentivise high-quality explanations of concepts, and many key arguments haven’t been presented in an accessible and effective way. Therefore, we’ve begun creating our own written content. We’re also considering video explanations of challenging concepts to further aid learning, and podcast interviews with alumni who’ve gone on to have a large impact, to inspire our students and support our marketing efforts. Your support would enable us to build these resources and provide a world-class learning experience on the most important ideas in AI safety and biosecurity.
2025: Developing new courses
Throughout 2025, we’ll leverage the capability we’ve developed to design and run high-quality courses at scale. We’ll build partnerships with influential organisations to identify topics where new courses would be impactful, we’ll work with subject matter experts to design those courses, and we’ll use the reach of the partner organisations to raise awareness of the courses among influential decision-makers. We anticipate designing 5-10 new courses in 2025. Tentative course ideas include cybersecurity for AI companies, technical AI governance, deep dives on specific biosecurity proposals, and pandemic preparedness policy.
2025: A platform for locally-run courses
Our courses have become the standard for educating university students about AI safety. Roughly 100 student groups worldwide run their own versions of our courses, including groups at Cambridge, Harvard, Oxford, and Stanford. Today, local organisers invest hundreds of hours in operational tasks that could be automated, use outdated versions of our curricula, and don’t follow best practices from learning science. In 2025, we’ll roll out access to our platform to these groups, so local organisers can focus on promoting the courses within their institutions and adapting them to the local context. This will also provide us with data on who’s engaging at the university level, which we can feed to relevant partner organisations.
How you can help
We’re looking to raise $5M for our 2024-2026 budget, and we’d be incredibly grateful for any support you can provide. We’re seeking one donation greater than $2M, which would enable us to scale our operations, improve every student’s experience, and sprint into 2025; 2-3 donations of $0.7-1M, each supporting one course for an entire year; and 3-5 donations of $200-400k, each supporting one course iteration and enabling us to train 400 students.
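Taken together, these tiers bracket the target, using only the ranges above:

\[
\$2\mathrm{M} + \underbrace{2\text{--}3 \times \$0.7\text{--}1\mathrm{M}}_{\$1.4\text{--}3\mathrm{M}} + \underbrace{3\text{--}5 \times \$0.2\text{--}0.4\mathrm{M}}_{\$0.6\text{--}2\mathrm{M}} = \$4\text{--}7\mathrm{M}
\]

with the $5M goal sitting comfortably within that range.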
Conclusion
Over the next two years, we hope to train the most influential decision-makers on this century’s most important problems.
In order to do this, we need your support to promote our courses among higher-leverage audiences, design more courses on impactful topics, recruit more Teaching Fellows, and develop more engaging learning material.
This funding will give policymakers the tools they need to shape the development of AI and synthetic biology, and support technologists in building safe, world-changing products.
If you’re interested in learning more or you have any feedback or questions, please contact dewi [at] bluedot [dot] org. Thank you so much.
Appendix
Applications as of June 2024
Students as of June 2024