
AI Safety Strategist (Contractor / RFP)

Help develop a comprehensive strategy to address catastrophic risks from artificial intelligence.

Apply now

Who we are

BlueDot Impact runs the world’s largest courses on AI Safety. Our AI alignment and AI governance courses have trained over 2,000 individuals, many of whom now work at major AI companies, top universities and governments.

Our courses are widely regarded as the go-to place to learn about AI alignment, and the AI Safety Fundamentals website has over 10,000 unique visitors each month.

Our article Why we run our AI safety courses explains our motivations in more detail.

The challenge

Transformative AI could be humanity’s most consequential technology, with potential for both immense benefit and catastrophic harm through misuse or loss of control.

Despite this, few people with the right combination of skills, context, and motivation are working in roles that reduce these risks.

Our courses aim to equip people with these attributes so they can contribute to reducing catastrophic risks from AI.

However, we’ve realised we’re not certain exactly what we need people to be working on for AI to go well, which makes it hard to target our course portfolio and content towards this objective.

We looked outwards for an existing AI safety strategy or plan that could guide us, but we didn’t find anything that met our needs. In particular, we wanted something that is:

  • Sufficient: if all the actions are carried out, we would consider the world to be in a good state.
  • Feasible: we think it’s reasonably plausible the plan can be executed. This generally excludes plans that require actors to take significant actions against their own interests.
  • Action-orientated: the plan is a set of actions that explain how people can contribute (and thus how our courses could prepare people to contribute), rather than just a list of events that happen.
  • Comprehensive: the plan covers all the actions needed, not just those for a short time frame or a single jurisdiction.

We are therefore trying to compile an AI safety strategy that meets these criteria, to inform both our courses and the wider field.

For more details about what we’re trying to achieve, see the context section of the public work test details.

What you’ll do

Ultimately, we’re looking for a great AI safety strategy that meets the criteria above. We are flexible about how people engage with us on this.

We are open to:

  • Hiring a fixed-term contractor to develop the strategy with us.
  • Suggestions for plans you think meet (at least some of) the criteria above. We’ve made a compilation of ones we’ve already considered.
  • Invitations to collaborate on a joint project.
  • Recommendations of people we could reach out to, either to hire as contractors or to collaborate with.
  • Any other proposals that will help us get closer to having an AI safety strategy!

To put forward any of these suggestions or proposals, email adam@bluedot.org.

If you joined us as a contractor, you’d spend 2-3 months helping us to:

  • Create a comprehensive AI safety strategy specifying:
    • Actions needed across technical research, policy, and other areas
    • Who should take these actions and when
    • Dependencies and sequencing
    • Success metrics to track progress on the strategy
  • Map required workforce capabilities to this strategy
  • Adapt BlueDot’s educational programs

Location and compensation

Location: Hybrid in London, UK (preferred) or fully remote. We accept applications from anywhere in the world.

Compensation: £4,000–£7,500/month. You will be hired as a contractor and will be responsible for your own taxes. You can also take this on as an unpaid volunteer (for example, for visa or conflict-of-interest reasons); opting to go without payment will have no effect on whether we select you for this role.

Dates: December 2024 to January/February 2025.

Times: UK hours preferred (9:30am to 5pm-6pm). We can accommodate other timezones provided there is sufficient overlap with UK hours.

About you

You might be particularly good for the contractor role if you have:

  • Broad AI safety expertise, across both technical and governance work.
  • Strategy skills, for example having worked in a government or corporate strategy role, or having thought about strategy for an altruistic cause area.
  • Written communication skills, particularly writing for general audiences.

We encourage speculative applications; we expect many strong candidates will not meet all of the criteria listed here.

Apply for this role

The application process consists of three stages:

  • Stage 1: Initial application (15 minutes)
  • Stage 2: Paid work test (2 hours)
  • Stage 3: Interview (60 minutes)

The work test involves analysing an existing AI strategy. Some public details are available here, but the exact task is kept private until you start, to ensure fairness between applicants. If we invite you to complete the work test, we will pay you £60 for your time.

The interview will mainly be a discussion of your ideas on AI safety strategy, to understand your thinking and evaluate your collaborative skills.

The above is currently how we expect to hire. However, this may change as we learn more about how well this process works.

We are hiring on a rolling basis, and encourage you to apply early.
