Results from testing ad adjustments

By Adam Jones (Published on August 7, 2024)

At BlueDot Impact, we run educational courses that help people develop knowledge, skills and connections to pursue high-impact careers. To find great people for these courses, we’ve been experimenting with paid advertising.

This article explores the results of tests we ran on specific adjustments to our ads for our AI Alignment (June 2024) course. The course focuses on helping people get into technical AI safety research to reduce risk from advanced AI.

While we accept people from a range of backgrounds, our most typical participant:

  • is a working professional, usually with several years of experience
  • has a background in computer science, machine learning or software engineering
  • lives in a developed English-speaking country, often the US or UK

Given this is our most identifiable audience, we focused most of our efforts here. We also advertised a little to ML academics and to some specific AI organisations.[1] We previously advertised to university students, but chose not to do so this round.[2]

Ad adjustments

This section looks at specific experiments we ran to better understand what kinds of ad visuals perform better. These experiments were run solely on LinkedIn: it had the best analytics capabilities of the platforms we used, and it’s where we spent the most, so we had the most data.

We use cost per click as a measure of ad performance. This is a lossy proxy for what we actually want: great people who will actually apply. However, it strikes a balance between relevance to our goal and having enough volume to make meaningful comparisons. We use cost per click (rather than click-through rate, total clicks, etc.) because we only pay when someone clicks an ad, so it best represents how effective our ad spend is across different ads.

Lower is better for cost per click (as this means we’re paying less for someone to click the ad, and come to our website).
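
To make the metric concrete, here’s a minimal sketch in Python of how a cost-per-click comparison works. The spend and click counts are made up purely for illustration (currency units are arbitrary):

    # Compare two ad variants on cost per click (CPC); all numbers hypothetical.
    def cost_per_click(spend, clicks):
        """Average amount paid for each click on an ad variant."""
        return spend / clicks

    variant_a = cost_per_click(spend=500.00, clicks=80)  # 6.25 per click
    variant_b = cost_per_click(spend=500.00, clicks=93)  # ~5.38 per click

    # Relative saving: variant B's clicks cost ~14% less than variant A's.
    saving = 1 - variant_b / variant_a
    print(f"A: {variant_a:.2f}, B: {variant_b:.2f}, saving: {saving:.0%}")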

‘Apply now’ banner vs no banner

Classic marketing literature suggests having very clear and prominent calls to action. We tried adding a bright purple ‘Apply now’ banner to the bottom of our ads, to see how this changed things.

We weren’t sure whether this would have much of an effect, given that LinkedIn already shows a call-to-action banner under the advert. However, LinkedIn’s banner is relatively muted and perhaps less attention-grabbing than ours, so we went ahead and tested it.

It turns out that the banner did result in more clicks for our budget, with an average cost per click that was 14% lower.

Skills vs individual focus

Between these two ads, the main difference is the copy. The first ad focuses on the reader having relevant skills to solve problems, while the second focuses on the individual themselves, saying that they are needed to make AI safe. There are also some changes to the size of the image and text.

In this case, focusing on the person’s ML skills performed considerably better.

Image vs plain colour

We experimented with a basic concept here: big text that tried to grab the attention of ML engineers, on either an image or a plain purple background. We’ve been told that images of people tend to perform better, but also that our bright purple brand colours might be particularly striking and attention-grabbing.

In the end, this didn’t make much of a difference.

Light vs dark background

This concept lists a number of different AI risks, and we tested it on black, dark grey, and light backgrounds. We didn’t end up getting enough data on the dark grey background (though the small amount we did get suggested it wasn’t much different to the black one).

For our audience, a dark background seemed to perform slightly better. But the data here were more limited, so we can’t be sure; our confidence in this result is lower than in the others (P = 0.29).
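
For readers curious where a figure like P = 0.29 comes from: one standard way to compare two ad variants is a two-proportion z-test on clicks versus impressions. The sketch below shows the method on hypothetical counts (chosen so the output lands near 0.29); it isn’t our exact calculation:

    # Assumed approach: two-proportion z-test on click-through counts,
    # using statsmodels. All counts below are hypothetical.
    from statsmodels.stats.proportion import proportions_ztest

    clicks = [141, 124]           # dark background, light background
    impressions = [20000, 20000]  # impressions served to each variant

    z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
    print(f"z = {z_stat:.2f}, p = {p_value:.2f}")  # p ≈ 0.29 for these counts

A p-value of 0.29 means a gap this size would appear by chance roughly 29% of the time even if the two backgrounds performed identically, which is why we treat this result as suggestive rather than conclusive.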

More vs less text on AI images

An ad concept that performed relatively well for us was a series of AI-generated images that highlighted some of the oddities of AI image generators.

Classic marketing literature suggests that a simpler, shorter message is better. That seemed to be true here: less text outperformed more text.

Other areas for improvement

We had hoped to test a few other adjustments, but didn’t get around to them. These included:

  • More vs less text on standard imagery
  • Focus on risks vs benefits
  • One big testimonial vs multiple short testimonials

We did make creatives for these, but later realised that running them all would spread our budget too thin to evaluate the tests properly. Additionally, it’s a fair bit of work to set up and track each ad design.

We also realised afterwards that there are often easier ways to decrease the cost per click than these kinds of adjustments. For example:

  • running the campaign for longer at a lower budget
  • trying completely different concepts, rather than minor adjustments to the same concept

We also decreased the cost per conversion significantly by making our website much better. We discussed this in our retrospective for our March 2024 AI Alignment course.

Footnotes

  1. All the platforms’ targeting abilities are far from perfect, so implementations are fuzzy. LinkedIn’s was the most specific; there we created six audiences:

    • Machine learning professionals
    • Software engineers, with some machine learning skills
    • Tech professionals (software engineers, data scientists, technical product managers etc.) interested in AI, and open to switching jobs
    • Machine learning academics
    • Specific AI organisations, for example people already in frontier AI companies
    • Retargeting, i.e. people who had already engaged with us on LinkedIn previously
  2. University students are much cheaper to market to, but tend to be much lower-quality participants than working professionals.

    Undergraduates have much worse attendance, submit poorer-quality projects, and are less likely to go on to work in the field. They’re also more likely to have alternative opportunities to learn about our topic areas, for example through university groups and societies.

    Of course there are exceptions, and we are keen for great students to apply! But previously we’ve seen that the lower quality of student applications outweighs the reduced cost of marketing to them, so we decided not to focus on them this round.
