
A Guide to Digital Marketing Experimentation and Optimization in Education

Estimated read time – 15 minutes

This post has been updated to reflect Google’s decision to change Data Studio’s name to Looker Studio in the fall of 2022.

Unless your gut instinct is 100% correct 100% of the time, you should be testing your assumptions to ensure the content you produce resonates with your audience. If you’re not experimenting with your ads, there’s a good chance you’re wasting thousands of dollars of your already insufficient budget (because who ever has all the budget they would like?).

Every campaign should have some aspect of experimentation. You should strive not only to achieve your school’s goals but also to learn something with each and every campaign. Most important of all, share your findings with your team so that everyone can improve. Some of the elements that could be tested are:

  • Graphic or photograph
  • Professional photos vs. user-generated content
  • Layout
  • Color
  • Button wording or color
  • Tone of language
  • The use of a video on a landing page
  • Use of white space on a landing page

Experimentation is integral to building a successful website, running a top-performing ad campaign, or designing the best comprehensive communication flow. These are techniques that we at Niche use to ensure UI/UX decisions resonate with our users. Brainstorm with people in and out of your department or group to build stronger experiments, and take the time to make sure that you can measure and analyze the results.


When running an experiment, it’s very important to use UTM codes to track user behavior once people click through to your website. The content parameter is perfect for labeling your variants, and it can be pulled into reports or used as a secondary dimension in Google Analytics. To make the results easier to analyze and share, pull them into a Looker Studio dashboard.
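Tagging by hand makes it easy to mistype a parameter and split your data, so a tiny script can keep the naming consistent. Here’s a minimal sketch in Python; the example.edu URL and the variant labels are hypothetical placeholders, not a prescribed naming scheme.

```python
from urllib.parse import urlencode

# A sketch of building consistently tagged URLs for two ad variants.
# The base URL and all parameter values are hypothetical examples.
def tag_url(base_url: str, source: str, medium: str,
            campaign: str, content: str) -> str:
    params = {
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": content,  # labels the experiment variant
    }
    return f"{base_url}?{urlencode(params)}"

# Only utm_content differs between variants, so every other
# reporting dimension stays directly comparable.
for variant in ("variant-a", "variant-b"):
    print(tag_url("https://example.edu/visit", "display", "cpc",
                  "spring-open-house", variant))
```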

There are two types of experiments you can run. The first, which most people have likely heard of, is an A/B test. The more advanced and efficient version is known as a factorial test, sometimes referred to as a multivariate test.

A/B Testing

An A/B test is a simple experiment to find the impact of a single element by using a control and a variant. The variant can focus on a variety of elements within the ad, but it only makes one change at a time in order to isolate the impact. An A/B test is the easiest to start with, especially if you have not been experimenting in the past. Start with something you believe will impact conversions, and use information from your other marketing to determine where to begin. Develop your hypotheses using promising practices for the ad type or by looking at what your competitors are doing differently.

Using A/B tests:

  • Search ads – An A/B test in a search ad can experiment with the tone of language, the order of the institution name and call to action, or the phrasing of the call to action. You could also vary capitalization or the difficulty of the words used.
  • Display ads – Display ads offer many opportunities for small experiments. Try a graphic versus a photo, or put a photo taken by your staff against user-generated content. An experiment comparing photos with and without students in them can yield interesting results as well. A more design-centered experiment could compare a static display ad to an animated GIF/PNG ad.
  • Social ads – Use the same tactics that you would with display ads for your social ads experiments. The added opportunity offered by social networks is the ability to experiment with placements as well (though there are more efficient ways to do it). 
  • Video ads – An A/B test with video ads works much like one with audio ads, except you also have the ability to experiment with the visuals that accompany the audio. While the lengths of video ads are typically dictated by the placements, you can vary the lengths of your ads where you are able.
  • Audio ads – With an audio ad you can experiment with using a male or female voice, varying background music, or changing the order of elements within the audio. 
  • Landing pages – An A/B test for landing pages can experiment with sending traffic to existing pages versus dedicated campaign landing pages. On a landing page, an A/B test can focus on an image, a color, button text, or even which element appears above the fold (what a user sees on their screen when the page first loads).

When not to use an A/B test: If you want to make a large change involving multiple variables, an A/B test is not the right choice. You would attribute the difference in campaign performance to the variant, but by changing multiple elements at once you’re no longer isolating one variable, so you won’t know which element is responsible for the increase or decrease in conversions.

An example A/B test

Spring is the perfect time for yield events, so let’s plan a remarketing campaign to our admitted students. For this experiment we’ll keep the language simple: “Join other future Niches March 14 to stay overnight on campus and experience what your life will be like next year!” We will test a photo of last year’s event against similar user-generated content from our students. This experiment will help us learn whether admitted students gravitate toward more polished or rougher images, which can inform future experiments or be compared against similar campaigns for inquiries and applicants.

Set the UTM parameters: source as display (for example), medium as cpc, and campaign as March 14 Yield Visit. For the prior event photo we’ll set the content UTM parameter to prior, and for the student photos we’ll set it to UGC for user-generated content. Since the goal of this campaign is to drive an action, we’ll need a conversion pixel for the ad platform implemented so that it can track registrations by students who have viewed or engaged with the ad. We will also want to set up a goal in Google Analytics corresponding to the event registration. Once the campaign is running on your chosen platform, be sure to check in on your analytics to ensure the traffic is coming through as expected.
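Put together, the two ads would point at URLs along these lines (the landing page path is hypothetical, and the campaign name is slugged here for readability; Google Analytics reports whatever string you enter, so pick one convention and stick to it):

```
https://example.edu/yield-visit?utm_source=display&utm_medium=cpc&utm_campaign=march-14-yield-visit&utm_content=prior
https://example.edu/yield-visit?utm_source=display&utm_medium=cpc&utm_campaign=march-14-yield-visit&utm_content=ugc
```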

To analyze the results we need to consider our goals (or key performance indicators). In this case we want registrations, so we will compare the two ads by conversions. We’ll check on the campaign in Google Analytics in the Acquisition > Campaigns > All Campaigns report. Then we click on the “March 14 Yield Visit” campaign, add the secondary dimension of Ad Content, and look at the Goal Conversions for the event. To compare the results of each ad, it’s helpful to put the relevant metrics in a spreadsheet for easy comparison.
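Before declaring a winner, it’s also worth checking that the difference is bigger than chance. Here’s a minimal sketch of a two-proportion z-test in Python; the click and conversion counts are hypothetical, chosen to echo the pattern described below (the user-generated content earns fewer clicks but more conversions).

```python
from math import sqrt
from statistics import NormalDist

# A sketch of a two-proportion z-test comparing the conversion rates
# of two ad variants. All counts are hypothetical examples.
def conversion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_a, p_b, p_value

# prior-event photo: 18 registrations from 900 clicks
# user-generated content: 31 registrations from 700 clicks
rate_prior, rate_ugc, p = conversion_z_test(18, 900, 31, 700)
print(f"prior: {rate_prior:.1%}, UGC: {rate_ugc:.1%}, p = {p:.3f}")
```

A p-value below 0.05 is a common (if arbitrary) threshold for treating the difference as real rather than noise; with low-volume campaigns you may simply need to run the test longer.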


In this example, you can quickly see that even though the user-generated content received fewer impressions and clicks (commonly overvalued metrics), it generated more conversions at a lower cost along with more engaged website users. After running these ads for a few weeks, we will optimize by replacing the photos from the previous year’s event with a different type of user-generated content or different ad copy to try to improve upon our results.

Factorial (Multivariate) Testing

Factorial testing is a great way to quickly optimize your ads by running multiple iterations simultaneously. You will be able to make changes and find insights more quickly, making it more cost effective. It may sound intimidating, but this guide should help walk you through it. And, as you might guess if you’ve been reading the Enrollment Insights blog or attending our webinars, I have a template to help analyze and report on your factorial tests.

A factorial test typically involves running an experiment with multiple versions of copy (the text in the ad or landing page) against multiple assets (the visuals). For a simple factorial test we might use the copy “Apply now!” and “Free application” with a photo of students and a photo of a campus landmark. You would have four ads running in your ad set simultaneously, providing more data to work with. After two to three weeks, depending on the volume your campaign is generating, you can analyze the early results and optimize: turn off the underperforming ads and work toward new variants based on the high performer(s).
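Because the combinations multiply quickly (two copies and two assets make four ads; three of each make nine), enumerating the matrix in a short script is a handy way to generate consistent ad names and UTM values. A minimal sketch, with hypothetical variant labels:

```python
from itertools import product

# A sketch of enumerating every copy-asset combination in a
# factorial test. The variant labels are hypothetical placeholders.
copy_variants = ["apply-now", "free-application"]
asset_variants = ["students-photo", "landmark-photo"]

combos = list(product(copy_variants, asset_variants))
for n, (copy, asset) in enumerate(combos, start=1):
    # One ad per combination; reusing the labels as the utm_content
    # value keeps each variant identifiable in your reports.
    print(f"Ad {n}: copy={copy}, asset={asset}, "
          f"utm_content={copy}--{asset}")
```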

In our example, if “Apply now!” with the campus landmark was our best ad, we could add a new version of copy that also uses a direct call to action, along with another photo option of a campus landmark (or perhaps a building interior this time). Now we can wait another two to three weeks and optimize again. In this way you’re always moving toward better-performing ads and preventing audience fatigue during long-term campaigns.

When setting up a factorial test for a campaign, which I recommend doing every single time you have the capacity, keep the options clearly differentiated to start. Starting with three versions of copy and three assets is a good goal. If two assets or two versions of copy are performing similarly, you can look for something that blends them both as your next iteration. If a user-generated asset and one focused on a campus landmark both performed well, try to find user-generated content of the landmark as a middle ground for your asset iteration.

The great news for you is that factorial testing is easy on Google and Facebook at least, the two networks on which you are most likely to be advertising. In Google, Responsive Search and Responsive Display ads do just this. Unfortunately, the native reporting only provides qualitative combination results. Google’s algorithm will automatically optimize for the combination that performs best, and you can see which one that was, so you can still glean some insights.

In Facebook ads, you can create ads within your ad set with each asset and one of your copy variants. Then all you need to do is duplicate your ads in the same ad set and, with all of the duplicates selected, change the copy; repeat for as many variants as you have. This allows you to quickly create large groups of ads without starting from scratch each time. Just be sure to go back and edit your ad names and content UTM parameter.

Using factorial tests:

  • Search ads – You can save time and use responsive search ads if you’re using Google. Otherwise, you will need to manually change Headline and Description options. Vary the order of key points and the tone of language to determine impact. Add another layer of complexity by using the same ads across multiple ad groups of specific keywords to determine if the results are universal or vary by searcher intent.
  • Display ads – Display ads are where factorial tests really shine. Again, you can use responsive display ads, which will optimize automatically after you enter only your headlines, descriptions, and a set of images. The disadvantage is that you only get aggregate data and descriptive results about which combination performed best (for those who are data averse, however, that is an advantage). For traditional display ads this is a more time-consuming process.
  • Social ads – As mentioned above, Facebook factorial testing is so easy that there’s no excuse not to (aside from not having the assets). It gets more complicated with Twitter, LinkedIn, Reddit, or Pinterest, but the process is the same as you would use for display ads. Pay attention to any differences in performance by placement as well; your winning combination in a native position may differ from the one in a story format.
  • Video ads – A factorial test for video ads can be very time-consuming. There are some tools, like Director Mix, to help customize videos at scale, but this will be the most complicated type of experiment you could run. If you keep the videos simple, and have the time to create multiple videos, this can be a way to optimize your video efforts quickly.
  • Audio ads – You will need to consider your audio ads differently than your other ads. Consider using different voices and transcripts in your experiments. Recording student voices can be a good way for students to earn some extra money, and it provides added authenticity.
  • Landing pages – Factorial testing for landing pages can be done, but it can be very complicated as well. You’re best off using a tool like Google Optimize rather than trying to build a series of pages and craft individual campaigns sending traffic to each. Optimize allows you to use a WYSIWYG editor to craft experimental variants from existing pages and choose how frequently to serve each one. You would only need to build one landing page and use it as your template to edit for the variants. You can see results within either Optimize or Analytics, and you can even build a dashboard in Looker Studio to see only the metrics you care about.

When not to use a factorial test: If you don’t have the capacity to build multiple ads, then I would recommend keeping it simple and doing an A/B test. Landing page experiments are also more difficult and typically require additional software, so if you’re a novice with experimentation I recommend not leaping into a factorial landing page experiment.

An example factorial test

We’re feeling more confident about experimenting with our digital ads now, so let’s run another ad campaign for our spring yield event. This time we’ll use three versions of the copy to go with three assets: our photo from the prior year, user-generated content, and now a short video with details. We have nine different ads being served instead of only two, which means we’re gathering data faster, so we’ll use the template to help organize our results.


This time our most engaging ad was the one using asset 2 and copy 2, and overall those were both our best options. Our goal was to generate registrations (conversions), though, so we need to look beyond the vanity metrics. By conversions, asset 2 with copy 3 did the best job of generating registrations. Our next experiment should find assets similar to asset 2 with wording that combines elements of copy 2 and copy 3, since those were our winners.
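The template handles this layout for you, but if you prefer working in code, here’s a minimal sketch of the same analysis in Python with pandas; the numbers are hypothetical, arranged to match the results described above.

```python
import pandas as pd

# A sketch of summarizing factorial results. The numbers are
# hypothetical but mirror the example: asset 2 + copy 2 drew the most
# clicks, while asset 2 + copy 3 drove the most registrations.
results = pd.DataFrame({
    "asset":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "copy":        [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "clicks":      [210, 260, 190, 240, 380, 300, 170, 220, 180],
    "conversions": [4, 6, 5, 7, 9, 14, 3, 5, 4],
})
results["conv_rate"] = results["conversions"] / results["clicks"]

# The full copy-by-asset grid at a glance...
print(results.pivot(index="asset", columns="copy", values="conversions"))
# ...and the top combinations by what actually matters: conversions.
print(results.sort_values("conversions", ascending=False).head(3))
```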

The Optimization Cycle

You can run the absolute best experiments and collect great data; however, if you never share the data and use it to optimize and improve, it will all be a wasted exercise. It’s helpful to remember a simple acronym: EMO. Experiment – Measure – Optimize. Run an experiment; after a few weeks, measure the results and analyze them to determine the best option; then optimize based on what you found. You can continue this cycle throughout a campaign to eliminate ad fatigue by continually trying new messages and assets. Use findings from prior experiments to inform future hypotheses, and take time to learn from others on campus.

Building a Culture of Experimentation

Setting up and analyzing experiments cannot be done in a vacuum. You will need buy-in from leadership, affected departments and offices, and the rest of your marketing and communications teams in order to be successful. It will take some planning and organization to run multiple experiments across your organization and continually optimize while planning weeks or months ahead as well. Here are a few tips to help you build a culture of experimentation in your office:

  • Have a short, focused weekly or biweekly meeting to make sure key personnel are updated on what experiments are running and how they are performing. If you’re working on the experiments every day, it’s easy to forget that others aren’t; those colleagues will have valuable outsider input.
  • Use project management software or a shared document to organize ideas for new experiments or new variables for current experiments. This helps reduce the lag time when coming up with the next iteration.
  • Challenge staff to create iterations when you first start planning a campaign or designing a page. Experimentation should be well integrated from the very beginning. Use prior experiments and learn from others in your network to develop hypotheses for future campaigns.
  • Take a full picture of your experiments. Results are rarely black and white; there are always subtleties. When planning the next iteration for optimization, consider the tone, difficulty of words used, colors in imagery, pacing of video, and placement of elements. Have designers and writers work alongside your analytical staff to uncover new insights. If you can develop a small team who all think differently and can work together, you will have stronger ads.
  • Keep the results of your experiments with samples in shared folders or documents that anyone in the office can access. You may forget results over time. Also, the results could be useful for others on campus working on email, print, and organic social media.
  • Use students to generate ideas for new experiments; they know what they want more than you do. If you don’t have students already working in your office, you can take advantage of marketing classes or just offer some food to anyone who attends a quick session.

Prior to coming to Niche in 2019, Will served 9 years at Manchester University in roles as an Admissions Counselor, Associate Director for Admissions Operations, Social Media Coordinator, and ultimately as Digital Strategist. Will surfaces tactical insights from user behavior and surveys to help higher ed build recruitment strategies. In addition to the Enrollment Insights blog, webinars, and podcast, Will is a frequent conference speaker and podcast guest. He has presented at NACAC, AACRAO-SEM, AMA Higher Ed, CASE V, EduWeb, and EMA. Will’s work has been featured in Forbes, Inside Higher Ed, CNBC, CNN, the LA Times, and The New York Times, among other outlets.