Fix your campaigns with this one weird trick
Aka “math for marketing”
Every time you put a brand name or budget behind demand gen $$, you’ll need to justify it up front and decide afterwards whether it went well.
- You need to convince the client, if you’re consulting/at an agency
- You’ll need to convince internal partners (leadership, finance, sales, etc.) if it’s your company budget
It used to be harder to get an efficient read on campaigns. You’re running TV ads or sending out mail. Maybe putting things in the newspaper. You might have a phone line to take direct orders, but otherwise you’re stuck waiting for results to come through: analyzing the performance of mail returns, or trying to understand any change in same-store sales. Both over a long window of time.
Fast forward to 2020 with the opposite problem: too MANY data sources on campaign performance, and multiple tech platforms measuring the same thing in real time. Do you use numbers from the finance systems for your total conversions or do you use the conversion tracking in Marketo? Do you enable Google’s Optimize 360 for A/B testing or set up an à la carte test solution like Optimizely with more bells and whistles?
And once you’ve sourced the numbers — what do they mean? Which numbers should you ingest for your reporting? What metrics truly show impact and performance?
If your technology product relies on marketing inputs, you need to understand the physics of marketing. There’s actually simple math you should be doing for every campaign. In ANY media. Coupons, emails, targeted BOGOs, you name it.
- First, to establish a benchmark for success/rev forecasting.
- Then partway through the campaign (if data is available), while you still have a chance to course-correct.
- Then after the campaign ends, to understand what happened and line up improvements to test the next time you paddle out.
In a job interview years back, someone asked me what I meant by “math for marketing.” I ended up drawing Peter’s version of the quadratic formula.
↑↑↑ This is the generic version, and it changes based on what your campaign is selling. This is different from conversion funnel math and only focuses on attributable campaigns: “how did this one activation do with my existing leads?”
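If it helps to see the shape of it, here’s a minimal sketch in Python — a stand-in for the drawing, not a reproduction of it. Every rate and dollar figure below is a hypothetical placeholder:

```python
# A minimal sketch of the generic campaign math. All inputs are
# hypothetical placeholders, not benchmarks from any real campaign.

def campaign_math(audience_size, response_rate, conversion_rate,
                  revenue_per_conversion, total_cost):
    responses = audience_size * response_rate        # people who engage
    conversions = responses * conversion_rate        # people who buy
    revenue = conversions * revenue_per_conversion   # gross revenue
    roi = (revenue - total_cost) / total_cost        # return on spend
    return {"responses": responses, "conversions": conversions,
            "revenue": revenue, "roi": roi}

# Hypothetical example: 100k known leads, 2% respond, 10% of those buy
print(campaign_math(100_000, 0.02, 0.10, 50.00, 5_500))
```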
At the beginning of this process, you’re forecasting: establishing a reasonable benchmark for performance and projecting the numerical outcomes.
- This is important for budgetary purposes (spend approval) as well as planning for levels of customer demand (website capacity, IRL supply chain, etc.).
- A prior year’s or prior campaign’s results are always the best place to start. If you don’t have this for a new form of campaign, your media vendor may have estimated rates in their sales materials. (“We typically see coupon redemptions between X% & Y%”)
- Without prior data or a vendor estimate, you may need to forecast a high/low set of scenarios, within constraints like your ROI ratio or budget $. Tweak the figures in yellow across a range of values and see the possible outcomes.
Interested parties can recreate this in Google Sheets or Excel at their leisure. The example below uses a placeholder cost of $0.55 per audience member.
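If spreadsheets aren’t your thing, here’s a minimal sketch of the same high/low exercise. It keeps the $0.55 placeholder cost; the rates and order values are made-up stand-ins for “the figures in yellow,” not real benchmarks:

```python
# A minimal high/low scenario forecast. The response rates, conversion
# rates, and order values are hypothetical placeholders.

COST_PER_AUDIENCE_MEMBER = 0.55  # placeholder cost from the example

def forecast(audience_size, response_rate, conversion_rate, avg_order_value):
    cost = audience_size * COST_PER_AUDIENCE_MEMBER
    revenue = audience_size * response_rate * conversion_rate * avg_order_value
    roi = (revenue - cost) / cost
    return revenue, cost, roi

scenarios = {
    "low":  dict(response_rate=0.01, conversion_rate=0.15, avg_order_value=100),
    "high": dict(response_rate=0.03, conversion_rate=0.25, avg_order_value=120),
}

for name, inputs in scenarios.items():
    revenue, cost, roi = forecast(audience_size=50_000, **inputs)
    print(f"{name}: revenue ${revenue:,.0f} / cost ${cost:,.0f} / ROI {roi:.0%}")
```

Running both scenarios shows whether your range straddles break-even — which is exactly the conversation you want to have before spend approval, not after.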
Partway through the campaign, it’s time to pulse-check performance. Are you near your forecast? Are you falling short or exceeding it? Digital campaigns are ongoing, with options to switch or change what users see from the next day forward. In some channels (like direct mail or a one-time email blast) you won’t have the ability to change a thing. But having this data won’t hurt.
- My favorite tweak, and the most cost-efficient one, is seeing if the conversion/landing page has a high bounce rate. Is something off here — and you’re sending the most interested people to a leaky sieve of a landing page?
- You may also want to exercise caution about weighing results prematurely. Prior campaigns should give you an idea of how long it takes for the majority of responses to come in (see the pacing sketch after this list).
- Consider the impact of changes mid-campaign on any tests you may have. Too many changes to a landing page or your targeting can invalidate your test results. In that case you’d have to measure before and after the change separately.
- Lastly — is your original forecast still accurate? You may have been set up to fail with the wrong numbers. Is your audience drastically different? Have costs changed in some unforeseeable way? Reforecast if needed.
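As promised above, a quick sketch of that pacing check. It assumes prior campaigns tell you roughly what share of eventual responses arrives by a given day; that curve and every number here are hypothetical:

```python
# A minimal mid-campaign pacing check. All numbers are placeholders.

share_expected_by_now = 0.40          # e.g., ~40% of responses by day 7
forecast_total_conversions = 500      # from your original forecast
actual_conversions_to_date = 150      # from your tracking platform

# If the historical response curve holds, project the final total.
projected_total = actual_conversions_to_date / share_expected_by_now
pace_vs_forecast = projected_total / forecast_total_conversions

print(f"Projected total: {projected_total:.0f} conversions")
print(f"Pace vs. forecast: {pace_vs_forecast:.0%}")  # 75% = falling short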
At the end of the campaign, it’s time to review results and plan for the future. Were you well served by this media format (if it’s new)? Are you seeing diminishing results compared to prior campaigns (if it’s a repeat)?
The three questions you should be asking:
- Did this perform the way I needed it to?
- What did I learn?
- What can I test to improve? (dedicating a future post to this one)
In the finalized numbers example I put together, we were able to turn around the Multichannel audience, but we’ve learned something about forecasting and targeting for that group. (See below)
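If you’d rather run that same comparison in code than in a spreadsheet, here’s a minimal sketch of a forecast-vs-actuals rollup by segment. The segment names echo the example, but every number is a placeholder, not the real finalized results:

```python
# A minimal forecast-vs-actuals rollup by audience segment.
# All figures are hypothetical placeholders.

forecast = {"Email only": 200, "Multichannel": 150, "Direct mail only": 100}
actuals  = {"Email only": 220, "Multichannel": 95,  "Direct mail only": 105}

for segment, forecast_n in forecast.items():
    variance = (actuals[segment] - forecast_n) / forecast_n
    print(f"{segment}: forecast {forecast_n}, actual {actuals[segment]}, "
          f"variance {variance:+.0%}")
```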
And there you have it. This largely focuses on “known customers to purchase” types of campaigns. Lead generation (the first step in the conversion funnel) is its own beast, and we’ll be talking about that in the next post.
By the way, the “one weird trick” in the headline refers to the landing page. Seriously, it’s probably your landing page. Go check that thing, right now.