Has your brand adopted a test-and-learn approach in your omnichannel digital marketing campaigns? With Google’s (almost) daily launch of new bid-optimization methods and Facebook’s ever-changing suite of features and data-access capabilities, it could be detrimental not to.
When leveraged correctly, a well-thought-out, holistic test-and-learn approach examines each channel and tactic using meaningful success metrics according to where the tactic falls in the consumer journey. For example, since we know that the paid search channel is among the closest to generating a lead or action, it should be measured through metrics like cost-per-lead (CPL), cost-per-acquisition (CPA), return on investment (ROI), or return on ad spend (ROAS). Conversely, display ads can reach more of the target audience at a greater frequency to generate awareness or buzz and are, therefore, measured with a broader KPI or metric such as click-through rate (CTR). This, however, is only the tip of the iceberg.
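For readers who want the math behind those acronyms, here is a quick illustration. All the numbers below are made up for the example; the metrics themselves are just simple ratios of spend, response, and revenue:

```python
# Illustrative only: hypothetical campaign numbers, not client data.
spend = 5_000.00       # total media spend ($)
impressions = 250_000  # ad impressions served
clicks = 3_750         # clicks on the ad
leads = 125            # leads generated
acquisitions = 25      # closed sales
revenue = 20_000.00    # revenue attributed to the campaign

ctr = clicks / impressions  # click-through rate (awareness-stage metric)
cpl = spend / leads         # cost per lead (lower-funnel metric)
cpa = spend / acquisitions  # cost per acquisition
roas = revenue / spend      # return on ad spend

print(f"CTR:  {ctr:.2%}")    # 1.50%
print(f"CPL:  ${cpl:.2f}")   # $40.00
print(f"CPA:  ${cpa:.2f}")   # $200.00
print(f"ROAS: {roas:.1f}x")  # 4.0x
```

The point of matching the metric to the funnel stage is visible here: a display campaign judged on CPA would look expensive, while the same campaign judged on CTR tells you whether it did its actual job of generating attention.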
Brands and agencies need to come together to develop a clear benchmark for success, based on the client’s specific problem, initiative, or objective, in order to build a well-informed test-and-learn strategy.
This is where we must not fear failure!
Now comes the difficult part—what do we test?
To begin, we need to look for a variable that will have a significant impact on campaign performance—something that can actually change the results. Is that the audience? The creative? The ad placement? We also need to consider the cost of implementing the test: for example, does any new collateral need to be created? Finally, we want to estimate or model the lift in the KPI goal to determine whether the test is worth the investment.
By testing multiple combinations of variables to answer a few focused questions, we can clearly see which variables generate the best consumer response and action (with oftentimes VERY surprising results).
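To decide which variant actually "won" rather than trusting noise, a standard tool is a two-proportion z-test on the response rates. A minimal sketch, assuming a simple two-variant creative test measured on CTR (all counts hypothetical):

```python
import math


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's response rate significantly
    different from variant A's? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - normal CDF(|z|))
    return z, p_value


# Hypothetical results: creative A vs. creative B, clicks out of impressions.
z, p = two_proportion_z(300, 20_000, 390, 20_000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Variant B's CTR difference is statistically significant.")
```

The same check applies to any pairwise comparison in a multivariate test; without it, "VERY surprising results" can just as easily be sampling noise as a genuine audience insight.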
What are some ways to test?
It’s surprisingly simple to set up a test-and-learn approach to digital marketing. For example, for one client we wanted to test how different creatives and their flighting could inform future strategy. After running the test, we saw that flighting did have a substantial impact; however, that impact varied by target audience and their reason for visiting the site. We then varied the flighting only for the audiences where it generated the most impact.
Another client wanted to test whether promotions really work. They were not seeing any response to an existing incentive, and the agreed solution was to test larger incentives. In this case, the larger incentive did not matter to the audience either. Though this is not the outcome purists would like to see, the learning was invaluable in determining segmentation for future promotions.
See, I told you that it was OK to make mistakes.