Finding Digital Measurement Success, Part 2: Attribution and Incrementality

In the first installment of this series (“Finding Digital Measurement Success, Part 1: Cohorts vs. Clicks”), we established the importance of using a cohort of metrics to measure success. There are two more methods savvy marketers use to truly quantify success: attribution and incrementality. Although both terms are widely used to tackle the measurement problem, they are often conflated, leading to confusion.

Let’s start by defining what each term means.

Attribution and incrementality quantify different things:

Attribution examines the touchpoints along the journey that influenced a purchase. It is correlative rather than causal: although it tries to assign credit, it cannot definitively tie the sale to any single touchpoint. It answers the question, “Which touchpoints were associated with a consumer conversion?”

Incrementality measures the impact of a single variable on the behavior of an individual user. In digital display marketing, it is most often used to measure the impact of a branded digital ad (the exposed group) versus a public service announcement, or PSA (the control group). Lift is measured as the percentage difference between the two. Incrementality demonstrates the value of advertising, helping answer the question, “Did my ad result in a purchase?”
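As a minimal sketch of the lift calculation described above (all figures are hypothetical, not real campaign data), lift is simply the relative difference in conversion rate between the exposed and control groups:

```python
def conversion_rate(conversions: int, users: int) -> float:
    """Fraction of users in a group who converted."""
    return conversions / users

def incremental_lift(exposed_rate: float, control_rate: float) -> float:
    """Relative difference between exposed and control conversion rates."""
    return (exposed_rate - control_rate) / control_rate

# Hypothetical example: 1.2% conversion among users shown the branded ad,
# vs. 1.0% among users shown the PSA.
exposed = conversion_rate(120, 10_000)   # 0.012
control = conversion_rate(100, 10_000)   # 0.010
print(f"Lift: {incremental_lift(exposed, control):.0%}")  # Lift: 20%
```

In this toy example the branded ad drove 20% more conversions than the PSA baseline, which is the kind of difference an incrementality test is designed to surface.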

A deep dive into attribution

Attribution has nuances of its own, and it is important to understand both its challenges and its solutions. A consumer’s buying journey today contains so many touchpoints that it becomes difficult to understand which advertising partner helped generate the final conversion.

A glimpse of the journey a hypothetical consumer, “Sarah,” might take reveals the challenges of conversion and performance metrics.

To mitigate this, the first thing marketers should do is apply common sense: what are you expecting, and are your campaign results meeting those expectations?

The next step is to think of measurement in “shapes” rather than as individual numbers (e.g., a single CPA), as these singular numbers often obscure the reality and complexity of campaign results. You may find it much easier to gauge the success of your tactics when you don’t consolidate results into one number; think of an ad campaign as a portfolio of ad impressions rather than viewing impressions in isolation.
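To make the “shapes, not single numbers” point concrete, here is a small sketch with hypothetical spend and conversion figures showing how a blended CPA can hide very different per-partner results:

```python
# Hypothetical per-partner spend and conversions; not real campaign data.
placements = {
    "partner_a": {"spend": 5_000.0, "conversions": 250},
    "partner_b": {"spend": 5_000.0, "conversions": 50},
    "partner_c": {"spend": 2_000.0, "conversions": 200},
}

# One blended CPA across the whole portfolio.
blended_cpa = sum(p["spend"] for p in placements.values()) / sum(
    p["conversions"] for p in placements.values()
)
print(f"Blended CPA: ${blended_cpa:.2f}")  # Blended CPA: $24.00

# The per-partner "shape" tells a different story.
for name, p in placements.items():
    print(f"{name}: CPA ${p['spend'] / p['conversions']:.2f}")
```

Here the blended $24 CPA masks a spread from $10 to $100 across partners; looking at the distribution, rather than the single consolidated number, is what lets you judge each tactic on its own merits.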

A look at incrementality

Incrementality testing compares marketing results between a test group and a control group, helping advertisers understand whether their KPIs are a direct result of their own campaigns or of external factors.

At Quantcast, we define incrementality testing as measuring how much a specific marketing event was influenced by a media channel or tactic (in this case, display) over a set time period and budget.

The main challenges here are publisher inventory bias, cookie churn, and inconsistent baselines.

  • Publisher inventory bias occurs when ad exchanges and publishers are selective about the inventory they will show on their sites, which affects creative performance unevenly.

  • Cookie churn problems arise when cookies cross over from the control group into the treatment group (and vice versa), which can drive measured lift toward zero by scrambling the causal signal.

  • Inconsistent baselines matter because your choice of control (or baseline) has a huge impact on your results. Some advertisers use non-viewable impressions as a control, but this introduces new behavior that can skew the results.

To address these issues, we recommend deploying adaptive block or allow lists to counter publisher inventory bias, experimenting on trackable traffic to mitigate cookie churn, running studies consistently across vendors to establish a level playing field with comparable baselines, and aligning your measurement and attribution criteria.

Finding digital measurement success with cohorts

Reaching and influencing audiences, cutting through the noise, and crafting a value proposition that can drive behavior is incredibly difficult. Reducing all of that to a single metric would be ideal, but is probably impossible, as metrics keep changing while approaches to digital advertising grow ever more diverse.

As discussed in “Finding Digital Measurement Success, Part 1: Cohorts vs. Clicks,” every metric you look at, every audience you try to reach, and every methodology you use needs to be evaluated as part of a cohort, weighing the pros and cons of the different approaches. These principles will help you learn continuously from the feedback loop and evolve your own measurement strategy, thereby improving your brand’s performance.

Sonal Patel is Managing Director for South East Asia at Quantcast.