Feasibility

Customers often want to know whether a campaign is "worth it" to measure. As such, we provide the following guidelines that outline where we have seen success in the past.

Please note: meeting these guidelines does not guarantee positive, statistically significant results. Likewise, these are not hard thresholds; if you do not meet them, it is still possible to observe significant results. They are simply meant to guide customers.

Guidelines

We label a campaign as feasible if both of the following guidelines are met.

Impressions

Campaigns should have at least 10 million impressions per exposure-side cut.

  • e.g. if you'd like to cut a campaign by market, each market should have over 10M impressions

If the estimated impressions are under 10 million, check with us: for example, a campaign with around 7M impressions and 50k conversion events per day could still be a good candidate.

Conversions

We have various levels for conversion-side events; a rough sketch of how the two guidelines combine follows the list.

  • 50k+ events per day (site visits, installs, etc.)
    • If there's an effect, we'll likely detect it and have a statistically significant result
  • 10k+ events per day (site visits, installs, etc.)
    • This is fairly standard; if the effect is within the normal bounds of what we usually see, we'll likely detect it and obtain statistically significant results
  • Under 10k events per day (site visits, installs, etc.)
    • We can measure this. In some cases the lift has been large enough that we've achieved statistical significance, but that's not the norm. More often we'd see directional results only.
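
As a rough illustration only, the two guidelines above can be combined into a simple check. This is a minimal sketch, not a product API; the function name, thresholds, and return labels are hypothetical.

    # Hypothetical sketch combining the impression and conversion guidelines above.
    def feasibility(impressions_per_cut: int, conversions_per_day: int) -> str:
        if impressions_per_cut < 10_000_000:
            # Below 10M impressions per cut: check with us. For example, ~7M
            # impressions with 50k+ daily conversion events could still work.
            return "check with us"
        if conversions_per_day >= 50_000:
            return "feasible: if there is an effect, we will likely detect it"
        if conversions_per_day >= 10_000:
            return "feasible: likely significant for typical effect sizes"
        return "measurable, but expect directional results"

    print(feasibility(impressions_per_cut=12_000_000, conversions_per_day=15_000))
    # -> feasible: likely significant for typical effect sizes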

Explanation

OOH attribution is based on sampling the population and running a statistical hypothesis test. To obtain meaningful insights, we therefore need enough observations.

While we can measure and report on campaigns of any size, with very small campaigns it is unlikely that we will be able to statistically prove that the campaign had an effect on the population (that is, on all users, including those we did not observe). For example, if we only observe 100 users in each of the control and exposed groups, we might observe a 15% lift in conversions, but because the sample is so small we cannot say it is statistically significant.
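
To make the 100-user example concrete, here is a minimal sketch of a pooled two-proportion z-test. The 20% vs. 23% conversion rates are assumed for illustration (they correspond to a 15% relative lift) and are not taken from any specific campaign.

    import math

    def two_proportion_p_value(conv_exposed, n_exposed, conv_control, n_control):
        # Two-sided p-value from a pooled two-proportion z-test.
        p_exposed = conv_exposed / n_exposed
        p_control = conv_control / n_control
        pooled = (conv_exposed + conv_control) / (n_exposed + n_control)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control))
        z = (p_exposed - p_control) / se
        return math.erfc(abs(z) / math.sqrt(2))

    # 100 users per group; 23 vs. 20 conversions is a 15% relative lift,
    # but the sample is far too small for the difference to be significant.
    print(two_proportion_p_value(23, 100, 20, 100))  # ~0.61, well above 0.05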

The statistical significance of the results we report typically depends on three things (see the sketch after this list):

  • the sample size (number of users in the exposed and control groups)
  • the number of successful conversions (or: the conversion rate in conjunction with the sample size)
  • the size of the effect (i.e. the magnitude of the net lift)
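
As a rough sketch of how these three factors interact, the standard two-proportion power approximation gives the smallest relative lift we could expect to detect at 95% confidence and 80% power. The 1% baseline conversion rate below is an assumption for illustration only.

    import math

    def minimum_detectable_lift(n_per_group, baseline_rate):
        # Approximate minimum detectable relative lift for a two-proportion test
        # at 95% confidence and 80% power (1.96 and 0.84 are the usual z-values).
        absolute_delta = (1.96 + 0.84) * math.sqrt(
            2 * baseline_rate * (1 - baseline_rate) / n_per_group
        )
        return absolute_delta / baseline_rate

    # With a 1% baseline conversion rate, larger samples can detect smaller lifts:
    for n in (10_000, 100_000, 1_000_000):
        print(f"{n:>9} users per group -> ~{minimum_detectable_lift(n, 0.01):.0%} lift needed")
    # ~39%, ~12%, and ~4% respectively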

This is a complicated balancing act, which makes it difficult to establish thresholds that we are certain will lead to significant results when met. For example, it is possible to observe a significant result with a small number of users if the lift is very large; it is also possible not to observe a significant lift even with enormous sample sizes. The latter is expected: if the media truly has no effect on conversion, then we should not detect significant differences between the exposed and control groups, even if we observed the entire population.