Publishers Need a Better Way to Measure Ad Viewability
When you take Statistics 101, one of the first things you learn is that some variables are “categorical,” some are “ordinal,” and some are “interval.” Categorical variables are the ones that don’t have any inherent arithmetic dimension to them (e.g., male/female, Democrat/Republican). Ordinal variables do have a more/less ordering, but the distances between their categories are not well defined (e.g., somewhat liberal, very conservative). Interval-level variables are those measured in clear-cut, unambiguous, evenly spaced units (e.g., age, height). As the student progresses through Stat 101, she discovers that this simple distinction among the three levels of measurement drives many of the choices about the kinds of statistical tools used to make sense of data.
So far, most of our discussion about the measurement and certification of ad viewability has treated the phenomenon as a categorical, indeed a binary, variable. The Media Rating Council (MRC) set minimum standards for crediting an ad as viewable: if a static banner ad has at least 50% of its pixels in view for at least one continuous second, it is deemed viewable; otherwise it is not. A video ad needs two continuous seconds to meet the minimum standard. Many advertisers and agencies objected, of course, that this minimum set the bar far too low to serve as a standard. Agency network GroupM famously said that it would accept only 100% viewability, meaning that all ads had to clear the MRC minimum to merit compensation, while media companies set about working with third-party measurement companies to figure out how to boost their viewability scores.
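To see how little the binary standard distinguishes among exposures, consider a minimal sketch of the rule in Python. The function name and inputs are illustrative assumptions, not any measurement vendor’s actual API.

```python
# A minimal sketch of the MRC minimum standard as a binary check.
# Function and parameter names are illustrative, not a vendor's API.

def is_viewable(pixels_in_view_pct: float,
                continuous_seconds_in_view: float,
                is_video: bool) -> bool:
    """Return True if an impression clears the MRC minimum standard.

    Display: >= 50% of pixels in view for >= 1 continuous second.
    Video:   >= 50% of pixels in view for >= 2 continuous seconds.
    """
    required_seconds = 2.0 if is_video else 1.0
    return (pixels_in_view_pct >= 50.0
            and continuous_seconds_in_view >= required_seconds)

# Under this standard, a 2-second, 51%-of-pixels exposure and a
# 30-second full-screen exposure both earn the same credit: viewable.
print(is_viewable(51.0, 2.0, is_video=True))    # True
print(is_viewable(100.0, 30.0, is_video=True))  # True
```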
But these viewability scores still reflect this binary thinking: what percentage of my ads meets the minimum standard? As such, they miss the important point that not all opportunities to see are the same. Some ad exposures are quick, while others linger. Some ads fill the entire screen, while others occupy just a fraction of it and compete with lots of other content. The viewer initiates some video ads, while others play in the background. Some have sound, others are muted. Some draw the viewer voluntarily, while others operate by stealth or intrusion to interrupt the viewer’s intended activity.
Thus it was a great relief to learn recently that Moat, one of the leading companies providing measurement of ad viewability, has introduced the Moat Video Score as a continuous rather than a binary variable. In effect, it picks up where the MRC’s minimum standard leaves off, giving both advertisers and media companies a much more nuanced view of the differences among ad exposures. The Moat Video Score is built from three simple components: 1) the percentage of the ad that is viewed (regardless of whether it is a 5-second, 15-second, or 30-second ad); 2) the percentage of the ad that is viewed with sound; and 3) the percentage of the screen real estate occupied by the ad while it is being viewed. These three raw measurements are combined into a simple 0-to-100 score that Moat clients can examine for individual creative units, or roll up into such standard cuts as campaign, placement, or domain. At least initially, Moat is giving equal weighting to the visibility and audibility variables, though that might change as it learns more about which of them best drives ad recall, brand recall, purchase intent, or action.
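Moat has not disclosed its exact formula, so the sketch below is only a guess at how three percentage components might combine into a single 0-to-100 score. Consistent with the equal weighting mentioned above, a simple equal-weighted average is assumed; the function name and inputs are illustrative, not Moat’s actual method.

```python
# A hypothetical sketch of combining three 0-100 percentage components
# into a single 0-100 score. Moat has not published its formula; the
# equal-weighted average here is an assumption for illustration only.

def video_score(pct_duration_viewed: float,
                pct_viewed_with_sound: float,
                pct_screen_occupied: float) -> float:
    """Average three 0-100 components into one 0-100 score."""
    components = (pct_duration_viewed,
                  pct_viewed_with_sound,
                  pct_screen_occupied)
    return sum(components) / len(components)

# A fully watched, sound-on, full-screen exposure scores 100, while a
# half-watched, muted, quarter-screen exposure lands far lower, even
# though the binary standard might credit both identically.
print(video_score(100.0, 100.0, 100.0))  # 100.0
print(video_score(50.0, 0.0, 25.0))      # 25.0
```

The value of a continuous score shows up in those two example calls: exposures that clear the same binary threshold can land at very different points on the scale.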
Moat has not yet published a white paper spelling out what it is learning about the validity of the Moat Video Score, that is, its ability to predict the outcomes that advertisers seek. However, the company says that one is in the works and will be published soon. And of course, once Moat clients have a chance to play with the measure, they will surely subject it to the kinds of analyses that bring either wide acceptance or further tuning. Judging from the press release announcing the arrival of the Moat Video Score, the early adopter list includes Unilever, Bank of America, GroupM, Fox, Condé Nast, Snapchat, and NBC Universal.
Publishers have long argued that the quality of ad exposures matters as much as, if not more than, the quantity of exposures. This argument often pointed to the “halo effects” conferred by better editorial environments, or to preferred positions and placements. And MRI’s syndicated magazine measurement has long included qualitative comparisons of how likely readers of print editions are to see an ad, based on measures of how thoroughly readers go through their copies. So it is ironic that the measurement of digital ad viewability has heretofore treated ad exposures as a zero/one game. Moat’s new video score is an important conceptual step forward: it shifts the conversation from the MRC minimum standard to the much more interesting and nuanced questions of how widely exposure quality varies and how those variations affect advertising outcomes.