Recognize, Monitor and Tolerate: Discrepancy and Header-Bidding (Tagless Ad Tech)

When I first wrote about ‘tagless’ ad tech (also passionately referred to as header-bidding or pre-bidding) back in late December, my aim was to share some of what I’d learned about the technology as well as gain a better understanding of it myself.

My colleague also wrote a post (here and here) that details the more technical side of this setup.

Over the past nine months we’ve integrated with five partners across three websites and have tested a variety of serving types. We’ve built, and have had built for us, thousands of line items, and we’ve dealt with a number of surprises (both positive and negative) along the way.

Before going further with this post, I want to be very clear about this: My overall view on this technology is that it’s absolutely beneficial for the publisher and something everyone should explore. At its core, it gives enabled partners the ability to compete dynamically for inventory they otherwise wouldn’t have an opportunity to buy. Put simply this helps level the playing field, creating a much more unified auction for a publisher’s programmatic inventory. If you’re interested in maximizing the value of your unsold inventory (which I’m guessing you are since you’re reading this), this is a technology worth committing resources to.

Now that my support has been clearly established, I’d like to switch gears and examine an issue we experienced on our properties as a result of this technology and offer some solutions for identifying, monitoring and potentially solving it.

By understanding what to look for and how to collect and analyze the appropriate data, you’ll be better able to monitor performance and the impact of this tech.

Across our websites we collect a variety of data points daily. We do this not only to track things like ad partner performance, but also to follow important metrics like revenue per session and pageview and impression growth. It’s a bit overwhelming, but committing to collecting and reviewing data is ultimately the only way to accurately measure whether the changes we’re making are having a positive impact on our performance.

Trouble afoot: impression and pageview growth

On two of our three sites, we began noticing a trend where pageview growth vs. the previous year was outpacing impression growth vs. the previous year, sometimes by as much as 20%.

Assuming there were no changes to the number of ads served per page, impression and pageview growth should be a 1:1 measurement.

At first we thought this might be a result of an increased percentage of users running ad blocking software. While it’s certainly possible that would skew this measurement, that didn’t align with the ad blocking data we were also collecting.

We then took a look at the number of impressions we were serving per pageview. If you’re running three ad placements per page, ideally you’d like the number of impressions served per pageview to be as close to three as possible. In our case, impressions per pageview were also falling well short of the number of ads appearing on the page. Something was up.

In order to establish some sort of baseline, we ran the same measurements for the same day (not date) of the previous year. In both cases the prior-year data was dramatically better: impression growth trailed pageview growth by less than a percent, and average impressions per pageview were considerably higher.
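Comparing against the same day rather than the same date means offsetting by 364 days (exactly 52 weeks) so weekdays line up, a Tuesday against a Tuesday. A minimal sketch of that alignment; the helper name is my own invention:

```javascript
// Same weekday one year earlier: subtract 364 days (52 whole weeks),
// not 365, so the day-of-week matches for year-over-year comparisons.
function sameDayLastYear(date) {
  const d = new Date(date.getTime());
  d.setDate(d.getDate() - 364);
  return d;
}
```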

While our ad setup and partners shift often, the main difference between our 2015 and 2014 solutions was the addition of header-bidding technology. At this point it seemed clear that the addition of these partners in their current state was creating some sort of latency issue.

What was really odd however is this wasn’t always the case. We had been running the same four partners for months and this metric had only recently ballooned. Had something changed? Also, why was this happening on two of three properties but not the third? They were different sites but they all worked with the same partners.

Finally, how could we be certain this was caused by one of our header-bidding partners? What if something completely non-ad-related had been deployed that was delaying page load? That was entirely possible, right?




Dealing with it:

In an effort to control what was easiest to control we elected to comment out all header-bidding partners from our properties. If they were the reason for the issue we should see the data normalize quickly.

Within 24 hours things improved. Impression and pageview growth once again aligned (impressions still trailed pageviews but now by less than 3% on average) and the impressions per pageview number was back to normal.

Over the next few days we decided to leave the header-bidding partners off the site to establish a non-header baseline for our current setup.


While the data had returned to normal, we also saw CPM rates decrease significantly with this tech stripped from our properties. To me this was the most interesting, and validating, part of this experience. While it was clear that somehow our header-bidding setup was causing pretty significant latency, it was also clear that the impact the technology has on our CPM rates and overall revenue was significant. Therefore, striking the right balance between implementation and latency is going to be the key to making this work.

Next steps:

We plan on going into much more specific detail in another post (specifically detailing the nuances of how we got it to work and why multiple partner harmony is complex and challenging), but put simply our strategy is to switch 100% of our header-bidding partners to an async setup in an attempt to level the KVP/serving-time playing field.

We’ll also be adding a script that monitors how long each enabled partner takes to return a bid (as well as whether they actually return a bid at all), plus a timeout function after which GPT is called to begin ad serving. This is beneficial because we’ll have more insight into how long a partner takes to submit a bid, when they don’t submit a bid at all, and direct control over when GPT is called to avoid extended latency.
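The shape of that monitoring script looks roughly like the sketch below. The partner objects, `requestBid()`, and `startGpt` are stand-ins for whatever your partner wrappers and GPT bootstrap look like, not our production code:

```javascript
// Time each partner's bid request; a failed or empty response records bid: null.
function timeBid(partner) {
  const start = Date.now();
  return partner.requestBid()
    .then(bid => ({ name: partner.name, ms: Date.now() - start, bid }))
    .catch(() => ({ name: partner.name, ms: Date.now() - start, bid: null }));
}

function runAuction(partners, timeoutMs, startGpt) {
  const received = [];                       // bids that came back in time
  const all = Promise.all(partners.map(p => timeBid(p).then(r => received.push(r))));
  const timer = new Promise(resolve => setTimeout(resolve, timeoutMs));
  // Whichever settles first (every bid back, or the timeout) triggers GPT.
  return Promise.race([all, timer]).then(() => {
    startGpt(received);                      // hand GPT whatever arrived in time
    return received;
  });
}
```

The timing data accumulated in `received` is exactly what you’d log and share with partners when discussing bid speed and the right timeout.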

This being ad tech, we have no illusions of a perfect setup, but these simple data points collected over time will be valuable to share with our partners so we can work together on speeding up the bidding process and settling on the appropriate timeout before GPT is called.

Tracking header-bidding data:

As referenced above we use a variety of simple Excel formulas (percentage change and absolute value) for monitoring discrepancy on our sites.

If you’re not already measuring these data points they are pretty easy to add to your daily routine and will help keep you informed on header-bidding (or other partner type) performance/changes.

  • Impression growth of current year vs. previous year
  • Pageview growth of current year vs. previous year
  • The difference between impression growth of current year and pageview growth of current year
  • Impressions served per pageview
  • rCPM growth of current year vs. previous year
  • Revenue per 1,000 user sessions growth vs. previous year

Reporting metric/Excel formula:

  • Impression growth of current year vs. previous year | =((year two total impressions - year one total impressions) / year one total impressions)*100
  • Pageview growth of current year vs. previous year | Same formula as above, using pageview data
  • Difference between impression growth and pageview growth of current year | =ABS(Impression growth % - Pageview growth %)
  • Impressions served per pageview | =Daily total impressions/Daily total pageviews
  • rCPM difference vs. previous year | =Current year daily rCPM - Previous year daily rCPM
  • Revenue per 1,000 user sessions growth vs. previous year | Same formula as impression growth, using rev per 1,000 user session data
  • Rev per 1,000 user sessions | =Daily ad revenue total/(Daily session total/1000)
  • When measuring year to year remember to align days and not dates
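The same formulas drop straight into code if you’d rather script the daily pull than maintain a spreadsheet. A minimal sketch (the sample numbers in the test are made up):

```javascript
// Year-over-year growth (impressions, pageviews, etc.) as a percentage.
function growthPct(current, previous) {
  return ((current - previous) / previous) * 100;
}

// Gap between impression growth and pageview growth: the discrepancy signal.
function growthGap(impGrowthPct, pvGrowthPct) {
  return Math.abs(impGrowthPct - pvGrowthPct);
}

// Impressions actually served per pageview; compare to ads-per-page.
function impsPerPageview(impressions, pageviews) {
  return impressions / pageviews;
}

// Revenue per 1,000 user sessions.
function revPerMilleSessions(revenue, sessions) {
  return revenue / (sessions / 1000);
}
```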

It’s important for publishers to understand and be OK with the fact that adding header-bidding solutions to your page will absolutely increase latency and cause discrepancy data to rise. The point of this post isn’t to condemn this tech as a result of that; it’s to better inform publishers on how to monitor and prepare for it.

A quick note about discrepancy:

Equally important as monitoring discrepancy is understanding that it’s a reality of the tech. Being aware of your site’s baseline both without header-bidding enabled and after enabling it is crucial.

The balance between CPM and discrepancy:

It’s also important to remember that increased discrepancy should be tolerated if it comes with an increased eCPM/rCPM and isn’t hurting your UX.

Again, this is all about balance; finding the appropriate range between an increased CPM rate and an increased discrepancy rate is going to be the key to this stuff not driving you insane.

Earlier I referenced the fact that we saw impression growth trail pageview growth by as much as 20%. Obviously that’s not acceptable, but even on days when we aren’t having any serving issues we’ll still see this number in the 3-5% range.

Ultimately each publisher will need to decide what they are OK with and closely monitor the increase in revenue growth and revenue per session growth to make informed decisions.