July 25, 2024

We Audited a Real $50K/Month Display Campaign. Here's What We Found.

Most Google Ads practitioners focus on the inputs they can clearly control: targeting settings, bid strategy, ad creative, landing page experience. The placement report — the list of every site your Display ads actually ran on — gets checked occasionally, if at all.

Here's what a thorough audit of a real $50,000/month Google Display campaign revealed when we went through it systematically.

The Setup

The campaign had been running for 14 months with auto-managed placements and a Target CPA bid strategy. The account manager checked performance at the campaign level and was reasonably satisfied: cost per conversion was within target, overall ROAS was positive.

The placement report had 847 active domains.

What the Report Contained

Breaking down the 847 placements by quality:

Clean placements (legitimate publishers, real audiences): About 420 domains. These accounted for most of the meaningful conversions. Average session duration on post-click landing pages was 2:40. Conversion rate was 1.8%.

Marginal placements (real but low-relevance sites): About 280 domains. Some conversions, mostly thin engagement. These sites have real audiences, but those audiences don't remotely match the campaign's target customer.

Flagged placements (made-for-advertising or "MFA" sites, content farms, high ad density): 147 domains, or 17% of all active placements. These 147 sites consumed approximately 23% of the budget that month. Conversions from these placements: zero over 90 days. Average post-click bounce rate: 91%.

Twenty-three percent of a $50,000/month budget is $11,500. Every month. Going to sites that generated no conversions in three months of data.
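For anyone who wants to run the same triage on their own account, here's a minimal sketch in Python. The CSV filename, column names, and thresholds are assumptions modeled on this audit, not a standard export schema; adjust them to match your own placement report.

```python
import pandas as pd

# Hypothetical 90-day placement report export. Assumed columns:
# placement, cost, clicks, conversions, bounce_rate, avg_session_duration
df = pd.read_csv("placement_report_90d.csv")

def triage(row):
    # Thresholds mirror this audit's findings (assumptions, not fixed rules):
    # zero conversions plus ~90% bounce over 90 days flags a domain;
    # real conversions with tolerable bounce reads as clean.
    if row["conversions"] == 0 and row["bounce_rate"] >= 0.90:
        return "flagged"
    if row["conversions"] > 0 and row["bounce_rate"] < 0.70:
        return "clean"
    return "marginal"

df["bucket"] = df.apply(triage, axis=1)

# The number that matters: share of spend per bucket. In this account,
# "flagged" worked out to roughly 23% of the monthly budget.
spend_share = df.groupby("bucket")["cost"].sum() / df["cost"].sum()
print(spend_share.round(3))

# Save the triaged report for the exclusion step later.
df.to_csv("placement_report_triaged.csv", index=False)
```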

Why Smart Bidding Didn't Fix This

The account was running Target CPA with three years of conversion data. This is the scenario where automated bidding is supposed to work well.

Smart Bidding optimizes for conversion likelihood given user signals. It can reduce bids on users it predicts won't convert. What it can't easily do is suppress entire domains where the traffic looks like it might convert but consistently doesn't.

The MFA sites in this campaign had traffic patterns that resembled legitimate sites — real users, reasonable session lengths before the ad click, no obvious bot signals. Smart Bidding had no strong reason to reduce bids on them. The conversion failure only showed up in aggregate, not in any individual auction signal.
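To see how conclusive the aggregate is, run the numbers. The click count below is an illustrative assumption, not a figure from the audited account; the conversion rate is the clean-placement rate from the report.

```python
# If a flagged domain's clicks really converted at anything like the clean
# placements' rate, 90 days of zero conversions would be essentially
# impossible. Illustrative inputs (assumed, not from this account):
p = 0.018  # clean-placement conversion rate from the report
n = 2000   # clicks a flagged domain might accumulate in 90 days

# Probability of seeing zero conversions by chance if the true rate were p
prob_zero = (1 - p) ** n
print(f"P(0 conversions in {n} clicks at {p:.1%} CVR) = {prob_zero:.2e}")
# ~1.7e-16: invisible in any single auction, unambiguous in aggregate
```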

The Fix

We extracted the 147 flagged placements and added them to an account-level exclusion list. We also added the top 40 marginal placements where spend was high and engagement was consistently near-zero.
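As a sketch of what that extraction can look like in practice, continuing from the triage script above (the bounce cutoff and top-40 cap are this audit's choices, not fixed rules):

```python
import pandas as pd

# Picks up the triaged report saved earlier (hypothetical filename).
df = pd.read_csv("placement_report_triaged.csv")

# Every flagged domain goes straight onto the list.
flagged = df[df["bucket"] == "flagged"]["placement"]

# Plus the worst marginals: highest spend with near-zero engagement.
marginal = df[df["bucket"] == "marginal"]
top_marginal = (
    marginal[marginal["bounce_rate"] >= 0.85]
    .nlargest(40, "cost")["placement"]
)

# One domain per line, ready to paste into an account-level
# placement exclusion list.
exclusions = pd.concat([flagged, top_marginal]).drop_duplicates()
exclusions.to_csv("placement_exclusions.txt", index=False, header=False)
print(f"{len(exclusions)} domains queued for exclusion")
```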

In the following 60 days: budget redistributed to cleaner placements, overall ROAS improved 31%, cost per conversion dropped 18%. Campaign volume didn't decrease; the same budget went further because it was concentrated on placements with actual engagement.

The Broader Point

This isn't an unusual account. The pattern — a technically well-managed campaign leaking meaningful spend to placements that are measurably worthless — is common.

The placement report is one of the most underused levers in display advertising. Looking at it seriously, once a month, and building an exclusion list from what you find is one of the highest-ROI activities available to any account manager running an active Display campaign.

The budget is there. The data is there. The work is just looking at it.