Tuesday, 12 July 2011

The PPC Test You Shouldn’t Run

PPC has always been a fast-moving channel. The search engines (and lately, social channels) are continually innovating, adding great new products and features for PPC advertisers to try. In the past month alone, there have been Search Engine Watch columns about inventory feeds, interest category targeting, sitelinks, and remarketing.

And this is just Google. Microsoft adCenter, Facebook, LinkedIn, and second-tier PPC engines are adding features all the time, too.

The really great thing about PPC is that nearly everything can be tested. In addition to ad copy and keywords, you can test all of the features listed above, and more. Landing pages, ad formats (for display), ad extensions – all of these can be tested to learn what drives the best results for advertisers.

Testing is great. I love PPC testing. I’ve even called out “not testing” as a common campaign mistake. But there’s one PPC test you should never run.

Testing Too Many Things at Once
Even experienced PPC managers can be tempted to try all the latest and greatest new features right away. It’s like a PPC dessert table full of delectable choices – you can’t decide, so you just grab one of each.

In a way, this makes sense. You never know what can end up being the game-changer for your PPC campaign – that’s why we test, after all. But if you’re testing too many things at once, you’ll never know which thing (or things) really changed the game for you.

Combining Search & Display Campaigns
One common mistake novice PPC managers make is to combine search and display in a single campaign. The two networks rarely perform the same way and require different optimization strategies.

Furthermore, at least at a high level, campaign stats are muddied when both search and display are factored in. You may assume your campaign’s click-through rate (CTR) has tanked, when really, low CTRs are normal for display campaigns – and that data is skewing the averages. An ad group that does well in search may not do well at all in display, and vice versa.
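
To see how display volume drags a blended average down, here’s a minimal sketch with made-up numbers (the impression and click counts are hypothetical, chosen purely for illustration):

```python
# Hypothetical figures: a healthy search campaign plus a typical display campaign.
search_impressions, search_clicks = 10_000, 300        # 3.0% CTR on search
display_impressions, display_clicks = 500_000, 1_000   # 0.2% CTR on display

search_ctr = search_clicks / search_impressions
display_ctr = display_clicks / display_impressions

# When both networks share one campaign, the reported "campaign CTR" is an
# impression-weighted blend -- and display impressions dominate the weighting.
blended_ctr = (search_clicks + display_clicks) / (search_impressions + display_impressions)

print(f"Search CTR:  {search_ctr:.2%}")    # 3.00%
print(f"Display CTR: {display_ctr:.2%}")   # 0.20%
print(f"Blended CTR: {blended_ctr:.2%}")   # ~0.25% -- looks like search has tanked
```

Split the networks into separate campaigns and each one reports its own, meaningful CTR.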

Don’t combine the two – you won’t know what’s working!

Sitelinks & Ad Copy Changes
While most seasoned PPC managers know better than to mix search and display, there are other more subtle ways that combining too many tests can backfire.

Let’s say that you started some new ad copy tests on the first of the month, and then on the 7th of the month you added sitelinks to your campaign. On the 15th, you look at your ad test data – and it’s all over the place.

Your CTR is way up, but conversion rates are down. Why? Is it because your new ad copy sucks? Or is it because sitelinks don’t convert?

And sure, you can tag your sitelinks and track them separately – but it’s nearly impossible to drill down into the effect of sitelinks on individual ad variations. Testing ad copy on top of sitelinks is a test you don’t want to run!
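
To see why that drill-down gets so murky, here’s a rough sketch with invented click and conversion figures: two ad variations running in the same window that sitelinks were added, so the traffic splits four ways and every cell ends up too thin to trust.

```python
import math

# Invented figures, purely for illustration.
cells = {
    ("ad_A", "no_sitelinks"):   {"clicks": 210, "conversions": 11},
    ("ad_A", "with_sitelinks"): {"clicks": 190, "conversions": 8},
    ("ad_B", "no_sitelinks"):   {"clicks": 205, "conversions": 9},
    ("ad_B", "with_sitelinks"): {"clicks": 195, "conversions": 12},
}

for (ad, sitelinks), d in cells.items():
    cvr = d["conversions"] / d["clicks"]
    # Rough binomial standard error on each cell's conversion rate.
    se = math.sqrt(cvr * (1 - cvr) / d["clicks"])
    print(f"{ad} / {sitelinks}: {cvr:.1%} CVR (+/- {se:.1%})")

# Each cell's error bar is well over a percentage point wide -- roughly as big
# as the differences between cells -- so the ad copy question and the sitelinks
# question are confounded, and neither gets a clean answer.
```

Run each test on its own schedule and the same traffic only has to answer one question at a time.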

Google Tests
Even Google’s own testing can mess up your tests. When Google started testing long headlines in the top ad spots, many of us saw our performance stats swing wildly. While the long headlines ultimately appear to have had a positive impact, ad tests run during Google’s experiment with this feature were, again, all over the place.

So if you’re running a test to figure out which ad gets the best CTR, and Google starts playing around with ad display formats in the middle of your test, your results are going to be wonky and possibly even invalid.

While you can’t control Google’s testing, you absolutely can control your own. Make sure to use a systematic approach to PPC testing: don’t test too many things at once, and measure, measure, measure! Don’t get impatient – as they say, patience is a virtue – especially in ad testing.
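
If it helps, even a crude test log can enforce that discipline. Here’s a minimal sketch (the test names and dates are made up) that flags any two changes scheduled to run at the same time:

```python
from datetime import date

# Hypothetical test calendar: one change at a time, each with a clear window.
test_log = [
    {"name": "ad copy test: benefit vs. price headline",
     "start": date(2011, 7, 1), "end": date(2011, 7, 14)},
    {"name": "add sitelinks to brand campaign",
     "start": date(2011, 7, 15), "end": date(2011, 7, 28)},
]

def overlaps(a, b):
    """True if two scheduled tests overlap in time."""
    return a["start"] <= b["end"] and b["start"] <= a["end"]

for i, a in enumerate(test_log):
    for b in test_log[i + 1:]:
        if overlaps(a, b):
            print(f"WARNING: '{a['name']}' overlaps with '{b['name']}'")
```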
