A/B testing, gamification and why we're all basically screwed

By Den Howlett, May 24, 2014
A/B testing and gamification are two techniques claimed to deliver value for marketers. Do we know enough to make them work well for us?

As we've been tweaking and fixing the inevitable bugs, flaws and gotchas here at Diginomica Towers, a couple of things swung into view that we've never really thought about. One in particular is causing me great personal stress. Have you noticed it yet? I'll tell you at the end but it kinda goes like this:

There is one thing we don't really think too much about when assembling stories but which our UI/UX engineer thinks is super important. In fact it is so important he wrote us a long email on why it matters. Still not figured it out? OK. Here's a clue - it's a sorta social thing that could lead to minor awkwardness and he's suggested we do a bit of A/B testing around it. So we are...errr...will.

Still not got it?

The A/B testing problem

This got me thinking about A/B testing, since it is one of the marketing techniques du jour. Being a contrarian type, I Googled "A/B testing is useless." I won't share the link as results may vary, but I was surprised at just how many dumb-assed articles there are out there on this topic. I say dumb-assed because so many of those I came across try to mix science with hunches, pseudo-science and other nonsense, and I am surprised anyone gets anything useful out of the rubbish that's out there. There is the odd interesting piece but, despite claimed success, I do wonder about repeatability.

The biggest difficulties with A/B testing come in four pieces:

Why are we doing it? Getting those objectives figured out is really hard. Do I want more traffic? More clicks? Some Techmeme love? Feedback? Clicks on something else? A combination of a few of these things? What about repeatability in testing for a specific outcome?

Having any clue as to what a viable A/B test might look like. In today's case it is very simple - still not got it? Read on. In the meantime, take the example of writing a headline. I'm a bit obsessive about headlines even though I know I can't write them as well as others. What I do know is that I can create link and click bait really easily. Some people thought a recent headline of mine was an example, whereas I knew I'd thought long and hard about it and was NOT looking for click bait. I could have written an entirely different headline. On the other hand, the topic of that article is something I know interests a lot of people. Did more people read and comment because of the headline? I have no real way of knowing, because I didn't A/B test it. (Hint: that might change in the future.)

Understanding the stats. Most marketers I come across are clueless on stats, but then I can't blame them because we are surrounded by lies, damned lies and statistics. We're bombarded with stats dressed up as pretty infographics, and yet so few people I meet have any real clue about the underlying theory that determines whether a particular set of 'facts' amounts to valuable statistics rather than useless ones. In A/B testing terms, the math tells us we need a lot of samples to get a viable set from which to draw even tentative conclusions. Yet so few people actually collect that many. Here is a very good primer on the topic (PDF) that will get your head spinning.

Understanding the outcome. The best outcome for most marketers is something like: improve the likelihood of buying more shit. I totally get that, but is that what really happens? Here's an idea. Despite some evidence to suggest that webinars are fading as a marketing tool, most enterprise vendors I know still set a lot of store by them. Most vendors reflexively believe that offering a replay is a net good. This experiment suggests the complete opposite when the goal is to get sign-ups to attend. I was, however, left wondering what would happen if a recording was made available at a later time. They don't say.
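To put some numbers on the "you need a lot of testing" point above, here is a minimal sketch using the standard normal-approximation formula for comparing two conversion rates. The function name and the example rates are my own, not from any particular tool:

```python
import math

def sample_size_per_variant(p_base, p_variant):
    """Approximate visitors needed in EACH arm of an A/B test to detect
    the difference between two conversion rates (normal approximation,
    two-sided alpha = 0.05, power = 0.8)."""
    # z-values hardcoded for those defaults to keep this stdlib-only;
    # scipy.stats.norm.ppf would compute them for other settings.
    z_alpha, z_beta = 1.959964, 0.841621
    p_bar = (p_base + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_variant - p_base) ** 2)

# Detecting a lift from a 5% to a 6% conversion rate takes roughly
# 8,000 visitors per arm - more traffic than many pages ever see.
print(sample_size_per_variant(0.05, 0.06))
```

Note how quickly the requirement grows as the effect you are hunting gets smaller: halving the lift you want to detect roughly quadruples the traffic you need, which is exactly why so many casual A/B tests never reach a viable sample.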

The gamification problem

Some analysts I know think gamification is one of the most powerful ways to achieve all sorts of magical outcomes. I am less certain.

While I have a great deal of respect for companies like Lithium, I get the uneasy sense that too many marketers have been caught up in the term rather than the actualité of devising good techniques that make sense and which don't fool the intended audience. Some think gamification is bullshit. Personally, I find that people dislike the idea of being 'gamed' in any sense of the word. When they get wind this is happening, people I know often turn incredibly hostile.

Proponents of gamification say it doesn't matter if the people being gamed don't know it. That's a rathole I'd rather not go down. In a past life, I spent three years studying and critiquing the outcomes of social psychology experiments that were set up as what we'd now call gamified experiments but which had horrible outcomes. I'm not suggesting that today's gamification techniques could go the same way, but you see the inherent dangers. Here's an apparently innocuous example.

In the early 2000s, vendors of enhanced yoghurts made all sorts of claims about the health benefits of 'probiotics' for the digestive system. Sales rocketed as smart marketers devised ever more ingenious ways of subtly suggesting that we all really needed 'probiotic' yoghurt. In 2012, the EU banned references to 'probiotic' in advertising and labeling because it took the view that such health benefits were not proven. That doesn't stop the marketers. We were gamed and didn't know it.

Of course in the A/B testing arena, gamification is...err...the name of the game and we've already discovered that without a lot of work, that technique itself might be open to question.

We're all screwed

In the last 10 years, we've been brought up on an ever-growing list of techniques that are supposed to improve marketing results. We now have a broad array of tools from which to choose that measure 'stuff.' More than ever before, marketers are under the spotlight to deliver against measurable results, and yet, as I have outlined, the techniques at our disposal may - or more likely may not - lead to optimal outcomes.

Just knowing what to measure, and how, is remarkably complex. In our own little world, we struggle with this every day. Why did article X apparently do so well in tweets, Facebook likes and so on? What analysis can we realistically apply that gives us meaningful information upon which to act? We think we know, and we use techniques to validate our internal points of discussion.

But at the end of the day I keep coming back to the same basic problem. In relying upon statistical evidence, we run the great danger of being fooled into believing that the desirable outcomes we wish for can be directly attributed to one or two distinct factors. That may be true in some cases but not universally so.
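That danger of being fooled is easy to demonstrate. Here is a minimal sketch (the function names and parameters are my own) that runs many A/A tests - both variants identical, so there is nothing to find - and counts how often a naive significance check still declares a 'winner':

```python
import math
import random

def looks_significant(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates,
    flagging 'significance' at the conventional alpha = 0.05."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    return abs(p_a - p_b) / se > 1.96

random.seed(42)
trials, n, p = 1000, 1000, 0.05  # both arms convert at 5%: pure A/A
false_wins = 0
for _ in range(trials):
    conv_a = sum(random.random() < p for _ in range(n))
    conv_b = sum(random.random() < p for _ in range(n))
    if looks_significant(conv_a, n, conv_b, n):
        false_wins += 1

print(f"{false_wins} of {trials} A/A tests declared a 'winner'")
```

By construction, roughly one run in twenty will cross the significance threshold by pure chance. Anyone who runs enough tests, or enough metrics per test, will eventually find a 'distinct factor' that explains nothing at all.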

The good news is that we get to continue experimenting. We get to continue trying new things. We get to recognize that while outcomes are the great measuring stick against which we are all judged, the real outcome is something quite different. It's the learnings and learning processes that really matter. Because without that effort, we're all basically screwed.

Endnote: did you figure out the thing I did this time that is different to what I usually do? I put the top image on the right hand side of the page and not the left. Here's why - it doesn't get in the way of your reading the text as much - allegedly. But only if you're viewing this on desktop :-)

Bonus points: here's one for the SEO types.

Image credit: Plan A-B © Torbz - Fotolia.com

Featured image credit: jwinfred, Flickr Creative Commons