Jacques Mattheij

Technology, Coding and Business

A/B Testing: make sure you are optimizing for the right variable, profit

A/B testing is trying several ways of doing the same thing side-by-side in order to optimize some variable that you measure during the testing period.

For instance, you could try two forms side-by-side, and this will tell you which one of the two ‘converts’ better.

Now almost everybody assumes that when you optimize all these variables you will find the global maximum for your business, but it is extremely easy to throw the baby out with the bathwater when doing short-term tests like this.

One possible scenario, for instance (in the case of a subscription service), is that you are optimizing for conversion of a segment of your audience that is not going to stick around long enough to make you money. Assuming that cost of acquisition is roughly the same, if retention isn't also the same then an increase in conversion rate could very well turn into a reduction in turnover a month or two from now, and eventually a decrease in profits.

You can’t really make sweeping statements about the effect of your tweaks until the customers that you signed up during your A/B test have gone through a significant part of their life-cycle, and only then are you in a position to really evaluate the result of your test.

If your test was successful then your net profits increase, otherwise they’ll be the same or will even decrease.

So be careful when you do your A/B tests: track the users that you signed up via the different paths over the longer term. Even if there seems to be a short-term improvement, you have to keep the longer view as well to make sure that you are optimizing the right variable, which is not 'conversion rate' or 'clickthrough' but profit.

That assumes that you are in it for the money, and not for eyeballs or charity, so this may not apply to you. But if you're running a business that is traditional in the sense that your first motivation for running it is monetary, then make sure that you keep focus and do not get lost optimizing the wrong variables.

A numerical example:

Form ‘A’ converts 1.8% of the viewers to a trial account.

Form ‘B’ converts 1.5% of the viewers to a trial account.

Easy right? Pick form ‘A’ and you should increase your profits.

But that's a picture that oversimplifies reality. For instance, the wording on the 'B' page may make it clearer what it is that the product does, so fewer people are initially attracted by it (because more of them realize they don't need or want it).

But the segment that did sign up is not the same segment as the other; in other words, the overlap between the two sample populations is poor, and you are effectively fishing in a different bucket. And these people apparently feel they made the right decision in becoming your customers, because retention is up from 4 to 6 months.

If 1.8% translates to 18 users retained for 4 months, and 1.5% translates to 15 users retained for 6 months, then at a price-point of $20 per month you will make 4 × 20 × 18 = $1440 in the first case and 6 × 20 × 15 = $1800 in the second.
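The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original post: the function name and the assumed 1000 visitors per variant are ours; the conversion rates, $20/month price-point, and 4- vs 6-month retention figures come from the example.

```python
def variant_revenue(visitors, conversion_rate, monthly_price, retention_months):
    """Total revenue from one cohort: signups x monthly price x months retained."""
    signups = round(visitors * conversion_rate)
    return signups * monthly_price * retention_months

# Assume 1000 visitors saw each form (hypothetical sample size).
revenue_a = variant_revenue(1000, 0.018, 20, 4)  # form 'A': higher conversion, shorter retention
revenue_b = variant_revenue(1000, 0.015, 20, 6)  # form 'B': lower conversion, longer retention

print(revenue_a)  # 1440
print(revenue_b)  # 1800
```

Despite converting 0.3 points worse, form 'B' comes out $360 ahead once retention is factored in, which is exactly the point of tracking the whole life-cycle rather than the conversion rate alone.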

So a lower conversion rate is not always bad. A/B testing without long-term tracking of your customers' life cycle is not the right way to go about it; you really need both. You need A/B testing to identify places where significant differences can be made in the design of your product, and you need long-term tracking to make sure that those differences translate into actual gains.
