In this chapter, we will discuss how to measure results and simplify the statistics of tests run on a website.
Understanding Statistics
Researchers may or may not be comfortable with statistics, but an A/B testing tool proves to be a savior by simplifying those statistics, so much of the manual calculation can be avoided. Most testing tools consistently use a 95% confidence level as the criterion for declaring a goal completion successful.
That means that if the experiment were repeated, 19 times out of 20 the observed result would hold. Let us take an example. Your testing tool report is as follows −
| Variations | Conversion rate |
|---|---|
| Control page | 1.91% |
| 1 | 2.39% |
| 2 | 2.16% |
| 3 | 3.10% |
This report puts the control page's conversion rate of 1.91% within a margin of error of roughly ±0.15 percentage points at the 95% confidence level. Statistically, the true rate lies between 1.76% and 2.06%.
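As a rough sketch of the statistics such a tool runs for you, the snippet below computes a 95% confidence interval for the control page's rate and a two-proportion z-test of the control against variation 3. The visitor counts (about 32,000 visits for the interval, 10,000 visits per arm for the z-test) are hypothetical assumptions chosen only to illustrate the numbers above; your tool will use your actual traffic.

```python
import math

Z_95 = 1.96  # z critical value for a 95% confidence level

def conf_interval(p, n):
    """Normal-approximation 95% CI for a conversion rate p seen over n visits."""
    margin = Z_95 * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for conversions over visits in two arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control page: 1.91% over a hypothetical ~32,000 visits
lo, hi = conf_interval(0.0191, 32_000)
print(f"control 95% CI: {lo:.2%} to {hi:.2%}")  # about 1.76% to 2.06%

# Control (1.91%) vs variation 3 (3.10%), hypothetical 10,000 visits each
z = z_test(191, 10_000, 310, 10_000)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > Z_95}")
```

Real tools often use more refined methods (Wilson intervals, sequential testing), but the 1.96 cutoff is what the 95% criterion refers to.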
Fetching Insights
While planning a test, I keep two goals in mind: the first is boosting revenue, and the other is gathering insights into what drove the higher ROI.
For instance, in one case study, we wanted to learn whether diverting traffic to the product page, rather than to the category page or homepage, improves conversion rates. We took three variations: in the first, we directed traffic to the homepage, loaded with categories and subcategories leading on to the product page; in the second, we directed traffic to the category page with filters added; in the third, we sent traffic directly to the product detail page with its buy button.
To my surprise, the third variation won. The product detail page holds exactly the information a buyer needs about the product. This taught us how a conversion rate lift, sustained through continuous improvement, can grow our leads.
Undoubtedly, the insights accumulated over many variations and tests eventually informed our website redesign.
Understanding Results
Let me be clear: NOT ALL TESTS WIN. Yes, it is painful, but true.
Some tests deliver results with flying colors. Others remain inconclusive no matter how many times you try. But if you plan a test with insight-driven segmentation, each run can yield a new hypothesis to test. Not every test helps you improve revenue.
Take an example to understand this. There are three campaigns with different conversion rates.
| | Conversion Rate |
|---|---|
| Campaign A | 8.2% |
| Campaign B | 19.1% |
| Campaign C | 5.2% |
Anyone would blindly say ‘Campaign B’ is the super performer. But let us dig a little deeper.
| | Visits | Transactions | Conversion Rate |
|---|---|---|---|
| Campaign A | 1820 | 150 | 8.2% |
| Campaign B | 20 | 4 | 19.1% |
| Campaign C | 780 | 41 | 5.2% |
Look closely: ‘Campaign B’ has far too small a sample to be statistically significant. By the same logic, a campaign with one transaction from a single visit would show a 100 percent conversion rate. Meanwhile, ‘Campaign A’ outperforms ‘Campaign C’. When drawing conclusions, there are several factors to look at, and they may differ every time; it is you who must weigh all the insights and decide the result.
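The same interval math makes Campaign B's problem concrete. Here is a sketch using a normal-approximation 95% interval and the visit and transaction counts from the table (note that 4 transactions over 20 visits is 20%, close to the 19.1% shown):

```python
import math

Z_95 = 1.96  # z critical value for a 95% confidence level

def conf_interval(transactions, visits):
    """Normal-approximation 95% CI for transactions / visits, clamped to [0, 1]."""
    p = transactions / visits
    margin = Z_95 * math.sqrt(p * (1 - p) / visits)
    return max(p - margin, 0.0), min(p + margin, 1.0)

# Campaign A: 150 transactions over 1820 visits
a_lo, a_hi = conf_interval(150, 1820)
# Campaign B: 4 transactions over 20 visits
b_lo, b_hi = conf_interval(4, 20)

print(f"Campaign A: {a_lo:.1%} to {a_hi:.1%}")
print(f"Campaign B: {b_lo:.1%} to {b_hi:.1%}")
```

Campaign A's interval comes out around 7.0%–9.5%, while Campaign B's spans roughly 2.5%–37.5%: with only 20 visits, almost any conversion rate is consistent with the data, which is why Campaign B's headline rate cannot be trusted.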
In this chapter, we learned how to measure results in conversion rate optimization.