“We have a strong digital team to continuously improve our digital presence in the market. Like our amazing products, we always want to provide a great customer experience across all of our digital presence. Working with EchoLogyx and getting their experienced Shopify engineer provided us the necessary resource and expertise to achieve our digital strategy and create a wonderful experience for our customers.

The team at EchoLogyx showed us how we can test different concepts before launching the changes to our website – allowing us to become a lot more data driven in order to make the right business decisions. They have been helping us to implement and develop new apps, change templates and test different concepts on our Shopify store.

I would definitely recommend EchoLogyx without any doubt – these guys go above and beyond the call of duty to help us out with our Shopify site.”
– Rachael W., Founder

About Chinti and Parker

Since 2009, Chinti & Parker, a London-based brand, has been dedicated to creating collections which aim to invigorate women’s wardrobes with knitwear that celebrates joyful colour, timeless shapes, and innovative texture.

Download the full case study

The Challenge

The founders of Chinti and Parker are not only visionaries with their stunning knitwear range; they also wanted to provide a great customer experience through their Shopify store with innovative ideas and design.

When they started implementing back-to-back changes, they noticed that the development work was extremely slow and expensive. As a result, even though they had a digital strategy in place, they were unable to deliver the changes as quickly as they were hoping.

How EchoLogyx helped Chinti and Parker’s Shopify Development

Chinti and Parker started working with EchoLogyx in early 2022. The first project was to develop a bespoke private app to integrate with their fulfilment house. The Shopify engineers from EchoLogyx went through the scoping, made sure that all the functionality was documented, and then started the development work. Within just over a month, they delivered a fully functioning private app that integrated seamlessly with the fulfilment company.

Following this, Chinti and Parker engaged a full-time Shopify engineer from EchoLogyx to help them with their digital strategy. Since then, Chinti and Parker have seen their development work move significantly faster than before. This has allowed them to make bigger and bolder changes to the store, creating an amazing customer experience.

On top of the Shopify development and QA work, Chinti and Parker started to A/B test their bigger and bolder concepts. This is helping them make better decisions based on data about what works best for their end users.

Download the full case study to find out more.

Case Study: EchoLogyx helping Chinti and Parker with their Shopify Store


About Wax London

Wax is a London-based family label established in 2015, inspired by the places, faces and stories that surround us. Keeping sustainability at their core, Wax’s clothing is made with carefully sourced materials, designed to be worn time and time again.

Download the full case study

The Challenge

After Wax moved their ecommerce platform to Shopify, they started to face challenges making the necessary changes on the new platform. Their dynamic ecommerce and digital marketing team was being blocked by the reduced amount of support received from their previous agency. As a result, even though they had new and innovative ideas to try and test to improve the customer experience, they had to wait months to get things implemented. They needed a solution that was efficient, cost-effective and of a high standard.

How EchoLogyx helped Wax to scale their Shopify Development

Wax started working with EchoLogyx in early 2022. With a full-time, experienced Shopify engineer from EchoLogyx directly supporting Wax’s digital team, Wax was able to move quickly with the necessary changes on their Shopify site. On top of that, all changes were tested across multiple devices and browsers by dedicated QA engineers, ensuring that everything was bug-free before going live.

Download the full case study to find out more.

“When we first started working with EchoLogyx, we realised that it is possible to move things faster without losing quality or spending a huge amount of money on development support. We have been using EchoLogyx and their Shopify developers for a while now. They are great to work with, understand our challenges and produce innovative solutions that are continuously helping us to fulfil our digital requirements. On top of that, their QA is extremely thorough, making sure that we are not making any mistakes when pushing changes live to our storefront. This is allowing us to push the boundaries of Shopify and enhance the customer experience of our store. Without any hesitation, I would highly recommend EchoLogyx for their Shopify development support.” – Nicolo T., Head of Digital and Ecommerce

Case Study: Wax London increases their velocity of Shopify Development with EchoLogyx


We have recently created a new Chrome Extension to check whether a particular goal is firing or not. Currently it supports 7 testing tools, and the plan is to gradually increase the number of supported A/B testing tools to help marketers, CRO consultants, QA engineers, and developers to QA their metrics.

But why is this important?

Well – first of all, if your metrics are not firing as expected, how are you going to analyze the results of your tests? In simple terms, there is no point in running an experiment if you can’t measure its performance – whether that is a button click, a pageview goal or a transactional metric.

The other reason for making sure up front that the metrics are working correctly is so that you don’t lose days of testing – only to find at a later stage that some of them were not working as expected.

We all know that, in simple terms, Conversion Rate Optimisation is making changes to improve specific KPIs. If you are not making sure that the metrics are being tracked properly – what’s the point!

Key takeaway – QA your metrics before you launch the tests!
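To make this concrete, here is a minimal sketch (not the extension itself) of the idea behind goal-firing QA: wrap your goal-tracking call so that every goal fired on the page is recorded, then assert that the expected goals have fired before you launch. The `trackGoal` and `goalsFired` names are hypothetical, not part of any particular testing tool.

```javascript
// Record every goal that fires so QA can inspect the list before launch.
var firedGoals = [];

function trackGoal(goalName, payload) {
  // In a real setup this would also forward to your testing tool's API;
  // here we only record what fired, which is the part QA cares about.
  firedGoals.push({ goal: goalName, payload: payload || {} });
}

// Returns true only if every expected goal has fired at least once.
function goalsFired(expected) {
  return expected.every(function (name) {
    return firedGoals.some(function (g) { return g.goal === name; });
  });
}
```

With a wrapper like this, the pre-launch check becomes a one-liner: interact with the page, then confirm `goalsFired(['cta_click', 'pageview'])` returns true before the test goes live.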

Metrics checking – why this is important?


Want to check the metrics, goals and events for your experiments? Not sure if they are firing properly? Tired of going through the network tab to find the right calls?
The EchoLogyx Test Metrics Debugger allows you to check whether your goals and metrics are firing at the right time, and with the right interaction. Simply add the extension, enable it, visit the page where the test is live and see whether the metrics are firing or not.
You can see detailed information about the metrics being fired and what information is being passed to the testing tool.
Currently, this extension works with:

  1. Convert
  2. VWO
  3. Optimizely
  4. Adobe Target (mbox V1 and mbox V2)
  5. Dynamic Yield
  6. AB Tasty
  7. Google Optimize / Analytics

We are working to cover more A/B testing tools to easily check metrics and goals.
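Under the hood, a metrics debugger works by recognising the network calls each testing tool makes. A minimal sketch of that classification step might look like this – the URL patterns below are illustrative placeholders, not a definitive list of each tool’s real endpoints:

```javascript
// Map each tool to a substring that (illustratively) identifies its
// tracking calls. A real debugger would maintain accurate patterns.
var TOOL_PATTERNS = {
  Convert: 'convertexperiments.com',
  VWO: 'visualwebsiteoptimizer.com',
  Optimizely: 'optimizely.com',
  'Adobe Target': 'tt.omtrdc.net',
  'Google Analytics': 'google-analytics.com'
};

// Given an outgoing request URL, return the tool it appears to belong
// to, or null if it is unrelated to any known testing tool.
function classifyRequest(url) {
  for (var tool in TOOL_PATTERNS) {
    if (url.indexOf(TOOL_PATTERNS[tool]) !== -1) return tool;
  }
  return null;
}
```

Once a request is attributed to a tool, the debugger can then decode its query string or body to show which goal fired and what data was passed.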

Get it from Chrome Web Store.

EchoLogyx All-in-one test metrics debugger


A simple (yet powerful) guide to AB Testing Development

Over the past decade, companies have had no choice but to take the online user experience more seriously than ever before in order to increase their online sales and customer loyalty and achieve their business goals.

As online businesses race to outdo their competitors, CRO and A/B testing have started to play a significant role in improving digital channel performance. Companies of all sizes – from Google, Facebook, Amazon, Netflix and Microsoft down to start-ups – have been experimenting with different concepts to gain additional customers or users for their websites.

In this article, we provide a simple guide to developing A/B tests. Before doing that, however, it is important to understand the basics, which will provide a solid foundation for a successful CRO programme.

What is CRO (Conversion Rate Optimization)?

There are many definitions of Conversion Rate Optimization out there. For example, Econsultancy defines CRO as a “process of optimizing site to increase the likelihood that visitors will complete that specific action”[1]. HubSpot provides an action-focused definition of CRO, emphasising enabling people to take action when they visit a website: “By designing and modifying certain elements of a webpage, a business can increase the chances that site visitors will “convert” into a lead or customer before they leave”[2].

In simple terms, the definition of CRO is

“Making changes to improve metrics.”

The changes can vary from small – like a simple headline change – to large, where you could be adding a new feature to your product. The changes can be to your website, emails, mobile apps, search keywords, banners, or even physical entities. The key point is that by making these changes, you are improving your business metrics, performance indicators or KPIs. This process of continuously improving KPIs by making changes is CRO.

[1] What is conversion rate optimisation (CRO) and why do you need it? https://econsultancy.com/what-is-conversion-rate-optimisation-cro-and-why-do-you-need-it/

[2] The Beginner’s Guide to Conversion Rate Optimization (CRO) https://blog.hubspot.com/marketing/conversion-rate-optimization-guide


What is A/B Testing?

As part of a CRO program, you come up with ideas for changes. These change ideas are commonly known as hypotheses.

As with any research, a hypothesis is always validated by different forms of testing. The most common form of testing a hypothesis within the CRO program is A/B Testing (or ABn Testing) where version A signifies the control or original – what is currently live, and version B is the new variation created based on the hypothesis.

A/B Testing simply helps you to check if the changes you are making are truly improving the target KPIs. It takes the metric performance of the current version and compares it with the new version where the changes have been applied.

For example, Neil – an optimisation consultant – has completed some research on the e-commerce site he is involved with. Based on the analytics data, he identified that on the product details pages, the price is presented just below the product title, where some users were ignoring it. His idea is to move the price closer to the main call-to-action (CTA) button so that users see the price before adding the item to the basket, and more people complete their purchase through the website.

This is a simple example of an AB test to find out if Neil’s hypothesis is true – by moving the price closer to the CTA, more people will complete their purchase.

Figure 1: Example of a simple A/B test. In Version A (Control), the price is just below the product title; in Version B, the price is just above the Buy Now CTA button.

A hypothesis can produce more than one variation. In such cases, your A/B test becomes an ABn test, where A still signifies the control and, after that, you have variation B, variation C and so forth to signify the other variations.
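To illustrate how visitors end up in a given variation, here is a minimal sketch of deterministic bucketing, where the same visitor always lands in the same variation. The hash function is a simple illustrative one; real testing tools use their own assignment logic.

```javascript
// Simple unsigned 32-bit rolling hash of a string (illustrative only).
function hashString(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Evenly split visitors across variations, e.g. ['A', 'B', 'C'] for an
// ABn test. Hashing the visitor id keeps the assignment stable across
// page loads, so a visitor never flips between control and variation.
function assignVariation(visitorId, variations) {
  return variations[hashString(visitorId) % variations.length];
}
```

For example, `assignVariation('visitor-123', ['A', 'B', 'C'])` always returns the same letter for that visitor id.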

If you are interested in changing multiple elements and testing a combination of different elements, you can set up a Multivariate Test (MVT).

What is a Multivariate Test or MVT?

A Multivariate Test or MVT is a form of test where you want to find the impact of combinations of more than one change across multiple places within a page or site section. Let us break this definition down with an example.

Let’s go back to Neil’s research. This time, he noticed that when visitors landed on the homepage of the BestRetailer website, they completely ignored the Sale link and also did not notice the key USP (see below).

Figure 2: Example of a Multi-variate test where two different elements are going to be tested with multiple variations

He wants to test out two different versions of the Sale message against the control version:

  • Control Sale message: Up to 50% off! Shop Sale
  • Variation 1 Sale message: Sale now on! Get up to 50% off
  • Variation 2 Sale message: Up to 50% off! Hurry while stock lasts!

At the same time, he wants to test two versions of the main heading – promoting free delivery and returns messaging – against the control:

  • Control Heading: Welcome to the BestRetailer site – number 1 in sustainable products
  • Variation 1 Heading: Number 1 Sustainable Shopping site with FREE Delivery and Returns
  • Variation 2 Heading: Free Delivery and Returns! BestRetailer for Sustainable products

Neil is interested in finding out not only which individual sale message or heading works best to increase browsing and purchases, but also which combination of these messages works best. This is why he sets this up as a multivariate test, or MVT, to test all of the combinations against the control.

All MVTs are A/B tests – with a lot of variations. For example, the total number of combinations in this example is 9 (including the control): three sale messages multiplied by three headings. Each of these combinations can be considered a variation of an A/B test.

Testing tools that provide the option of setting up an MVT usually give the user some additional options – for example, excluding specific combinations, or reporting on the performance of individual elements or variates.
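The combination count above can be checked with a few lines of code – a sketch that enumerates every sale-message/heading pair (the variation names are placeholders):

```javascript
// Build the full cross product of the two element groups in Neil's MVT.
function mvtCombinations(saleMessages, headings) {
  var combos = [];
  saleMessages.forEach(function (sale) {
    headings.forEach(function (heading) {
      combos.push({ sale: sale, heading: heading });
    });
  });
  return combos;
}

var combos = mvtCombinations(
  ['Control sale message', 'Sale variation 1', 'Sale variation 2'],
  ['Control heading', 'Heading variation 1', 'Heading variation 2']
);
// combos.length is 9 - the control pair plus 8 other combinations
```

This is exactly why MVTs need more traffic than simple A/B tests: each added element multiplies the number of combinations the traffic must be split across.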

What is a Redirect Test or URL Redirect Test?

This is another form of A/B test where you simply redirect the variation traffic to your newly developed page(s). One important thing to consider when setting up a redirect A/B test is to use a rel=canonical tag. This tells Google and other search engines that the content is similar to the original URL, so your SEO ranking is not penalised when running the test.
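As a sketch of the canonical-tag advice, here is a small helper that builds the tag pointing back to the original URL (the function name is our own; on a live variation page you would insert the resulting tag into the page head):

```javascript
// Build a rel=canonical tag for a redirect-test variation page so that
// search engines attribute its content to the original URL.
function canonicalTag(originalUrl) {
  return '<link rel="canonical" href="' + originalUrl + '">';
}

// In the browser, the equivalent DOM insertion would be:
//   var link = document.createElement('link');
//   link.rel = 'canonical';
//   link.href = 'https://example.com/original-page'; // hypothetical URL
//   document.head.appendChild(link);
```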

What is Server-Side testing?

The most commonly used A/B testing methods are client-side – i.e. the changes are applied in the browser using JavaScript. This does not require any code push from the server, and marketers and A/B testing developers can rely on the testing tool to set up the A/B test.

Client-side testing allows users to test almost anything that is visible in the browser. However, a hypothesis might involve features – such as the pricing structure or a new site search algorithm – that require backend changes to the site, and these cannot be tested client-side. This is where server-side testing comes in handy. The backend developers create the variation and, with the help of the testing tool, launch the test by splitting the traffic.

For example, let’s say Neil identified that the site search on the BestRetailer website is not returning the right set of results. The developers built a new search algorithm, but Neil wants to make sure that the new algorithm actually performs better than the existing search.

In this case, he asks the A/B testing development team to create a server-side test. Using the server-side testing tool, the development team can launch the new version of the search alongside the existing version by splitting the traffic.
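A minimal sketch of how such a server-side split might work is below. All names (`searchV1`, `searchV2`, `recordExposure`) are hypothetical, and the hash-based 50/50 split is illustrative only – real server-side tools handle assignment and exposure tracking for you.

```javascript
// Stand-ins for the existing and new search implementations.
function searchV1(query) { return ['existing results for ' + query]; }
function searchV2(query) { return ['new algorithm results for ' + query]; }

// Route each search request to one of the two algorithms based on the
// visitor's bucket, and record which version the visitor saw so the
// testing tool can attribute conversions to the right variant.
function runSearchExperiment(visitorId, query, recordExposure) {
  var h = 0;
  for (var i = 0; i < visitorId.length; i++) {
    h = (h * 31 + visitorId.charCodeAt(i)) >>> 0; // stable per-visitor hash
  }
  var variant = h % 2 === 0 ? 'control' : 'new-search';
  recordExposure(visitorId, variant);
  return variant === 'control' ? searchV1(query) : searchV2(query);
}
```

Because the split happens per visitor rather than per request, a visitor keeps seeing the same search algorithm for the duration of the test.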

What are the key components of an A/B test?

There are 5 core components of an A/B test:

  1. Test Plan or Test Brief
  2. Design of the A/B test variations
  3. Development of the variations
  4. Tool Setup
  5. QA

1. Test Plan or Test Brief

Also sometimes known as a ‘blueprint’, a test plan or test brief should provide all the necessary information about the A/B test. It should include, but not be limited to, the hypothesis, audiences, targeting conditions, the URL where the test will be launched, variation information, the metrics that need to be tracked, and QA scenarios – i.e. anything that defines any aspect of the A/B test needs to be documented in the test plan. This should be treated as the reference point for everyone involved in the CRO programme to learn more about an individual test.

You can download a test plan template from here[1].

2. Design of the A/B test variations

Depending on the changes being made in the test, the design of the variations needs to be specified. Ideally, the A/B testing developers should get access to the raw design files (e.g. Photoshop, Figma, Zeplin, etc.). That way, when the developers are building the variations, they can rely on the exact specifications from the raw file (e.g. colour codes, pixel dimensions, etc.).

If the A/B test needs to be developed across all devices (e.g. desktop, tablet, mobile), then the design should specify how the variation will be displayed on each of those devices. Additionally, if there are specific scenarios that affect how the variation is displayed (e.g. clicking on an accordion or tab), the design should also cover states such as the active and highlighted states.

3. Development of the A/B test variations

Once the test plan and design variations are ready, A/B test developers or solutions engineers start developing the variations. Depending on the changes being made, the test developers write the code using JavaScript, CSS and HTML. They can use the browser console to run the code and check whether the desired changes are happening. Once they are happy, they can move on to the tool setup.
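As an example, the variation change from Neil’s hypothesis could be sketched like this in plain JavaScript. The selectors are hypothetical – a real variation would depend on the store’s actual markup – and the function fails safely if the expected elements are missing.

```javascript
// Variation sketch for Neil's hypothesis: move the price element so it
// sits just above the add-to-basket CTA. Selectors are hypothetical.
function movePriceNearCta() {
  var price = document.querySelector('.product-price');
  var cta = document.querySelector('.add-to-basket');
  if (!price || !cta) return false; // fail safely if the page structure changed
  cta.insertAdjacentElement('beforebegin', price); // place price just above the CTA
  return true;
}
```

A guard like the `if (!price || !cta)` check matters in test development: if the underlying page changes mid-test, the variation should degrade gracefully rather than throw errors for live visitors.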

4. Tool Setup

Depending on the testing tool, the setup of an A/B test may vary. The type of test can also dictate the setup. For example, if it is an MVT, the developer needs to define the areas where the changes are being made; if it is a redirect test, then the right parameters need to be used.

The A/B testing developer also needs to think about the targeting conditions based on the test plan: who should be targeted, where the test is going to be live, and, if the site is built as a single-page application (SPA), what needs to be done to ensure that the experiment runs at the right location.

Metrics setup is another important part of this stage of A/B testing development. The developers need to go through the list of metrics and implement them. Note that some testing tools let you save metrics and re-use them, whereas in other tools you need to create the metrics again every time you create a new test.

When implementing the metrics, A/B testing developers need to make sure that they are tracked in the variations as well as in the control. Once this is done, the test developer prepares the test for QA of the variations.

5. A/B Test QA

Once the development is complete, the test must go through QA. At this stage, QA engineers pick up the test plan and check the variations against the design and the test plan to make sure that each variation renders as it should. They report any bugs, which then go back to the developer to fix. Once the developer fixes the bugs, the test goes through a re-QA process.

This is a significant part of any A/B test. Without QA, a test should never go live to end users – you can read more about why A/B testing QA is really important. The last thing you want is for your test to be invalidated by a bug in the variation.

QA engineers also need to make sure that the metrics are receiving the right data in the control as well as in the variations. Depending on the testing tool, the way of checking metrics can vary; however, in most cases, the metrics being passed to the testing tool’s server can be found in the network tab of the browser.

Conclusion

A/B testing comes in different forms. However, all tests have some fundamental components – and they will define the quality of the tests that you are building for your CRO / Experimentation program.

A simple guide to AB testing Development

No comments yet

A simple (yet powerful) guide to AB Testing Development

Over the past decade, companies had no choice but to take the online user experience more seriously than ever before, to increase their online sales, customer loyalty and achieve their business goals.

As the online businesses race towards becoming the best amongst their competitors, CRO and AB Testing have started to play a significant role to improve digital channel performances. From large companies such as Google, Facebook, Amazon, Netflix, Microsoft to Start-ups, started experimenting with different concepts to gain additional customers or users to their website.

In this article, we are providing a simple guide to develop A/B tests. However, before doing that, it is important to understand the basics, which will help to provide a solid foundation to understand the success of CRO.

What is CRO (Conversion Rate Optimization)?

There are many definitions out there of Conversion Rate Optimization. For example, Econsultancy defines CRO as a “process of optimizing site to increase the likelihood that visitors will complete that specific action” [1] . HubSpot provides an action-focused definition of CRO by adding enabling people to take action when they visit a website. “By designing and modifying certain elements of a webpage, a business can increase the chances that site visitors will “convert” into a lead or customer before they leave”[2].

In simple terms, the definition of CRO is

“Making changes to improve metrics.”

The changes can vary from small – like a simple headline change, to large – where you could be adding a new feature to your product. The changes can be on your website, emails, mobile Apps, Search keywords, banners, or even physical entities. The ultimate thing is by making these changes, you are improving your business metrics, performance indicators or KPIs. Moreover, this process of continuous improvement of KPIs by making changes is CRO.

[1] What is conversion rate optimisation (CRO) and why do you need it? https://econsultancy.com/what-is-conversion-rate-optimisation-cro-and-why-do-you-need-it/

[2] The Beginner’s Guide to Conversion Rate Optimization (CRO) https://blog.hubspot.com/marketing/conversion-rate-optimization-guide

 

What is A/B Testing?

As part of the CRO program, you are coming up with ideas to make changes. These change ideas are commonly known as Hypothesis.

As with any research, a hypothesis is always validated by different forms of testing. The most common form of testing a hypothesis within the CRO program is A/B Testing (or ABn Testing) where version A signifies the control or original – what is currently live, and version B is the new variation created based on the hypothesis.

[sc_fs_faq html=”false” headline=”h2″ img=”” question=”What is A/B Testing?” img_alt=”” css_class=””] A/B Testing simply helps you to check if the changes you are making are truly improving the target KPIs. It takes the metric performance of the current version and compares it with the new version where the changes have been applied. [/sc_fs_faq]

For example, Neil – an Optimisation Consultant has completed some research on the e-commerce site he is involved with. Based on the analytics data, he identified that on the product details pages, the price of the product is presented just below the title of the product, which some users were ignoring. His idea is to change the position of this price and make it closer to the main Call to action (CTA) button so that users can see the price before adding the item to the basket and more people will complete their purchase through this website.

This is a simple example of an AB test to find out if Neil’s hypothesis is true – by moving the price closer to the CTA, more people will complete their purchase.

Figure 1: Example of a simple AB Test where Version A or Control where the price is just below the title of the product. Version B, where the price is just above the Buy now CTA button.

A hypothesis can create more than one variation. In such cases, your AB test becomes ABn test where A still signifies the control and after that, you have variation B, variation C and so forth to signify the other variation.

In case if you are interested in changing multiple elements and test a combination of different elements, you can setup a Multivariate Test (MVT).

[sc_fs_faq html=”false” headline=”h2″ img=”” question=”What is Multivariate Test or MVT?” img_alt=”” css_class=””] Multivariate Test or MVT is a form of a test where you want to find the impact of the combinations of more than one change across multiple places within a page or site section. Let us break this definition down with an example. [/sc_fs_faq]

Let’s go back to Neil’s research. This time, he noticed that when visitors landed on the homepage of the BestRetailer website, they completely ignored the Sale link and also did not notice the key USP (see below).

Figure 2: Example of a multivariate test where two different elements are tested with multiple variations.

He wants to test out two different versions of the Sale message against the control version:

  • Control Sale message: Up to 50% off! Shop Sale
  • Variation 1 Sale message: Sale now on! Get up to 50% off
  • Variation 2 Sale message: Up to 50% off! Hurry while stock lasts!

At the same time, he wants to test two versions of the main heading, which promotes free delivery and returns, against the control:

  • Control Heading: Welcome to the BestRetailer site – number 1 in sustainable products
  • Variation 1 Heading: Number 1 Sustainable Shopping site with FREE Delivery and Returns
  • Variation 2 Heading: Free Delivery and Returns! BestRetailer for Sustainable products

Neil is interested in finding out not only which individual sale message or heading works best to increase browsing and purchases, but also which combination of these messages works best. This is where he sets the test up as a multivariate test (MVT) to test all of the combinations against the control.

All MVTs are, in effect, A/B tests with a lot of variations. In this example, the total number of combinations is 9 (3 sale messages × 3 headings, including the control). Each of these combinations can be considered a variation of an A/B test.
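Neil’s MVT can be sketched as a Cartesian product: every sale message paired with every heading yields the 9 combinations. This is a minimal illustration – the `mvtCombinations` helper and the combination IDs are our own, not any testing tool’s API:

```javascript
// Sketch: enumerating MVT combinations (copy taken from the example above).
const saleMessages = [
  "Up to 50% off! Shop Sale",                 // control
  "Sale now on! Get up to 50% off",           // variation 1
  "Up to 50% off! Hurry while stock lasts!",  // variation 2
];
const headings = [
  "Welcome to the BestRetailer site - number 1 in sustainable products",
  "Number 1 Sustainable Shopping site with FREE Delivery and Returns",
  "Free Delivery and Returns! BestRetailer for Sustainable products",
];

// Cartesian product: every sale message paired with every heading.
function mvtCombinations(messages, heads) {
  const combos = [];
  messages.forEach((message, i) => {
    heads.forEach((heading, j) => {
      combos.push({ id: `M${i}-H${j}`, message, heading });
    });
  });
  return combos;
}

const combos = mvtCombinations(saleMessages, headings);
console.log(combos.length); // 9 combinations, including the control (M0-H0)
```

Tools that support excluding specific combinations would simply drop entries from this list before allocating traffic.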

Testing tools that offer MVT setup usually give the user some additional options – for example, excluding specific combinations and reporting on the performance of individual elements or variates.

What is a Redirect Test or URL Redirect Test?

This is another form of A/B test where you simply redirect the variation traffic to your newly developed page(s). One important thing to consider when setting up a redirect A/B test is to use a rel=canonical tag on the variation page. This tells Google and other search engines that the content is similar to the original URL, so your SEO ranking is not penalised while the test is running.
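When building the variation page for a redirect test, the canonical tag described above might be generated like this. A minimal sketch – the `canonicalTag` helper and the example URL are illustrative:

```javascript
// Sketch: build a rel=canonical tag for a redirect-test variation page,
// pointing back at the original (control) URL. Helper name is illustrative.
function canonicalTag(originalUrl) {
  return `<link rel="canonical" href="${originalUrl}">`;
}

// On the variation page you might inject it into <head>, e.g.:
//   document.head.insertAdjacentHTML("beforeend", canonicalTag("https://example.com/landing"));
console.log(canonicalTag("https://example.com/landing"));
```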

What is Server-Side testing?

The most commonly used A/B testing methods are client-side, i.e. the changes are applied in the browser using JavaScript. This does not require any code push from the server, so marketers and A/B testing developers can rely on the testing tool alone to set up the test.

Client-side testing lets you test almost anything on the front end. However, some hypotheses involve features such as a new pricing structure or a new site-search algorithm – anything that requires backend changes to the site – and these cannot be tested client-side. This is where server-side testing comes in handy: the backend developers create the variation, and with the help of the testing tool, they launch the test by splitting the traffic.

For example, let’s say Neil identified that the site search on the BestRetailer website is not returning the right set of results. The developers built a new search algorithm, but Neil wants to make sure that the new algorithm actually performs better than the existing search.

In this case, he asks the A/B testing development team to create a server-side test. Using the server-side testing tool, the development team can launch the new version of the search alongside the existing version by splitting the traffic.

What are the key components of an A/B test?

There are 5 core components of an A/B test:

  1. Test Plan or Test Brief
  2. Design of the A/B test variations
  3. Development of the variations
  4. Tool Setup
  5. QA

1. Test Plan or Test Brief

Also sometimes known as a ‘Blueprint’, a test plan or test brief should provide all the necessary information about the A/B test. It should include, but is not limited to, the hypothesis, audiences, targeting conditions, the URL where the test will be launched, variation information, metrics that need to be tracked, and QA scenarios – anything that defines any aspect of the A/B test should be documented in the test plan. It should be treated as the reference point for everyone involved in the CRO programme who wants to learn more about an individual test.

You can download a test plan template from here.

2. Design of the A/B test variations

Depending on the changes being made in the test, the design of the variations needs to be specified. Ideally, the A/B testing developers should get access to the raw design files (e.g. Photoshop, Figma, Zeplin). That way, when they build the variations, they can rely on the exact specifications from the raw file (e.g. colour codes, pixel dimensions).

If the A/B test needs to work across all devices (desktop, tablet, mobile), the design should specify how the variation will be displayed on each of them. Additionally, if there are specific interactions that affect how the variation is displayed (e.g. clicking on an accordion or tab), the design should also cover states such as active and highlighted.

3. Development of the A/B test variations

Once the test plan and variation designs are ready, A/B test developers or Solutions Engineers start developing the variations. Depending on the changes being made, they write the code using JavaScript, CSS and HTML. They can use the browser console to run the code and check whether the desired changes take effect. Once they are happy, they move on to the tool setup.

4. Tool Setup

The setup of an A/B test varies by testing tool. The type of test can also dictate the setup: for an MVT, the developer needs to define the areas where the changes are being made; for a redirect test, the right parameters need to be used.

The A/B testing developer also needs to set the targeting conditions based on the test plan: who should be targeted, where the test is going to be live, and, if the site is built as a single-page application (SPA), what needs to be done to ensure that the experiment runs at the right location.

Metrics setup is another important part of this stage of A/B testing development. The developers need to go through the list of metrics and implement them. Note that some testing tools let you save metrics and re-use them, whereas in others you need to recreate the metrics every time you set up a new test.

When implementing the metrics, A/B testing developers need to make sure that they are tracked in the variations as well as in the control. Once this is done, the test developer prepares the test for QA.
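The requirement that metrics fire in both the control and the variations can be illustrated with a tiny tracker sketch. The `makeTracker` helper and the event names are hypothetical – real tools expose their own tracking APIs:

```javascript
// Sketch: a minimal metric tracker (illustrative - real tools have their own APIs).
// The key point from the text: the same metric must fire in Control and in Variation.
function makeTracker(arm) {
  const events = [];
  return {
    track(metric) { events.push({ arm, metric, at: Date.now() }); },
    events,
  };
}

const control = makeTracker("control");
const variation = makeTracker("variation-1");

// Both arms bind the identical metric names to the identical user actions:
["add-to-basket", "purchase"].forEach((metric) => {
  control.track(metric);   // in practice, wired to the same click/submit handlers
  variation.track(metric);
});
```

If a metric only exists in one arm, the comparison between arms is meaningless – which is exactly what the QA stage below checks for.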

5. A/B Test QA

Once development is complete, the test must go through QA. At this stage, Test QA Engineers pick up the test plan and check the variations against the design and the test plan to make sure each variation renders as it should. They report all bugs, which go back to the developer to fix; once the bugs are fixed, the test goes through a re-QA process.

This is a significant part of any A/B test. A test should never go live to end users without QA – and this is why A/B testing QA is so important. The last thing you want is for your test to be invalidated by a bug in a variation.

QA Engineers also need to make sure that the metrics receive the right data in the control as well as in the variations. How you check metrics varies by testing tool, but in most cases the metric calls sent to the testing tool’s server can be found in the Network tab of the browser’s developer tools.

Conclusion

A/B testing comes in different forms. However, all tests share some fundamental components – and these define the quality of the tests you build for your CRO / experimentation program.

Goals Troubleshooting / QA in VWO

A/B testing is the practice of showing two variants of the same web page to different segments of visitors at the same time and comparing which variant drives more conversions. In an A/B test, the goals are what decide the winning variation, so properly QAing and troubleshooting each goal to check that it is working is what makes the test results trustworthy.

We work hard to make A/B tests work properly, but sometimes technology doesn’t behave the way you expect it to. For those less-happy moments, VWO provides several ways to troubleshoot your experiment or campaign.

Tools for QA:

  • Result page: lets you view the results for each goal and, good news, it updates immediately.
  • Network console: helps you verify whether events in a live experiment are firing correctly.
  • Browser cookie: stores information about all types of goals, so you can verify which events have fired.

Among all of them, I would say the browser cookie is your best friend. It contains all the information developers need for troubleshooting experiments, audiences and goal QA.

Browser cookie:

VWO logs the events that occur as you interact with a page in your browser’s cookies. When you trigger an event in VWO, it fires a tracking call and stores that information in the browser’s cookies.

To access the cookies via the browser’s developer tools:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Application/Storage tab.
  3. Select the Cookies tab.
  4. Select the domain name of your site.
  5. Filter with “_vis_opt_exp_”.
  6. To narrow down to a specific campaign, filter with “_vis_opt_exp_{CAMPAIGNID}_goal_”.

You can see the list of all events that fired (all types of goals: click, custom, transaction etc.). VWO assigns a specific number to each goal. I have highlighted the events for a few goals in the screenshot below.

VWO stores almost all the information a developer needs for troubleshooting in the browser cookies: experiments, audiences/segments, goals, users, referrers, session etc. You can find the details about VWO cookies from here.
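The cookie filtering in the steps above can also be done programmatically. The sketch below filters a raw cookie string for one campaign’s goal cookies; the `_vis_opt_exp_{CAMPAIGNID}_goal_` name pattern comes from the steps above, while the parsing helper itself is illustrative and not part of VWO’s API:

```javascript
// Sketch: filter a raw cookie string for a campaign's VWO goal cookies.
// The name pattern follows the filtering steps above; the helper is illustrative.
function vwoGoalCookies(cookieString, campaignId) {
  const prefix = `_vis_opt_exp_${campaignId}_goal_`;
  return cookieString
    .split(";")
    .map((pair) => pair.trim().split("="))
    .filter(([name]) => name.startsWith(prefix))
    .map(([name, value]) => ({ name, value }));
}

// In the DevTools console you could run: vwoGoalCookies(document.cookie, 12);
console.log(vwoGoalCookies("_vis_opt_exp_12_goal_1=1; other=x; _vis_opt_exp_12_goal_3=1", 12));
```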

 

Network console:

The network panel is a log in your browser that records events as you interact with a page. When you trigger an event in VWO, it fires a tracking call, which shows up in the network traffic.

To access the Network tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Network tab.
  3. Filter with “ping_tpc”.
  4. Trigger the event you’d like to inspect and click the call to see its details.

You can see the list of all events that fired. I have highlighted the event with a specific experiment and goal ID in the screenshot below.
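The “ping_tpc” filter from step 3 can be expressed as a simple helper over a list of captured request URLs. Both the function and the sample URL are illustrative:

```javascript
// Sketch: the DevTools network filter from the steps above, as a helper.
// "ping_tpc" is the substring used to spot VWO tracking calls; the sample URL is made up.
function vwoTrackingCalls(requestUrls) {
  return requestUrls.filter((url) => url.includes("ping_tpc"));
}

const urls = [
  "https://tracking.example.com/ping_tpc?experiment=12&goal=3", // illustrative tracking call
  "https://example.com/styles.css",
];
console.log(vwoTrackingCalls(urls).length); // 1 matching tracking call
```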

Note: If you have already been bucketed into an experiment and fired a few goals, you might not see any network calls. So always use a fresh incognito window to troubleshoot goals and experiments.

As VWO updates campaign results immediately, checking the result page is always another good option. But make sure you are the only visitor seeing the experiment at that time.


Goals Troubleshooting / QA in AB Tasty

A/B testing is a marketing technique that involves comparing two versions of a web page or application to see which performs better. A/B test development within AB Tasty has a few parallels with conventional front-end development, and here too the goals are what decide the winning variation, so properly QAing and troubleshooting each goal to check that it is working is what makes the test results trustworthy.

We work hard to make A/B tests work properly, but sometimes technology doesn’t behave the way you expect it to. For those less-happy moments, AB Tasty provides several ways to troubleshoot your experiment or campaign.

Tools for QA:

  • Preview link: lets you view a variation and move from one variation to another; you can also track click goals by enabling “Display Click tracking info”.
  • Network console: helps you verify whether events in a live experiment are firing correctly.
  • Local storage: stores information about all click and custom goals, so you can verify which events have fired.

Among all of them, I would say the network tab is your best friend. It contains all the information developers need for troubleshooting experiments, audiences, goal QA and code execution on page load.

Network console:

The network panel is a log in your browser that records events as you interact with a page. When you trigger an event in AB Tasty, it fires a tracking call, which shows up in the network traffic.

To access the Network tab:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Network tab.
  3. Filter with “datacollectAT” or “ariane.abtasty”.
  4. Trigger the event you’d like to inspect and click the call to see its details.

You can see the list of all events (click/custom/transaction) that fired. I have highlighted the event names for click/custom goals in the screenshot below.

Custom goals work with the same API call as click goals (so they are also tracked as events). That’s why we add the text ‘Custom’ before all custom goal names, to differentiate between click and custom goals.

You can see the list of custom events that fired in the screenshot below.

Local storage:

AB Tasty logs the events that occur as you interact with a page in your browser’s local storage. When you trigger an event in AB Tasty, it fires a tracking call and stores that information in local storage.

To access local storage:

  1. Right-click on the page. From the dropdown menu, select Inspect in Chrome or Inspect Element in Firefox.
  2. Select the Application/Storage tab.
  3. Select the Local storage tab.
  4. Select the domain name of your site.
  5. Filter with “ABTastyData”.
  6. Click the ABTastyData entry you’d like to see the details of.

You can see the list of all events (click/custom/transaction) that fired. I have highlighted the event names for click/custom goals in the screenshot below.

Note: For pageview goals, we have to rely on the AB Tasty campaign result page; the bad news is that it does not update immediately – you need to wait 3–4 hours to see the results reflected.

We cannot check pageview goals in AB Tasty via the network console or local storage, as they work differently: AB Tasty tracks the page URL for each page and records it under each campaign. (This has other benefits – for example, we can filter the results by any URL without adding it as a pageview goal.) AB Tasty processes all the goals, along with the pageview goals, at set intervals and then updates that campaign’s results.


Simultaneous Divisional Optimisation to support a large volume of testing

Over the years I have kept preaching about making sure that your traffic is exposed to only one test at any given time. This is to ensure the quality of the test result data, making sure that the winners you are getting from each test are true winners, without diluting the data.

This is great when you are gradually getting yourself into the habit of testing. However, when we are talking about implementing a culture of experimentation, constantly running tests to improve customer experience and conversion, this method of testing, unfortunately, fails to achieve that.

As a result, no matter how much you would like to implement a culture of experimentation, you are bound to frustrate your team due to the lack of velocity of running tests. To overcome this, we are introducing a method called Simultaneous Divisional Optimisation (SDO).

What is Simultaneous Divisional Optimisation?

An experimentation programme that allows you to run multiple tests on a website by breaking the site down into smaller areas and treating each area as a separate, individual website.

Each area or site section will have its own primary metric to improve via the CRO program. Metrics such as sales or revenue will be tracked to ensure that these don’t have a detrimental impact on the tests running on individual site areas. The winner will be declared based on the performance of the selected primary metric.

How does Simultaneous Divisional Optimisation work?

The best way to understand how SDO works is through an example – let’s take a fashion retail business.

If we consider the user journey on any fashion retail site, it typically falls into three core categories:

  1. Browse and find
  2. Research and decide
  3. Transact / Complete purchase

Browse and find

Users who are in this state of the journey are typically looking for the product that they need/want. Pages such as Homepage, Category Landing pages, and Category Listing Pages (or product listing pages) usually support this journey for the users.

Your aim here is to optimise the customer experience by making it easy for users to find the right product, whether by highlighting relevant products on the homepage, enticing them with the latest collections or giving them an easy way to find the category of products they are looking for.

The primary (success) metric at this stage of the user journey is taking the user to one or more product details pages.

Research and decide

Once the user finds a product that they are looking for, they want to find out more about the product. This includes materials, fit, reviews and pricing. The ultimate goal at this stage of the user journey is to add the product to the shopping basket, with a secondary metric of adding it to the wish list.

Optimisation ideas here are potentially to provide the right information to the user in the right manner. Additionally, for the fashion retail website, you might want to help the user by giving them alternatives or “wear it with” to potentially “shop the look”. You can also consider having social messaging such as how many visitors purchased this product, how many visitors have added this product to the basket, and how many visitors are looking at the same product at the same time to show the popularity of the product.

The primary metric here is for users to find all the necessary information they need and add the product to their basket.

Transact / Complete purchase

The final step of the user journey is to complete their purchase. They have done their research, they have added the products they like into their basket, and now this is where they need to go through the process of the transaction. Yes – here the primary metric is indeed the sales. Your optimisation ideas on the basket and checkout funnel are to focus on getting the user to the ‘Thank you’ page. That is the main aim and you need to continuously improve the experience to move users from the basket page to complete the transaction.

Multiple tests in different sections

Now let’s bring all three sections together – in essence, you now have three site sections that you can optimise independently, as the aim of optimising each section is different from the others. When you are running a test on the category or product listing pages, your aim is to get the user to one or more product details pages. Similarly, when running a test on the product details page, your aim is to get the user to add the product to their basket. And finally, when you are improving the checkout funnel, your main aim is for visitors to complete the purchase.

What about other types of websites?

You can use the same theory on pretty much any transactional site. The sections will, of course, be different – for example, if you are working with a travel site, your ‘Browse and find’ section is potentially the search results page of the holiday/flight. The primary metric for optimisation here would be to send the users to a holiday page. Once the users are on an individual holiday page, then the primary aim is to start the booking process.

In a similar manner, you can divide one website into multiple sub-sites with a clear primary objective to be achieved from those sub-sites. This way, you can simultaneously optimise the customer experience for each of the subsections in parallel.

How do you analyse the test results of SDO?

You need to make sure that your primary metric for each site section has been accurately defined. If you find a winner (or loser) based on the primary metric without negatively impacting the final business goals, you can make the decision based on the result.

For any organisation, no matter what the final business goal is, you can always break things down into smaller goals. The SDO simply provides a way to optimise the smaller goals with the aim to ultimately optimise the final business goal.

Taking things further with SDO

What if you would like to launch multiple tests at the same time within the same section? You can still do that with the following two options:

  1. Mutually exclude traffic within that site section
  2. Create variations – one for each hypothesis and a final one with the combination of both hypotheses together

This way, you will be able to make sure that you are getting clear results for your tests running within the same section.

In summary:

SDO provides a way to run multiple experiments at the same time, with the aim of continuously improving the customer experience of individual site sections. We have applied this method at a couple of businesses with over a million visitors per month. It increased test velocity tenfold, with a significant revenue impact on the overall business compared to running just one test at a time.

SDO allows the organisation to implement an experimentation culture by dividing one website into multiple sites. It engages product owners from different sections to be involved and independently improve their site areas. This method of experimentation utilises the resource more efficiently to get the best out of the CRO program.

Simultaneous Divisional Optimization to support large volume of testing

No comments yet

Over the years I kept on preaching about making sure that your traffic is exposed to only one test at any given time. This is to ensure the quality of the test result data, making sure that the winners you are getting from each test are true winners without diluting the data.

This is great when you are gradually getting yourself into the habit of testing. However, when we are talking about implementing a culture of experimentation, constantly running tests to improve customer experience and conversion, this method of testing, unfortunately, fails to achieve that.

As a result, no matter how much you would like to implement a culture of experimentation, you are bound to frustrate your team due to the lack of velocity of running tests. To overcome this, we are introducing a method called Simultaneous Divisional Optimisation (SDO).

What is Simultaneous Divisional Optimisation:

An Experimentation programme that allows you to run multiple tests on any given website by breaking the site down into smaller areas and treating each area as the separate, individual website.

Each area or site section will have its own primary metric to improve via the CRO program. Metrics such as sales or revenue will be tracked to ensure that these don’t have a detrimental impact on the tests running on individual site areas. The winner will be declared based on the performance of the selected primary metric.

How does Simultaneous Divisional Optimisation work:

The best way to understand how SDO works, let’s take a fashion retail business as an example.

If we consider the user journey on any fashion retail site, it typically falls into three core categories:

  1. Browse and find
  2. Research and decide
  3. Transact / Complete purchase

Browse and find

Users who are in this state of the journey are typically looking for the product that they need/want. Pages such as Homepage, Category Landing pages, and Category Listing Pages (or product listing pages) usually support this journey for the users.

Your aim here is to optimise the customer experience by making it easy for the users to find the right product for themselves. Whether to highlight relevant products via the homepage, entice them with the latest collections or give them an easy way to find the category of products they are looking for.

The success or the primary metric potentially at this stage of the user journey is to take the user to one or more product details pages.

Research and decide

Once the user finds a product that they are looking for, they want to find out more about the product. This includes materials, fit, reviews and pricing. The ultimate goal at this stage of the user journey is to add the product to the shopping basket, with a secondary metric of adding it to the wish list.

Optimisation ideas here centre on providing the right information to the user in the right manner. Additionally, for a fashion retail website, you might want to help the user by offering alternatives or a “wear it with” section so they can “shop the look”. You can also consider social-proof messaging – such as how many visitors purchased this product, how many have added it to the basket, or how many are looking at it at the same time – to show the popularity of the product.

The primary metric here is for users to satisfy their need to find all the necessary information and add the product to their basket.

Transact / Complete purchase

The final step of the user journey is to complete their purchase. They have done their research, they have added the products they like into their basket, and now this is where they need to go through the process of the transaction. Yes – here the primary metric is indeed the sales. Your optimisation ideas on the basket and checkout funnel are to focus on getting the user to the ‘Thank you’ page. That is the main aim and you need to continuously improve the experience to move users from the basket page to complete the transaction.

Multiple tests in different sections

Now let’s bring all three sections together – in essence, you now have three site sections that you can optimise independently, as the aim of optimising each section is different from the others. When you are running a test on the Category Listing or Product Listing Pages, your aim is to get the user to one or more product details pages. Similarly, when running a test on the product details page, your aim is to get the user to add the product to their basket. And finally, when you are improving the checkout funnel, your main aim is for visitors to complete the purchase.

What about other types of websites?

You can use the same theory on pretty much any transactional site. The sections will, of course, be different – for example, if you are working with a travel site, your ‘Browse and find’ section is potentially the holiday or flight search results page. The primary metric for optimisation here would be to send users to a holiday page. Once users are on an individual holiday page, the primary aim is to get them to start the booking process.

In a similar manner, you can divide one website into multiple sub-sites with a clear primary objective to be achieved from those sub-sites. This way, you can simultaneously optimise the customer experience for each of the subsections in parallel.

How do you analyse the test results of SDO?

You need to make sure that your primary metric for each site section has been accurately defined. If you find a winner (or loser) based on the primary metric without negatively impacting the final business goals, you can make the decision based on the result.

For any organisation, no matter what the final business goal is, you can always break things down into smaller goals. The SDO simply provides a way to optimise the smaller goals with the aim to ultimately optimise the final business goal.

Taking things further with SDO

What if you would like to launch multiple tests at the same time within the same section? You can still do that with the following two options:

  1. Mutually exclude traffic within that site section
  2. Create variations – one for each hypothesis and a final one with the combination of both hypotheses together

This way, you will be able to make sure that you are getting clear results for your tests running within the same section.
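
One simple way to implement the first option is to bucket each visitor deterministically, so a visitor only ever sees one of the section’s concurrent tests. The sketch below is tool-agnostic and illustrative – the hash function, test names and even split are assumptions, not part of any particular testing platform:

```javascript
// Illustrative sketch: deterministically bucket visitors so two concurrent
// tests in the same site section never share a visitor.
// The hash function, test names and even split are assumptions for the example.

// djb2-style string hash -> non-negative 32-bit integer.
function hashVisitorId(visitorId) {
  let hash = 5381;
  for (let i = 0; i < visitorId.length; i++) {
    hash = ((hash << 5) + hash + visitorId.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// Assign each visitor to exactly one of the section's concurrent tests.
// The same visitor id always maps to the same test, keeping the
// experiments mutually exclusive across page loads.
function assignTest(visitorId, testNames) {
  return testNames[hashVisitorId(visitorId) % testNames.length];
}

// Two hypothetical tests running on the product details page:
const tests = ['pdp-social-proof', 'pdp-shop-the-look'];
const assigned = assignTest('visitor-123', tests);
console.log(assigned); // always the same test for this visitor
```

In practice, most testing tools offer audience conditions or mutually exclusive groups that do this bucketing for you.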

In summary:

SDO provides a way to run multiple experiments at the same time with the aim of continuously improving the customer experience of individual site sections. We have applied this method in a couple of businesses with over a million visitors per month. This increased test velocity by 10X, with a significant revenue impact on the overall business compared to running just one test at a time.

SDO allows the organisation to implement an experimentation culture by dividing one website into multiple sites. It engages product owners from different sections to get involved and independently improve their site areas. This method of experimentation utilises resources more efficiently to get the best out of the CRO program.


Best practices to implement the snippet of AB testing tools


In order to make any of these tools (AB Tasty, Optimizely, VWO, Convert etc.) work with your site, you need to insert a snippet (it may have a different name in different tools, such as tag, SmartCode etc.).

Every tool works hard to ensure that the snippet delivers the best possible experience for visitors to your site, but a few best practices can help ensure optimal site performance. Since performance issues and page flickering are common concerns, we have created this best-practice guidance for installing the snippet.

The guidance below can improve your testing performance:

 

Snippet placement:

Place the code in the <head> section of your pages so changes are displayed more quickly. Otherwise, a flickering effect may occur: your visitors may see the original page for a fraction of a second before they see the modified page. By calling the snippet as high in the source code of your page as possible, the tool’s script can apply the changes before the content is displayed.

  • Place the snippet as the first script tag in the head of the page, but after all charset declarations, meta tags, and CSS inclusions.

Note: If jQuery is already included natively on your site, place the snippet directly after the jQuery inclusion.
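
Putting the placement rules together, a typical <head> might look like the sketch below (the file paths and snippet URL are placeholders – use whatever your tool provides):

```html
<head>
  <!-- Charset and meta tags come first -->
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- CSS inclusions come before the snippet -->
  <link rel="stylesheet" href="/css/main.css">
  <!-- If jQuery is included natively, it stays above the snippet -->
  <script src="/js/jquery.min.js"></script>
  <!-- The testing tool's snippet: the first script tag after jQuery -->
  <script src="https://tool.example.com/snippet/ACCOUNT_ID.js"></script>
</head>
```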

 

Snippet load:

You should not install the snippet through tag managers such as Google Tag Manager. By default, tag managers load the snippet code asynchronously, which may cause page flicker on the test pages. Using tag managers may also delay loading of the snippet code, which can cause time-out issues and prevent visitors from becoming part of the test.

  • Include the snippet directly in the HTML <head> tag. Don’t deliver the snippet via any tag managers or inject it via client-side scripting.

 

Snippet type:

The snippet generally comes in two versions: synchronous and asynchronous. Installing the snippet synchronously helps prevent page flickering. Asynchronous loading eliminates any delay in page load times but greatly increases the chances of flashing. You can learn more about synchronous and asynchronous snippet loading, including the strengths and drawbacks of both load types.

Most tools recommend using the synchronous snippet. If the snippet is placed in your site’s <head> tag, you can be sure that your modifications will be applied immediately, before the site loads. This avoids the flickering effect and offers the best user experience.

  • Use the synchronous snippet

Note: A few tools, such as VWO, recommend using the asynchronous snippet. Before choosing a synchronous or asynchronous snippet, review the advantages and disadvantages in that specific tool’s documentation.
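
The practical difference between the two is just the async attribute on the script tag (the URL is a placeholder):

```html
<!-- Synchronous: parsing pauses until the snippet has run, preventing flicker -->
<script src="https://tool.example.com/snippet.js"></script>

<!-- Asynchronous: parsing continues while the snippet downloads, risking flicker -->
<script src="https://tool.example.com/snippet.js" async></script>
```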

 

Use preconnect and preload:

Add preconnect and preload tags at the top of the head for faster synchronous loading. We recommend using preconnect to open a connection to your tool’s server and event endpoint ahead of time.

  • Use preconnect and preload tags

In the example below, replace “http://dev.visualwebsiteoptimizer.com/lib/2965490.js” with your snippet URL and “//dev.visualwebsiteoptimizer.com” with your tool’s server.
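
A sketch of that example, using the VWO addresses referenced above:

```html
<head>
  <!-- Open a connection to the tool's server ahead of time -->
  <link rel="preconnect" href="//dev.visualwebsiteoptimizer.com">
  <!-- Start fetching the snippet early, without blocking parsing -->
  <link rel="preload" href="http://dev.visualwebsiteoptimizer.com/lib/2965490.js" as="script">
  <!-- Load the snippet itself synchronously -->
  <script src="http://dev.visualwebsiteoptimizer.com/lib/2965490.js"></script>
  <!-- ... rest of the head ... -->
</head>
```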

 

You can find the server address to preconnect to by asking the customer support of your specific tool. Below are a few server addresses for specific tools that might help you.

Optimizely: //logx.optimizely.com

VWO: //dev.visualwebsiteoptimizer.com

AB Tasty: //ariane.abtasty.com/

Convert: //logs.convertexperiments.com

 

Minimize the number of pages and events:

In a few tools, all pages and events are included in the basic snippet, which increases the size of the snippet. To keep the overall snippet size small, avoid creating pages where you don’t expect to run experiments, and archive any unused pages, events and experiments.

  • Minimize the number of pages, events and experiments.

 

Use analytics:

Use an analytics tool to identify traffic that represents your visitors so you can optimize your site for the majority of people who visit. For example, if you find that most of your traffic is from mobile devices, you can target your experiments for mobile users.

  • Use analytics to target your testing

 

Best practice documentation:

Every tool has its own documentation for implementing the snippet, where it covers best-practice guidelines for improving site performance and the strengths and drawbacks of various implementation types. Don’t forget to have a look, because it might contain a few more recommendations. Read the documentation carefully and implement the snippet in a way that fulfils your requirements.

  • Read tool-specific documentation.

Summary:

  • Place the snippet as the first script tag in the head of the page, but after all charset declarations, meta tags, and CSS inclusions.
  • Include the snippet directly in the HTML <head> tag. Don’t deliver the snippet via any tag managers or inject it via client-side scripting.
  • Use the synchronous snippet.
  • Use preconnect and preload tags.
  • Minimize the number of pages, events and experiments.
  • Use analytics to target your testing.
  • Read tool-specific documentation.

Troubleshooting and Goals QA in Optimizely: Part 1


AB test development within Optimizely is delightful and seamless. Front-end testing development has some similarities with conventional front-end development work. However, the most important things are the goals or metrics that decide the result of the test. We need to properly QA and troubleshoot each goal to check that it works as expected; otherwise, the whole development of the testing work would be meaningless.

We work hard to make a test work properly, but sometimes technology doesn’t behave the way you expect it to. In this article, I have listed five options that Optimizely provides to troubleshoot your experiment or campaign.

Tools for QA:

  • Preview tool: helps you check an experiment’s or campaign’s functionality and visual changes for different audiences, and see details of the events that fire.
  • JavaScript API: helps you verify what live experiments and campaigns are running on a page and which variation you’re bucketed into.
  • Network console: helps you verify whether events in a live experiment or campaign are firing correctly.
  • Optimizely’s cookies and localStorage: helps you to uniquely identify visitors, track their actions, and deliver consistent experiences across page loads.
  • Optimizely log: helps you diagnose more difficult issues in a live experiment or campaign. It tells you about the activated experiment or campaign on the page, qualified audience, applied changes on a page and even events that are fired on each action.

Among all of them, I would say the Optimizely log is your best friend. This log contains all the information developers need for troubleshooting experiments, segments, audiences, goals and code execution on page load.

I would like to walk through the Optimizely log with a few examples. If it does not serve your requirements, you can use the other options listed above.

Optimizely log:

The Optimizely log allows you to “read Optimizely’s mind” by printing the execution of targeting and activation decisions, variation changes, events, and third-party integrations on a page in your browser’s console.

Use the Optimizely log to investigate all kinds of issues, even those you can’t easily diagnose. For goals QA, it is the best weapon in Optimizely.

The log can help you to check:

  • Is an experiment or campaign loading correctly?
  • Is the user qualified for an audience condition?
  • Are the changes you made applied on the page?
  • Is the page activated on the URL (or a specific condition)?
  • Is a click/custom goal fired?

You can check all of this with the Optimizely log. Here, I will show examples for page activation (pageview goals) and click/custom goals.

You can access the log in two ways:

  1. With a query parameter: add this query parameter to the URL and reload:
optimizely_log=info
  2. With the JavaScript API: paste this into the browser console and hit enter:
window.optimizely.push('log');

This will then print the log output in your browser console.

For pageview, click and custom goals, filter the console output with “Optly / Track” – this shows click, pageview and custom goals together.

For custom segments/attributes, filter the console output with “Optly / API” to see custom segments.

Remember: custom segments may only fire once per session, so you might need to check in a new private window each time to confirm that the custom segments are working.

Reference: if you are specifically troubleshooting audiences, pages, campaigns, traffic allocation and bucketing, variation code, or click/custom goals, see Optimizely’s troubleshooting documentation.